At Erasmus MC, dedicated efforts are being made to develop ethical frameworks for AI in healthcare. The initiative ‘Responsible and Ethical AI in Healthcare Lab’ (REAHL) addresses the ethical challenges that arise from artificial intelligence in healthcare settings. We spoke about the initiative with Internist-Intensivist Michel van Genderen.
In May 2023, tech partner SAS announced that it would join the lab alongside Erasmus MC and TU Delft. TU Delft is involved because of its expertise in technical implementations. According to Van Genderen, Erasmus MC had already been working on the responsible use of AI in healthcare before 2023. Two years later, significant progress has been made in research into AI adoption and the challenges faced by leading European hospitals and healthcare systems.
Shortly before our conversation, Van Genderen gave a keynote to a packed audience at SAS Innovate 2025, clearly stating what REAHL is all about: "As an ICU physician, there is nothing more fulfilling than saving the life of a critically ill patient and seeing them return to society. However, a major challenge in my work is that I must make high-impact decisions in a high-pressure environment," said Van Genderen.
Increasing pressure on healthcare
In recent years, the pressure on healthcare providers has only increased, due to staffing shortages on the one hand and rising demand for care on the other. Van Genderen emphasizes that investing in data-driven technologies and AI-based solutions can help address these challenges. "But let me be clear: I will only use AI when I am confident that it is safe, explainable, and reliable. In my work, a decision can mean the difference between life and death."
The Dutch lab has therefore committed to the ethical principles established by the World Health Organization. Together with the WHO, TU Delft, and SAS, REAHL is now working on translating abstract ethical values into concrete, practically applicable guidelines.
TU Delft's role, through its technical expertise, is especially worth highlighting here. Earlier this year, the university received WHO accreditation for its Digital Ethics Centre, which now serves as a WHO Collaborating Centre in the field of Ethics and Governance of AI in Healthcare. In that capacity, the university advises the World Health Organization on ethical aspects and regulations related to AI in healthcare.
From theory to practice
Within REAHL, ModelOps is a key technical discipline. It enables the lab to continuously monitor and evaluate models in production, so that their performance can be assessed over time and models remain explainable, transparent, and reliable.
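The article does not describe REAHL's tooling in detail, but the idea of continuous monitoring can be illustrated with a minimal sketch. The class name, threshold, and metric below are hypothetical assumptions, not part of REAHL's actual stack: each production model's latest evaluation metric is recorded, and any model that falls below an agreed performance threshold is flagged.

```python
from dataclasses import dataclass, field

# Hypothetical ModelOps-style monitor (illustrative only): record an
# evaluation metric (e.g. AUROC) per production model and flag models
# whose most recent evaluation fell below an agreed threshold.
@dataclass
class ModelMonitor:
    threshold: float = 0.80
    history: dict = field(default_factory=dict)

    def record(self, model_id: str, metric: float) -> bool:
        """Store the latest metric; return True if the model still passes."""
        self.history.setdefault(model_id, []).append(metric)
        return metric >= self.threshold

    def failing_models(self) -> list:
        """Models whose most recent evaluation is below the threshold."""
        return [m for m, h in self.history.items() if h[-1] < self.threshold]
```

In practice a check like this would run on a schedule against fresh labeled data, so that a degrading model is surfaced to clinicians and engineers rather than silently staying in use.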
The question of when an organization is ready for AI is crucial. That’s why REAHL has launched a transatlantic collaboration initiative between hospitals in the U.S. and Europe. "There are clear regional differences, but we need to understand that the global challenges in developing and implementing responsible AI at the patient bedside are the same," says Van Genderen.
Because of its high level of expertise, Erasmus MC often receives the most critically ill patients. This brings challenges, for example in determining treatment strategies. Van Genderen saw opportunities here to change the way of working, which meant staff had to become familiar with AI-based solutions. "Together with Microsoft, SAS, and Notilyze, we developed a real-time dashboard for mechanical ventilation performance that provides us with continuous insights," says the physician. "The number of patients receiving optimal mechanical ventilation settings doubled when both nurses and doctors tailored their treatments based on the data input."
Five key recommendations for AI implementation
Five major recommendations have now been developed to assess a hospital's AI maturity level. These relate to Model Inventory, Real-world Data Testing, Governance, Bias Assessment, and Adoption & Scalability. Van Genderen elaborates on the AI Model Inventory: "Every organization, including hospitals, must always know which models are currently in production—but perhaps even more importantly: what is their intended use?"
Together with SAS and the WHO, REAHL has published work emphasizing the urgency of establishing such a register.
“This is crucial because such a register helps us track model implementation, validate transparency, and share best practices. In our field, this is a given—it’s essential for building trust and accountability. Without trust and understanding, clinicians will never use it at the bedside when making clinical decisions.”
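A model register of the kind described could be as simple as a structured record per model. The fields below are assumptions chosen to match the points raised in the interview (which models are in production and what their intended use is), not the actual schema used by REAHL or the WHO:

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical register entry (illustrative schema, not REAHL's actual one):
# the minimum a hospital would want to track per deployed model.
@dataclass
class ModelRegistryEntry:
    model_id: str
    intended_use: str
    owner: str
    status: str  # e.g. "production", "validation", "retired"

def to_register_record(entry: ModelRegistryEntry) -> str:
    """Serialize an entry as JSON so it can be shared and audited."""
    return json.dumps(asdict(entry))
```

Keeping such records in a shared, auditable format is what makes it possible to track implementations, validate transparency, and exchange best practices across hospitals.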
Van Genderen also elaborates on Bias Assessment: “Every doctor always wants to know which models are running and whether the current model works in this specific situation for this specific patient at this moment,” he says.
The key to success
Previously, Erasmus MC developed a model intended to help determine whether it was safe to discharge a patient after a major oncological surgery. “Luckily for us, this took place in a highly secure research environment,” Van Genderen explains. “During this phase, a background data table shifted due to a hospital update, and model performance dropped. Yet, it continued to confidently make recommendations.”
The hospital was able to catch this in the research setting. Had the model been rolled out more broadly in practice without model management, the degradation likely would have gone unnoticed. In healthcare, the consequences of such silent failures are unacceptable. "It's not about just doing AI—it's about doing it responsibly," Van Genderen concludes.
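The failure mode described, a shifted background data table silently degrading a still-confident model, is a classic case of data drift. A minimal sketch of the kind of guardrail that catches it is shown below; the function name and the mean-shift heuristic are illustrative assumptions, and real deployments would use more robust statistical tests:

```python
import statistics

# Illustrative data-drift check (not Erasmus MC's actual mechanism):
# flag drift when a live feature's mean moves more than `max_shift`
# reference standard deviations away from the training-time mean.
def drift_alert(reference: list, live: list, max_shift: float = 2.0) -> bool:
    """Return True when the live data has drifted from the reference."""
    ref_mean = statistics.mean(reference)
    ref_sd = statistics.stdev(reference)
    shift = abs(statistics.mean(live) - ref_mean) / ref_sd
    return shift > max_shift
```

Wired into a model-management pipeline, a positive alert would pause the model's recommendations and notify the team, exactly the safety net that was missing in the scenario Van Genderen describes.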
“But to enable responsible use of AI, you need to have evaluation and monitoring in place.”
The message from REAHL is clear in that regard: Responsible AI in healthcare starts with responsible developers, and success depends on multidisciplinary collaboration that brings together ethics, technology, and clinical expertise. If done right, AI has the potential to advance healthcare far beyond Dutch borders.
SOURCE: https://www.ictmagazine.be/blogs/erasmus-mc-verwezenlijkt-verantwoorde-inzet-van-ai-in-de-zorg/
Translated by Notilyze