As the use of artificial intelligence (AI) in healthcare delivery accelerates, the imperative to prioritize patient safety cannot be overstated. Integrating AI into clinical settings presents a revolutionary opportunity to enhance the quality of care, patient safety, and access to healthcare while managing costs. However, it also introduces complex challenges that require rigorous scrutiny to ensure these technologies do not inadvertently perpetuate biases or compromise patient well-being.
In an interview with JAMA Editor-in-Chief Kirsten Bibbins-Domingo, Marzyeh Ghassemi, Ph.D., an assistant professor at MIT, sheds light on the critical considerations for developing and deploying AI in healthcare. Ghassemi's work at MIT, which focuses on creating "healthy" machine learning (ML) models that are robust, private, and fair, underscores the importance of designing AI applications that function effectively across diverse settings and populations. This approach is vital to mitigating the risks associated with AI-generated clinical advice and to ensuring that such advice does not harm patients.
This responsibility highlights ethical machine learning as a critical concept that technologists must consider during product development. This ethical framework involves recognizing biases in AI models and striving to mitigate them, ensuring that models perform equitably across different groups. Ghassemi points out that biases in problem selection, data collection, and algorithm development can lead to disparities in AI's effectiveness among diverse populations.
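To make the idea of "performing equitably across different groups" concrete, a common first step is a subgroup audit: measuring a model's performance separately for each demographic group and comparing the results. The sketch below is a minimal illustration of that practice, not Ghassemi's own method; the data, features, and group labels are entirely synthetic and chosen only to show the mechanics.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Synthetic stand-ins for clinical data (hypothetical, for illustration only).
rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 5))                          # synthetic clinical features
group = rng.choice(["A", "B"], size=n)               # synthetic demographic attribute
y = (X[:, 0] + rng.normal(size=n) > 0).astype(int)   # synthetic binary outcome

# Fit a simple risk model on the pooled data.
model = LogisticRegression().fit(X, y)
scores = model.predict_proba(X)[:, 1]

# Subgroup audit: report discrimination (AUC) separately per group.
# Large gaps between groups signal a disparity worth investigating.
for g in np.unique(group):
    mask = group == g
    auc = roc_auc_score(y[mask], scores[mask])
    print(f"group {g}: AUC = {auc:.3f}")
```

An audit like this only surfaces disparities; deciding why they arise (problem selection, data collection, or algorithm design) and how to mitigate them requires the kind of deliberate, ethics-aware development process Ghassemi describes.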