The role of AI in healthcare


As part of our ongoing discussion series, Under the Same Sky, Abner Mason, founder and CEO of SameSky Health, met with Dr. Leo Anthony Celi, Clinical Research Director and Principal Research Scientist at the Laboratory for Computational Physiology at MIT and a practicing intensivist at Beth Israel Deaconess Medical Center in Boston. Dr. Celi’s work focuses on making clinical research more inclusive at scale through open-access data and software. His particular interests are limited-resource settings, identifying biases in data so they are not encoded into models and algorithms, and redesigning research around the principles of team science and the hive learning strategy. The conversation centered on the role of artificial intelligence (AI) in healthcare, and the opportunities and challenges Dr. Celi sees as the healthcare industry moves full speed ahead with AI investments and initiatives.

According to Dr. Celi, data and learning need to be at the frontline of healthcare because they offer our best chance of reducing the disparities we see today. However, simply pushing new technology isn’t going to work; the people who are on the ground must be the ones identifying the problems that need to be addressed. They must also be the ones designing the solutions, and engineering those solutions into products. View the recording.

“I think the promise of AI is the promise of every health product, which is to improve health outcomes. But I think that there is a tagline: To improve outcomes equitably so that everyone benefits, and everyone is given the opportunity to reach the best health status given the conditions and context. That’s what we’re excited about when it comes to the application of data science and AI in healthcare.”
— Dr. Leo Anthony Celi

Unfortunately, the data we have now isn’t optimal for designing the equitable algorithms we need to deploy in clinical practice. The current body of digital content that large language models are trained on is not representative of the world, and the data generation process reflects the structural inequities we see in society today: the data come from research performed in well-funded institutions in wealthy countries. Dr. Celi’s team learned that current models are amplifying the stereotypes we already have, and if we aren’t careful, all these biases and stereotypes will be exponentially applied to decision making.

Last year, Dr. Celi published a paper with colleagues on AI’s ability to detect a person’s race from medical images, even though those images contain no features that would be obvious to human experts interpreting them. Yet the AI was able to recognize a patient’s race and ethnicity from those medical images alone. This is concerning because computers are quickly figuring out how to identify these invisible features when we train them on human data, and we know that as humans we have implicit biases. Most of the time it’s not intentional, but there are stereotypes in our brains because of our experiences, and those cloud the way we make decisions. If computers can tell whether you are a man or woman, binary or non-binary, Black or white, or rich or poor, based solely on how humans have built the algorithms, then chances are they’re going to use those features to make decisions as well.

So, what can we do to ensure that the healthcare industry gets the technology right, doesn’t introduce bias, and doesn’t build on top of existing, potentially flawed data? Dr. Celi mentioned three important lessons that can help:

  1. Competition kills collaboration, which we need to address complex problems. When you only work with people who look like you and have the same training as you, you are unlikely to make any headway because you will share the same blind spots. You see the problem from the same angle, which is typically not a very comprehensive view of what needs to be done. The people who created the problem in the first place, and the people who contribute to perpetuating it, can very seldom come up with a solution. This is why we need to push for diversity in all the seats around the table.

  2. Instead of focusing on the what of AI, we need to focus on the who and the how. 

    • Who is developing and deploying AI, who gets to sit at the table, who sets the research agenda, and who sets the trajectory of AI? If we do not listen to the voices of the people most likely to be impacted by these technologies, there is a greater chance we won’t be able to predict the downstream consequences.

    • The “how” is about promoting data sharing, crowdsourcing, and the curation and analysis of the data. We need to overhaul the current approach to research, which has been too complicit. A focus on data sharing, transparency, and accountability is necessary to deliver on the promise, the opportunities, and the hype of these technologies.

  3. Regulation and incentives are going to play a huge role. The Centers for Medicare and Medicaid Services (CMS) is pushing a stronger health equity agenda: by January 2024, reimbursements will be tied to the health equity performance of health systems. If CMS adopts this approach, private payers are likely to follow. The Joint Commission is also going to add health equity to its report cards. Things are not going to change unless there are financial incentives.

“If we’re just going to predict an outcome based on what we’re seeing in the real world, then we’re doomed; we’re stuck with all the inequities we’re seeing. This is more than collecting gobs and gobs of data from every health system, and then putting them into some computer and learning the association between variables and some outcome or complication. […] If we don’t do something different, then the real-world data becomes the real world of the future. And that, to us, is the biggest concern. […] We need to be thoughtful on how we deploy these new capabilities, so we don’t reinforce these biases in technology.”
— Dr. Leo Anthony Celi

SameSky Health

This post was written by the SameSky Health marketing and communications team.
