View and download the white paper here.
Join us on March 26 for our webinar on the report. Register here.
As the potential impact of generative AI (GenAI) on health in low- and middle-income countries (LMICs) continues to be explored, two key questions emerge: Where is GenAI currently being used most effectively, and how can its full potential be unlocked, both for behavior change and for broader healthcare applications?
A newly released white paper, Generative AI for Health in LMICs, sheds light on these questions, offering insights from a comprehensive review led by the Stanford Center for Digital Health (CDH), with support from Advancing Health Online (AHO) and with the Bay Area Global Health Alliance as lead dissemination partner.
The research, conducted between August and December 2024, draws on findings from two roundtable discussions, in-depth interviews with dozens of experts, a survey of more than 100 respondents, and an analysis of 14 GenAI accelerator programs that have collectively supported more than 250 health projects worldwide.
While the potential of GenAI in healthcare is significant, experts caution that its risks must be carefully weighed.
“The big question mark that remains is, are the risks associated with using a GenAI-based tool outweighed by the benefits of what you can now achieve?” said Bilal Mateen, chief AI officer at PATH.
Despite these concerns, many see GenAI as a transformative tool, particularly in improving the personalization and effectiveness of health-related communication.
“The ability of GenAI to be much more nuanced and talk much more directly to the user’s specific question is really exciting … I think that’s going to be a real step change,” said Isabelle Amazon-Brown of The MERL Tech Initiative.
The report highlights five key areas for realizing GenAI’s full potential in LMICs:
- Share learnings. Stakeholders want to learn more from others’ experiences; this is especially important given how quickly technology and applications are evolving. Specific needs included: (a) an understanding of the types of tasks large language models (LLMs) are well suited to, their weaknesses, and strategies to address them; and (b) summaries of specific successes, with concrete case studies reporting on comparable outcome metrics.
- Focus on actionable measurement. Stakeholders wanted better ways of measuring benefits, costs, and risks that provide rigorous but also timely data. For example, funders cannot wait three years for the results of a randomized controlled trial to guide annual investment decisions, yet scientifically valid ways to measure success are still needed to inform implementation decisions in the interim. Establishing a clear evidence base will also be essential for supporting government decisions to implement successful applications at a national scale.
- Improve language and localization. The quality of models varies by language, by medium (with voice particularly important for low-literacy settings), and by use case (e.g., health-specific contexts). The fact that large language models are not trained on or fluent in local languages was the most commonly selected barrier to using GenAI in healthcare settings in LMICs in our quantitative survey. Identifying and closing these quality gaps is highlighted as a key next step.
- Improve technical capacity and shared infrastructure. Experts noted that the technical capabilities of GenAI implementers varied dramatically. Similarly, some funders and health system leaders identified gaps in their own knowledge that, if addressed, would allow them to make more impactful funding and procurement decisions. They also noted that some technical barriers (e.g., language models) would likely be better addressed centrally rather than in a fragmented way.
- Improve digital and basic health infrastructure. Throughout the research, the risk of inadvertently perpetuating the digital divide emerged as a key concern. No matter how advanced AI models and datasets become, their potential to effect behavior change is wasted if the people who need them most lack the necessary digital or physical infrastructure (e.g., a stable internet connection or access to the healthcare facilities recommended by GenAI chatbots).
Capturing and sharing lessons learned will be critical to ensuring GenAI’s success, according to Topaz Mukulu, strategy analyst at the Patrick J. McGovern Foundation. “Success also means generating the evidence and learning that can inform future efforts, whether it’s for your organization or just the field at large,” Mukulu stated.
“The report serves as a call to action for researchers, funders, and policymakers to collaborate in shaping the future of GenAI in global health,” said lead principal investigator Dr. Eleni Linos.
Register here to join us virtually on Wednesday, March 26 from 8:30-10:00 AM PT as we:
- Review the five key findings with principal investigator Dr. Eleni Linos,
- Hear from implementers in the field, and
- Invite perspectives of funders looking at how to fill the gaps