Showcase from the AI in the Global South seminar: Project #1 - Improving Digital Health Literacy using Responsible AI
Problem & Context
"Ali ji is a 45-year-old farmer living in rural Pakistan. He's never been to school, but he works hard every day to provide for his family. Two weeks ago, after suffering from persistent chest pain, he traveled 5 kilometers to the nearest hospital. The doctor handed him a report filled with numbers and medical terms like LDL, triglycerides, HDL, and told him he has high cholesterol. Ali ji doesn't know what any of this means. He can't read the report, can't pronounce the words, and doesn't know what to do next. All he feels is fear."
Millions of people in the Global South struggle to understand their medical reports. Whether it's a blood test, prescription, or discharge summary, medical jargon makes health information confusing.
For example, in rural Pakistan, nearly 40% of people can't read or write fluently. Even if health information is available online, it's often designed for literate, urban audiences. The result? Missed treatments, medication errors, and unnecessary health risks.
According to WHO estimates, over 730 million people in low-income countries face barriers to interpreting basic health information, directly impacting treatment adherence and outcomes. Addressing this aligns with UN SDG 3: Good Health and Well-Being, by improving health literacy and empowering communities to act on medical guidance.
MedExplain is here to change that. Using AI, it turns complicated medical language into simple, clear explanations and even reads them aloud in local languages, matched to each person's level of comprehension.
Understanding the Context
Reports from the WHO and UNICEF show a common pattern in the Global South: poor connectivity, low literacy, and skepticism toward digital tools. Field studies in rural Pakistan, Kenya, and Nigeria reveal that people often rely on family or informal practitioners to interpret medical reports, sometimes leading to confusion or mistakes. Healthcare workers note that audio explanations, local-dialect support, and simple visuals boost understanding and engagement. Since many communities have spotty internet, SMS and WhatsApp remain the most reliable ways to communicate. Any AI solution must therefore be accurate, medically validated, human-centered, and designed with trust, empathy, and local cultural sensitivities in mind.
Why is this important?
Understanding cultural sensitivities is crucial when designing digital health solutions because healthcare is deeply personal and shaped by local beliefs, languages, and social norms. What feels trustworthy or respectful in one culture might seem intrusive or confusing in another. By respecting cultural values and communication styles, AI tools can build trust, improve adoption, and ensure that health information is not only understood but also acted upon. Ultimately, cultural awareness transforms technology from a foreign intervention into a trusted community ally.
To understand these realities, our team reviewed WHO and UNICEF field assessments, interviewed two rural residents (one in India and one in Pakistan), and analyzed case studies on mobile health adoption in India, Kenya, and Nigeria. These insights directly shaped our feature priorities and design constraints.
Literature Review
1. The Critical Gap and Foundational Barriers in the Global South
Improving public health in the Global South is not just about making medical services available; it's about making health information understandable. Millions struggle with digital health literacy, meaning they can't fully access, read, or act on medical advice [1]. This leads to serious consequences: on average, 25% of patients fail to follow prescriptions or lifestyle guidance [3, 11]. Tools that simplify medical instructions and adapt to each user's literacy level are essential [2]. Yet previous digital health efforts have often stumbled due to high costs, poor transport, and unreliable internet [4]. Any solution must be affordable, low-bandwidth, and designed for real-world rural conditions.
2. Building Trust and Personalization
For AI to be widely adopted in communities wary of technology, it must integrate human values like compassion and empathy [7]. Design cues, such as a female-doctor persona, significantly boost trust, especially for users with low health literacy [8]. LLMs can simplify complex medical documents, adapt them to the user's reading level, and provide tailored, actionable, hyper-personalized advice, like dietary recommendations based on biomarkers [5, 9]. Clinical studies show this approach can reduce IBS symptom severity by 39% and improve diabetes outcomes [9].
3. Safety and the Verification Loop
While LLMs can make medical information clearer and more accessible, they must be used cautiously due to risks like hallucinations, bias, and misinformation; accuracy is therefore critical [5]. The AI must be medically validated and default to "I don't know" when unsure [2]. To prevent errors or omissions, uncertain queries should be routed to a medical team for review via a Mandatory Verification Loop. This system ensures continuous auditing, mitigates bias, and over time builds a medically verified, locally relevant knowledge base, ensuring safety, ethical compliance, and trustworthiness [5, 6, 10].
While prior research demonstrates strong potential for LLM-based medical explanation, few studies address low-bandwidth access or integrate a continuous human-in-the-loop verification system. Our contribution lies in combining these gaps to create a trustworthy digital health tool for underserved populations.
Competitive/Peer Analysis
Many existing AI-powered platforms already use AI to simplify medical information for everyday users. Docus AI, for example, helps users understand complex medical reports: users can upload lab reports, and the app translates the results into plain, human-friendly language, highlighting abnormal values and risks.
Another such app, Lab Informer, generates AI-driven insights and trend analyses from uploaded test reports. AI DiagMe interprets blood tests and uses ML to flag potential health risks, while mySmartblood analyzes biomarker patterns to estimate disease probability and biological age. Lastly, apps like LabSimplify extract information from medical report images and turn it into structured, easy-to-read summaries.
Unfortunately, these tools are designed primarily for literate, digitally connected users in the Global North. They assume users have a reliable, fast internet connection and English proficiency, and offer no support for local languages or audio. To overcome these barriers, MedExplain is designed as a multilingual, voice-enabled, low-bandwidth platform.
Product Plan and Adaptation
MedExplain differs from these products because it is multilingual, voice-enabled, and built for low bandwidth. It simplifies complex medical test reports into plain, understandable language and, with its audio features, reads them aloud in the local language for users who cannot read. Moreover, its expert verification loop ensures medical accuracy and safety.
To ensure privacy and responsible AI use, MedExplain anonymizes all user inputs, stores no personal identifiers, and processes sensitive data locally when possible. It also includes refusal mechanisms: if a question requires a diagnosis, for example, or otherwise exceeds its safety scope, the system responds with "I don't know" and suggests a follow-up with a doctor for a verified diagnosis. These uncertain queries are then referred to certified medical professionals, and through majority voting, validated answers are iteratively added to the knowledge bank.
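The refusal-and-verification flow described above can be sketched in code. This is a minimal illustration, not the actual implementation: the confidence threshold, the `KnowledgeBank` structure, and the two-expert quorum rule are all assumptions made for the example.

```python
from collections import Counter
from dataclasses import dataclass, field

REFUSAL = "I don't know. Please follow up with a doctor for a verified diagnosis."
CONFIDENCE_THRESHOLD = 0.8  # illustrative cutoff; would be tuned clinically


@dataclass
class KnowledgeBank:
    """Medically verified answers accumulated over time."""
    entries: dict = field(default_factory=dict)

    def add(self, question: str, answer: str) -> None:
        self.entries[question] = answer


def answer_query(question: str, draft_answer: str, confidence: float,
                 bank: KnowledgeBank, review_queue: list) -> str:
    """Return a verified or confident answer, or refuse and escalate."""
    if question in bank.entries:            # already validated by clinicians
        return bank.entries[question]
    if confidence < CONFIDENCE_THRESHOLD:   # uncertain -> refuse + escalate
        review_queue.append(question)
        return REFUSAL
    return draft_answer


def validate_by_majority(question: str, expert_answers: list[str],
                         bank: KnowledgeBank, quorum: int = 2):
    """Add an answer to the bank only if a majority of experts agree."""
    answer, votes = Counter(expert_answers).most_common(1)[0]
    if votes >= quorum and votes > len(expert_answers) / 2:
        bank.add(question, answer)
        return answer
    return None
```

Once an answer clears the majority vote, subsequent identical queries are served from the knowledge bank instead of the model, which is how the system would become safer over time.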
It also offers offline and SMS service, so people living in areas where connectivity is an issue can still use the application. This makes MedExplain tailored to the needs of underserved communities in the Global South. In summary, MedExplain will:
- Convert complex medical reports into easy-to-understand explanations (8th-grade reading level or below).
- Provide text-to-speech output in local languages for users without any formal education.
- Operate via WhatsApp, SMS, or offline caching, ensuring functionality in low-connectivity regions.
- Include a verification loop where uncertain or critical responses are reviewed by certified medical professionals.
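The 8th-grade target in the first point above can be checked automatically before a response is sent. Below is a rough sketch using the standard Flesch-Kincaid grade formula with a crude syllable heuristic; the formula applies only to English, so the threshold and heuristic here are illustrative assumptions, and scoring local-language output would need language-specific tooling.

```python
import re


def count_syllables(word: str) -> int:
    """Rough heuristic: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))


def fk_grade(text: str) -> float:
    """Flesch-Kincaid grade level (standard formula)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words) - 15.59)


def needs_simplification(text: str, max_grade: float = 8.0) -> bool:
    """Flag explanations above the target reading level for another pass."""
    return fk_grade(text) > max_grade
```

A guard like this could loop the LLM's output back for another simplification pass whenever the estimated grade level exceeds the target.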
Guided by literature insights and real-world constraints, MedExplain is designed around three key pillars: accessibility, safety, and empathy. Accessibility drives our low-bandwidth, multilingual, and text-to-audio design choices; safety informs our expert verification loop; and empathy shapes our conversational design using culturally appropriate personas. This integration of human and artificial intelligence ensures that every user, regardless of literacy or connectivity, can receive clear, compassionate, and medically verified health guidance.
Product Design Prototype
Below is a simple user-flow diagram illustrating how MedExplain operates across low-connectivity environments.
The prototype will demonstrate the following flow:
- User Input: A rural patient uploads or types a section of their medical report on WhatsApp.
- AI Simplification: The LLM simplifies the text into plain language, removing medical jargon.
- Audio Output: The simplified text is converted into an audio message in the user's preferred local language.
- Verification Loop: High-risk or uncertain queries are flagged for human review before response.
- Offline Mode: Key functionalities (e.g., SMS-based summaries) work without continuous internet connection.
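The five-step flow above can be sketched as a single dispatch function. Everything here is illustrative: the LLM simplification and text-to-speech calls are stubbed, and the channel names, the 160-character SMS limit, and the `risk_flag` field (assumed to come from an upstream safety classifier) are assumptions for the example.

```python
from dataclasses import dataclass


@dataclass
class Query:
    text: str        # report excerpt the patient sent in
    channel: str     # "whatsapp", "sms", or "offline"
    language: str    # e.g. "ur" for Urdu
    risk_flag: bool  # set by an assumed upstream safety classifier


def simplify(report_text: str) -> str:
    """Stub for the LLM simplification step."""
    return f"[plain-language summary of: {report_text[:40]}]"


def handle_query(q: Query, review_queue: list) -> dict:
    """Route one report through the MedExplain flow sketched above."""
    if q.risk_flag:                       # Verification Loop: hold for humans
        review_queue.append(q)
        return {"status": "pending_review"}
    summary = simplify(q.text)            # AI Simplification
    response = {"status": "ok", "text": summary}
    if q.channel == "whatsapp":           # Audio Output on rich channels
        response["audio"] = f"tts://{q.language}/{hash(summary)}"
    elif q.channel == "sms":              # Offline Mode: text-only summary
        response["text"] = summary[:160]  # fit one SMS segment
    return response
```

The key design point the sketch captures is that flagged queries never reach the patient directly: they enter the review queue and only return after human validation.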
References
1. Digital health literacy as a super determinant of health: More than simply the sum of its parts, accessed October 15, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC8861384/
2. ASHABot: An LLM-Powered Chatbot to Support the Informational Needs of Community Health Workers, accessed October 15, 2025, https://arxiv.org/abs/2409.10913
3. Health Literacy and Adherence to Medical Treatment in Chronic and Acute Illness: A Meta-Analysis - PMC, accessed October 15, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC4912447/
4. Exploring the Impact of Artificial Intelligence on Global Health and …, accessed October 15, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC11010755/
5. Transforming Patient-Provider Communication: The Role of Artificial …, accessed October 15, 2025, https://premierscience.com/pjs-25-1016/
6. Global South-led responsible AI solutions to strengthen health …, accessed October 15, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC12282117/
7. AI Chatbots in Digital Mental Health - MDPI, accessed October 15, 2025, https://www.mdpi.com/2227-9709/10/4/82
8. Bridging the health literacy gap through AI chatbot design: the impact of gender and doctor cues on chatbot trust and acceptance - Experts@Minnesota, accessed October 15, 2025, https://experts.umn.edu/en/publications/bridging-the-health-literacy-gap-through-ai-chatbot-design-the-im
9. Artificial Intelligence Applications to Personalized Dietary Recommendations: A Systematic Review - PMC, accessed October 15, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC12193492/
10. Achieving health equity through conversational AI: A roadmap for …, accessed October 15, 2025, https://journals.plos.org/digitalhealth/article?id=10.1371%2Fjournal.pdig.0000492
11. Relationship between People's Interest in Medication Adherence, Health Literacy, and Self-Care: An Infodemiological Analysis in the Pre- and Post-COVID-19 Era - MDPI, accessed October 15, 2025, https://www.mdpi.com/2075-4426/13/7/1090