Blog: Perspectives on AI Ethics in Healthcare
The ethical side of AI has long been a central theme in films and books, and research has finally started to catch up. Publications on AI ethics have grown immensely, from just one in 1998 to 334 in 2019 and 342 in 2020. The modern foundations of AI ethics date back to the 1940s and 1950s, when the American mathematician Norbert Wiener coined the term cybernetics to describe how people, machines, and other complex systems interact through constant feedback and communication.
Figure 1. The counts of Google Scholar citations with (“AI” or “artificial intelligence”) and (“ethics” or “ethical”) in the title.
His idea moved us from thinking of technology as something separate to seeing it as a natural, connected part of our whole society. Wiener warned that humanity was already “in a position to construct artificial machines of almost any degree of elaborateness,” a power with “unheard-of importance for good and for evil.” His 1950 book The Human Use of Human Beings took those concerns further, marking the birth of computer ethics as a scholarly field and reminding us that every technical breakthrough comes bundled with moral responsibility.
Nowadays, the discussion on AI ethics revolves around three questions.
- Who is at the table? Many voices, such as those from the Global South, are still left out when new AI systems are designed. Incorporating diverse perspectives is crucial to ensuring that AI systems are genuinely aligned with what is good for humanity.
- How do we conduct AI research safely and ethically? Past mistakes like the Tuskegee Syphilis Study and failures such as Microsoft’s Tay chatbot remind us to secure clear consent and strong oversight before we train or release an AI system.
- Could AI ever be a “person”? Some experts wonder whether very advanced AI might one day need rights or special rules.
Discussing AI ethics in healthcare is especially urgent, since poorly implemented AI systems can cause more harm than good: widening health gaps, breaking patient privacy, or carrying hidden bias into every clinic visit. Organizations such as the World Health Organization (WHO), the EU’s High-Level Expert Group on Artificial Intelligence (AI HLEG), and the U.S. National Security Commission on AI have each released guidelines highlighting the importance of patient consent, strong data security, fairness, and clear lines of accountability. Alternative perspectives have emerged as well, such as Ubuntu ethics in AI for healthcare: a community-centred African moral framework built on the idea of “I am because we are,” which motivates designers to create AI tools that strengthen solidarity, reciprocity, and shared well-being so every patient community benefits equally.
Adding Ubuntu to existing frameworks would shift the big question from “Is the model fair for each isolated user?” to “Does this system strengthen trust, dignity, and well-being for the whole community?” In short, it fills the cultural gap left by today’s guidelines and helps AI deliver healthcare that feels not just accurate, but truly human.
Viewing healthcare AI guidelines through an Ubuntu lens could change how we build and run every health tool. Developers would first collect and label data together with local clinics and patient groups, especially marginalized communities, so the finished model serves everyone, not just well-funded hospitals. Consent becomes a living, community-wide conversation in which people help decide how their data is stored, shared, and protected, and what should happen if things go wrong. Trust grows from long partnerships: teams publish the model’s limits in plain language, invite independent audits, and maintain a “community board” that can pause or change the system whenever new risks appear. Above all, design goals should protect each person’s dignity and the right to the best possible care, ensuring that speed and accuracy never outweigh respect and empathy.
–Written by Ful Belin Korukoglu–
REFERENCES
1. J. Borenstein, F. S. Grodzinsky, A. Howard, K. W. Miller, and M. J. Wolf, “AI Ethics: A Long History and a Recent Burst of Attention,” Computer, vol. 54, no. 1, pp. 96–102, Jan. 2021, doi: 10.1109/MC.2020.3034950.
2. L. M. Amugongo, “Ubuntu Ethics in AI for Healthcare: Enabling Equitable Care,” IEAI Research Brief, Institute for Ethics in Artificial Intelligence, Technical University of Munich, Feb. 2023. Retrieved from https://www.ieai.sot.tum.de/wp-content/uploads/2023/02/ResearchBrief_Feb2023_Ubuntu-FINAL-V2.pdf