Decolonizing Global Health Through Inclusive AI Governance: A Framework for Redistributing Power
- Palak Madan
- Jun 30, 2024
Updated: Jul 9, 2024
The field of global health has been shaped by centuries of colonialism, patriarchy, racism, capitalism, and other systems of oppression, resulting in deep power imbalances that perpetuate health inequities worldwide. These imbalances are not incidental: they are rooted in entrenched systems of domination that continue to shape the global health landscape. For example, the concentration of funding and agenda-setting power in wealthy donor countries reflects the ongoing legacy of colonialism.
Decolonizing global health requires a fundamental redistribution of power from dominant Western institutions to the communities most impacted by health disparities. This means not only increasing representation within existing structures, but also transforming the values, practices, and power dynamics that underpin the field.
As artificial intelligence (AI) becomes increasingly integrated into health systems worldwide, it has the potential to either reinforce existing power hierarchies or radically democratize knowledge and decision-making. The path we choose will depend on how we govern the development and deployment of AI technologies. This paper argues that developing inclusive AI governance frameworks is a critical component of decolonizing global health in the 21st century.
The Intersecting Systems of Oppression in Global Health
The dominant paradigm in global health has been one of Western expertise and leadership, often at the expense of local knowledge and community agency. This has resulted in a concentration of power among wealthy nations and institutions, while communities in the Global South bear the brunt of health inequities. These systems of oppression do not operate in isolation but rather reinforce and compound each other to create complex webs of disadvantage.
Examples of the colonial legacy in global health include:
- The field of tropical medicine, which emerged to protect European colonizers from diseases in the territories they occupied.
- Medical segregation, forced vaccination campaigns, and unethical experimentation on local populations in many African colonies.
- The underrepresentation of researchers from the Global South in leadership positions and prestigious journals.
These systems of oppression intersect and compound each other in ways that create distinct experiences of marginalization and disadvantage. For example, the intersection of racism and patriarchy has meant that women of color face unique barriers to health and well-being, from higher rates of maternal mortality to greater exposure to environmental toxins. The intersection of colonialism and capitalism has meant that the health of Indigenous peoples has been sacrificed for the profits of extractive industries.
Importantly, these intersections are not just additive, but synergistic. They create complex webs of power and disadvantage that cannot be fully understood or addressed through single-axis frameworks. Decolonizing global health requires an intersectional approach that grapples with these interlocking systems of oppression and their impact on health inequities.
As we look to the future, the increasing dependence of humans on AI systems adds another critical dimension to this complex landscape of power and equity. With its ability to process vast amounts of data and identify complex patterns, AI could be a powerful tool for illuminating these intersections and guiding more holistic, equity-centered interventions.
As we become increasingly reliant on AI systems in health, we must critically examine how this dependence may impact human cognition, exacerbate power imbalances, and perpetuate biases. Inclusive, decolonial AI governance frameworks that prioritize community agency, epistemic justice, and algorithmic equity are essential to mitigate these risks and realize the transformative potential of AI for global health equity.
The Risks and Opportunities of AI in Global Health
It is against this backdrop of intersecting oppressions that the AI revolution in health is unfolding. AI is not just a word-generating machine, as many people perceive it when they use large language models like OpenAI's ChatGPT. For lack of a better metaphor, we can think of it as infrastructure upon which different tools are built. In the health domain, tools like predictive diagnostic algorithms and drug discovery platforms are already being integrated into health systems worldwide.
However, AI is not a neutral technology. It is shaped by the data it is trained on and the values and assumptions of its creators. Without intentional design and governance, AI risks amplifying the biases and power imbalances endemic in global health. We are already seeing examples of this. AI systems for health resource allocation have been found to systematically disadvantage racial minorities, reflecting the biases in their training data. AI-driven predictive tools used in population health have sometimes reproduced racist stereotypes, like associating Black patients with higher pain thresholds.
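To make that kind of finding concrete, here is a minimal sketch of how a group-level audit of an allocation model's outputs might look, assuming a simple tabular export of scores. The column names (group, risk_score, care_need), the flagging threshold, and the synthetic records are hypothetical placeholders, not the schema of any real deployment; they stand in for whatever a community-designed audit would actually use.

```python
# Minimal sketch of a group-level audit for a hypothetical allocation model.
# Column names ("group", "risk_score", "care_need") and the 0.8 threshold are
# illustrative assumptions, not a real system's schema.
import pandas as pd

def audit_allocation(records: pd.DataFrame, threshold: float = 0.8) -> pd.DataFrame:
    """Compare, per group, how often patients are flagged for extra resources
    with how much care they actually need (both assumed to be on a 0-1 scale)."""
    records = records.copy()
    records["flagged"] = records["risk_score"] >= threshold
    summary = records.groupby("group").agg(
        flag_rate=("flagged", "mean"),    # share selected by the algorithm
        mean_need=("care_need", "mean"),  # observed burden of illness
        n=("flagged", "size"),
    )
    # Groups with high need but a low flag rate are being under-served.
    summary["need_minus_flag"] = summary["mean_need"] - summary["flag_rate"]
    return summary.sort_values("need_minus_flag", ascending=False)

# Synthetic example: group B has greater need but is flagged less often.
example = pd.DataFrame({
    "group":      ["A", "A", "B", "B"],
    "risk_score": [0.90, 0.85, 0.70, 0.60],
    "care_need":  [0.50, 0.60, 0.80, 0.90],
})
print(audit_allocation(example))
```

A one-off check like this is only a starting point; the governance question is who gets to run it, on whose data, and who can act on what it reveals.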
The development of health AI has largely been driven by the same actors and institutions that hold power in the current global health system. The concentration of AI expertise and infrastructure in elite universities and technology companies in the Global North risks replicating colonial dynamics of knowledge extraction and exploitation.
At the same time, AI also holds immense potential as a tool for redistributing power and promoting health equity. Machine learning can help surface previously invisible patterns of health inequity in large datasets. Natural language processing can enable the analysis of narrative and qualitative data, centering the voices and lived experiences of marginalized communities. With community-led fine-tuning and grassroots approaches, AI can amplify marginalized voices, address health inequities, and promote equitable decision-making.
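As a rough illustration of the first point, the sketch below ranks intersectional subgroups of a health dataset by how far their adverse-outcome rate diverges from the population-wide rate. The dataset, its columns (region, gender, outcome), and the 0/1 outcome coding are invented for the example; in practice the dimensions analyzed would be chosen with the communities concerned.

```python
# Rough sketch: rank intersectional subgroups by how far their adverse-outcome
# rate diverges from the population-wide rate. Columns ("region", "gender",
# "outcome") and the 0/1 outcome coding are assumptions made for illustration.
import pandas as pd

def outcome_gaps(df: pd.DataFrame, dims=("region", "gender")) -> pd.DataFrame:
    """Adverse-outcome rate for every intersection of `dims`, compared with
    the overall rate; larger positive gaps mean the subgroup fares worse."""
    overall = df["outcome"].mean()
    table = (
        df.groupby(list(dims))["outcome"]
          .agg(rate="mean", n="size")
          .reset_index()
    )
    table["gap_vs_overall"] = table["rate"] - overall
    return table.sort_values("gap_vs_overall", ascending=False)

# Synthetic example data; real dimensions and outcomes would be defined with,
# and interpreted by, the affected communities themselves.
df = pd.DataFrame({
    "region":  ["north", "north", "south", "south", "south", "north"],
    "gender":  ["f", "m", "f", "m", "f", "f"],
    "outcome": [0, 0, 1, 1, 1, 1],
})
print(outcome_gaps(df))
```

The value of such an analysis lies less in the code than in who defines the dimensions, interprets the gaps, and decides what follows, which is where governance comes in.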
Realizing this potential requires a radical shift in how we approach AI governance. It means involving affected communities at every stage, training AI on diverse community-generated data, and establishing mechanisms for community-led accountability. This way, we can proactively grapple with the deep structural inequities that shape the development and deployment of AI. The goal is to democratize access to AI tools and governance processes.
A Framework for Inclusive AI Governance
The proposed framework for inclusive AI governance draws on decolonial, feminist, anti-racist, anti-capitalist, and anti-imperialist theories that center the knowledge and agency of marginalized communities. These theories offer critical insights into the systems of power that shape health inequities and point toward strategies for resistance and transformation:
- Epistemic justice and knowledge democracy: Recognizing the expertise of marginalized communities and valuing diverse knowledge systems. This means challenging the dominance of Western technoscientific epistemologies and valuing Indigenous and community-based ways of knowing. It also requires community members and health workers from diverse backgrounds to participate in AI development and governance.
- Participatory design and community ownership: Developing AI tools and governance frameworks through deeply participatory processes that center the voices and needs of affected communities. This means investing in community capacity-building, compensating community expertise, and ensuring that communities have ownership and control over their data and intellectual property.
- Intersectional analysis and solidarity: Approaching health equity in AI with an understanding of the interlocking nature of systems of oppression, and building solidarities across movements for social justice. This means considering race, gender, and class, as well as disability, sexuality, age, and other dimensions of difference.
- Health justice and social transformation: Grounding AI governance in a broader vision of health equity and social justice, with the ultimate aim of dismantling oppressive structures and redistributing social, economic, and political power. This means prioritizing applications of AI that challenge the status quo and enable community empowerment.
- Reflexivity and accountability: Establishing processes for ongoing critical reflection on power dynamics and harms in AI development and governance, with robust mechanisms for justice and accountability to affected communities. This means going beyond one-off "bias audits" to create sustained channels for community oversight, red-teaming, and course correction.
- Transparent and equitable innovation: Ensuring that the benefits of health AI are accessible to and controlled by the communities that have been historically marginalized by health inequities. This means challenging the monopolization of AI infrastructure and intellectual property by corporations and investing in safer, more transparent, and inclusive models.
Implementing this framework requires action from funders, policymakers, health institutions, and social movements. Funders of health research and innovation must prioritize inclusive AI development and create incentives for power-sharing. Policymakers must create regulatory environments that mandate participatory practices and health equity assessments. Health institutions and practitioners must invest in community partnerships and power-shifting organizational cultures. And social movements and civil society must build coalitions to demand accountability and transformative change.
As AI reshapes the landscape of global health, it presents both risks and opportunities for health equity. On the one hand, AI could exacerbate the dynamics of oppression and exploitation that have long characterized the field, further entrenching health disparities. On the other hand, if governed inclusively and equitably, AI could be a potent tool for dismantling oppressive structures and democratizing health knowledge and power. Decolonizing global health in the age of AI requires a radical commitment to power redistribution and social justice in AI governance.
The framework proposed in this paper, grounded in the expertise and agency of historically marginalized communities, offers a starting point. Implementing this framework will be challenging, requiring a fundamental shift in the culture and incentives of health research and practice. But the stakes could not be higher. In a world where health is profoundly shaped by social and political determinants, AI is not just a technical tool, but a force that can transform unjust distributions of power.
By centering the voices and values of those systematically excluded, we have the opportunity to build an AI-enabled future for health that is deeply equitable and just. This is the charge of our time: to harness the power of this transformative technology to create a world where every person, regardless of their position in intersecting hierarchies of power, can thrive in health and dignity.