In both health care and public health — distinct but overlapping fields — generative AI is already reshaping how systems operate. Clinical settings are leveraging generative AI tools to draft clinical notes and messages to patients, while the public health sector is exploring the ways these systems can tailor health messaging to different communities.

Yet across health care and public health, adoption of generative AI tools has often come with limited transparency and oversight, and little to no engagement with patients and communities, particularly those most impacted by structural inequity.

While centering community voices and priorities should be essential in generative AI development, governance, and implementation in any sector or domain, it is particularly critical in health. Health is deeply personal, it can be precarious, and, in the U.S., it continues to be shaped by structural exclusion and harm, rooted in long-standing legacies and sustained by current policies and structures that marginalize racially minoritized, queer, transgender, and disabled people. When AI systems are not designed and implemented with this context in mind, the potential for harm is profound. 

We have already seen this happen with predictive (non-generative) AI models. One widely used health care algorithm underestimated Black patients' need for follow-up care while overestimating that of white patients, referring Black patients for fewer services than their white counterparts. This occurred because the model used health care expenditures as a proxy for medical need, without accounting for the fact that, due to structural racism and its attendant barriers to care (lower rates of health insurance, provider bias, etc.), Black patients often receive less care and, as a result, spend less on services.

Similarly, disabled people have been harmed by AI-enabled risk stratification tools that deprioritized them for Covid-19 treatment based on assumptions about quality of life and life expectancy. These tools — many trained on incomplete, non-vetted, and biased data — risk reinforcing and even exacerbating existing health inequities. And because they often operate in the background, there is limited transparency and little to no accountability.



If community members and advocates had been involved in the design of the referral-for-services algorithm, they likely would have challenged the assumption that lower spending equates to less need. This could have prompted the developers to use more equity-informed metrics in their assessment of the AI model. They would also likely have recommended continuously monitoring the model's outcomes for unexpected inequities in referral rates across demographic groups. And instead of focusing exclusively on streamlining referrals and reducing costs, a community-informed approach might have prioritized ensuring equitable access to services.

Generative AI is beginning to make inroads into public health. For example, the CDC has used generative AI with social media data to monitor school closures as a signal of potential emerging outbreaks and to forecast overdose trends.

As such generative AI tools become more accessible, public health systems — already chronically underfunded and overstretched — may adopt off-the-shelf generative AI models, leaving less flexibility to incorporate community input and governance. However, public health's comparatively slower uptake of generative AI offers a unique opportunity to embed community accountability before these models are fully scaled.

Decision-making about AI in health must include communities affected by those systems, many of whom have been excluded from any decision-making regarding this technology. We propose that generative AI be conceived and developed from the ground up, not from the top down, as is currently the case.

Some frameworks are already pointing in this direction. Zainab Garba-Sani’s ACCESS AI framework offers a clear example. Designed for health care environments, it emphasizes community engagement, identification of barriers to AI use, and embedding equity throughout the clinical AI development and implementation cycle.

To further advance community-centered governance and decision-making in generative AI in health, we recently launched the Grounded Innovation Lab @ Health Justice, with the aim of maximizing accountability, equity, and transparency.

In practice, communities would determine what qualifies as training data — for example, approving community-based narratives and rejecting sources that are biased, stigmatizing, or obtained without consent. Community governance groups would shape how generative AI in health is evaluated, broadening the definition of AI model performance beyond technical metrics to include community priorities like trust and confidence.



And, importantly, lived experience engaging with these systems would count as feedback data for improving AI model performance over time. To ensure truly meaningful governance, community members would meet regularly with health AI developers and other stakeholders, with convenings designed for accessibility, including hybrid options. Members would be compensated for their time and expertise, and recruitment would draw on established partnerships with trusted community-based organizations.

The Grounded Innovation Lab’s focus extends beyond health care to include public health, recognizing public health’s resource constraints and population-level functions. This latter point, in particular, makes community governance of AI systems necessary, not optional. We also recognize the severe environmental cost of AI, especially the siting of water- and energy-intensive data centers in racially minoritized and low-income communities, which also include high proportions of people who are disabled. These communities have already been burdened by the deleterious impact of health inequities; they are now bearing the environmental costs of generative AI’s exponential rise. Our perspective is informed by broader critiques of the AI field, including frameworks such as Timnit Gebru’s and Émile P. Torres’ TESCREAL that call attention to AI systems’ current harms rather than their hypothetical future risks.

Given these already observed harms, communities and technologists can work together to identify alternative approaches that harness the benefits of generative AI — for example, small or domain-specific language models that can run on personal mobile devices, reducing environmental harms while improving accessibility.

There is a real and immediate need to reduce the harm of these systems while also exploring their potential benefits for advancing health equity. As part of the reconciliation process on the Trump tax bill, the Senate recently rejected a proposed ban on state regulation of AI, underscoring the importance of community-centered approaches in AI design and governance. To avoid deepening existing health inequities, we need investments in community-led models for the design, implementation, and governance of generative AI systems — and of AI systems in health more generally. Community-centered participatory processes and accountability structures can prevent harm before it happens and help ensure that communities most impacted by health inequities shape the technology they interact with every day.

Health has the potential to be a proving ground for a more equitable, transparent, accountable, and community-centered AI field, both within the sector and beyond.

Oni Blackstock, M.D., is a former computer scientist, physician, researcher, and public health and health equity leader. She is the founder and executive director of Health Justice, a racial and health equity consulting firm. Akinfe Fatou, M.S.W., is a disability justice advocate, strategist, and founder and CEO of Cre8tive Cadence Consulting, a disability-led social impact consulting firm.