
AI Bias in Health Care: Who Gets Left Behind?

  • Writer: Dr. Shakira J. Grant
  • Mar 4
  • 4 min read

Updated: Apr 4

You walk into the emergency room, exhausted and sick, expecting to focus on your health. Instead, you’re told before seeing a doctor that you owe $1,000. An AI-driven health care algorithm has already determined your co-pay—before anyone checks your vitals.


That was the experience of an attendee at a recent conference on AI bias in health care. “Thankfully, I could pay,” they shared. “But what about others who can’t?”


This isn’t just one hospital’s policy—it’s a growing reality in AI-powered health care. Medical AI algorithms are already shaping who gets treatment, how much they pay, and even what level of care they receive. But what happens when these algorithms—designed to optimize efficiency—reinforce the same racial disparities in health care that have long plagued marginalized communities?


A widely cited study, Dissecting Racial Bias in an Algorithm Used to Manage the Health of Populations,[1] found that an AI tool used at a major academic medical center was discriminating against Black patients. The algorithm, which helped allocate health resources to 100,000 patients, prioritized white patients over Black patients with the same level of illness. The issue? The system used health care spending as a proxy for medical need—ignoring the fact that Black patients historically receive less access to medical care due to systemic barriers.[2,3] As a result, they were under-referred for critical health services, worsening existing disparities.
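To make the mechanism concrete, here is a minimal, hypothetical simulation of this kind of proxy-label bias (it is not the study’s actual algorithm; the group labels, the 30% spending gap, and the top-10% referral cutoff are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Simulated illness severity (what we actually care about),
# identically distributed across both hypothetical groups.
illness = rng.gamma(shape=2.0, scale=1.0, size=n)
group = rng.choice(["A", "B"], size=n)

# Spending tracks illness, but group B spends 30% less at the same
# illness level -- a stand-in for unequal access to care.
access_penalty = np.where(group == "B", 0.7, 1.0)
spending = illness * access_penalty + rng.normal(0.0, 0.1, size=n)

# The "algorithm": refer the top 10% of patients ranked by spending,
# the proxy label, rather than by illness itself.
cutoff = np.quantile(spending, 0.90)
referred = spending >= cutoff

for g in ("A", "B"):
    mask = group == g
    print(
        f"Group {g}: referral rate = {referred[mask].mean():.1%}, "
        f"mean illness among referred = {illness[mask & referred].mean():.2f}"
    )
```

Running this reproduces the pattern the study reported: the lower-spending group is referred less often, and its members must be sicker to cross the same spending cutoff, even though illness is identically distributed in both groups.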


These are not hypothetical concerns. AI bias in health care isn’t just mirroring existing inequities—it’s amplifying them.


AI as a Gatekeeper: Who Gets Access to Health Care?


At the conference, attendees raised urgent concerns:


  • “AI is deciding things based on race and ethnicity—but what do those categories even mean in health care?”


  • “If the datasets used to train AI don’t include people like me, how can I trust the decisions it makes?”


  • "AI could be a good thing, especially to improve diversity in clinical trials—but it may also exclude people from them.”


  • "What about language barriers? How do you ensure people who don’t speak English benefit from these tools?”



These aren’t just questions of fairness—they’re questions of survival. Health care algorithms already influence decisions about who receives screenings, specialized treatments, and preventive care.[4] If these systems rely on biased data, they will continue to overlook the most vulnerable.


AI Bias in Health Care Creates Access Barriers


Even if medical AI systems were perfectly fair—which they aren’t—there’s still the issue of accessibility.


One attendee put it simply: “Having access to AI tools doesn’t mean people will know how to use them. Older adults, for example, struggle with technology. How do we bridge that gap?”


Many patients—especially older adults, non-English speakers, and low-income individuals—already struggle to navigate complex health systems. AI-driven health care solutions often assume a high level of digital literacy. If these tools are not designed for equitable access, they won’t just fail to help—they will actively exclude the very communities they should be supporting.


What Needs to Change—Now


AI in health care has the potential to improve efficiency, increase access, and even reduce health care disparities—if we build it with fairness and accountability in mind. That requires immediate action.


What Must Happen Now:


  • AI transparency must be a priority. Patients and doctors deserve to know how decisions are made and must be able to challenge unfair outcomes.


  • Bias in AI training data must be addressed. AI must be built with diverse, representative datasets to ensure equitable medical care.


  • AI should support, not replace, human judgment. Doctors should never defer entirely to an AI-driven health care system, especially in life-or-death decisions.


  • Accessibility must be prioritized. Language barriers, digital literacy, and fair inclusion in AI-driven clinical trials must be addressed.


  • Health care AI needs regulation. The industry must implement AI ethics in medicine, with oversight to prevent harm before it happens.


The risks of AI-driven health disparities are no longer theoretical—they are happening now. Regulators, hospitals, and technology companies must act before inequities become permanently embedded in the system. Patients, providers, and policymakers must demand transparency, accountability, and fairness in AI-driven health care. Because if we don’t intervene now, the very tools meant to improve care will instead reinforce the injustices that have long plagued our health system.


🚀 Subscribe to our weekly insights on the CROSS Global Research & Strategy blog to stay informed on how AI is reshaping health care, policy, and equity. We break down the latest research, expert opinions, and real-world implications—so you don’t have to.


References

 

  1. Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science. 2019;366(6464):447-453. doi:10.1126/science.aax2342

  2. Grant SJ, Mills JA, Telfair J, et al. "They don't care to study it": Trust, race, and health care experiences among patient-caregiver dyads with multiple myeloma. Cancer Med. 2024;13(10):e7297. doi:10.1002/cam4.7297

  3. Artiga S, Hill L, Presiado M. How Present-Day Health Disparities for Black People Are Linked to Past Policies and Events. KFF. https://www.kff.org/racial-equity-and-health-policy/issue-brief/how-present-day-health-disparities-for-black-people-are-linked-to-past-policies-and-events/

  4. Alowais SA, Alghamdi SS, Alsuhebany N, et al. Revolutionizing healthcare: the role of artificial intelligence in clinical practice. BMC Med Educ. 2023;23:689. doi:10.1186/s12909-023-04698-z


 

