
AI’s Blindspots (Part 2 of 3): Unequal Outcomes

  • Writer: Dr. Shakira J. Grant
  • May 23, 2025
  • 3 min read

Updated: Aug 14


Key Takeaways:


  • Biased data underlying AI algorithms isn’t just a data science issue—it’s an ethical and human one that reinforces existing inequities.

  • Underserved populations bear disproportionate consequences when AI tools are not properly developed, implemented, or evaluated after deployment.

  • Biased data is only part of the problem; systemic barriers and human bias must also be addressed, especially when AI fails to center the needs of marginalized communities.


Artificial intelligence is becoming deeply embedded in how healthcare decisions are made—from diagnosis and treatment recommendations to staffing and resource allocation. In theory, these systems promise speed, precision, and scalability. But in practice, flawed data leads to flawed decisions. And the consequences often fall hardest on those already navigating inequitable care.


In Part 1 of this series, we explored how non-diverse, incomplete, or historically biased datasets form the foundation of problematic AI systems. In this installment, we examine what happens when those systems are deployed in the real world—and how unequal inputs lead to unequal outcomes.


Missed Diagnoses for Marginalized Patients


Research shows that AI tools used in specialties like dermatology, radiology, and cardiology often underperform for patients with darker skin tones or from underrepresented groups. Algorithms trained predominantly on light-skinned individuals are more likely to misclassify skin lesions or miss early signs of disease. The result? Delayed or missed diagnoses compound long-standing health disparities.


Take melanoma as an example. While Black individuals are less likely than white individuals to develop melanoma overall, they are significantly more likely to develop acral lentiginous melanoma (ALM), a rare and aggressive subtype. ALM occurs almost 20 times more often in Black patients and typically appears on the palms, soles, or under the nails, areas not commonly screened and often missed by standard diagnostic tools and approaches. Unlike the more common sun-exposed melanomas seen in white patients, ALM is frequently diagnosed at a later stage, contributing to worse outcomes and higher mortality for Black individuals.


Image: a split-screen comparison showing a white male patient with a visible melanoma on his forehead labeled "Early Detection," and a Black male patient examining a lesion on his foot labeled "Diagnosis Missed."
An AI-generated image illustrating how biased AI systems in health care can lead to unequal diagnostic outcomes: here, a melanoma is accurately identified in a white patient, while a similar condition in a Black patient goes undetected, highlighting the consequences of non-diverse training data in diagnostic tools.
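
The gap pictured above is exactly what a subgroup-stratified evaluation is meant to surface. As a purely illustrative sketch (written in Python, with invented numbers rather than data from any real diagnostic tool), the example below shows how a single overall sensitivity figure can look reassuring while hiding a far lower detection rate for patients with darker skin:

```python
# Hypothetical example only: all group sizes, detection rates, and labels are invented.
import numpy as np

rng = np.random.default_rng(1)

# Simulated test set: 900 light-skin images and 100 dark-skin images (an imbalance
# mirroring a non-diverse dataset); 20% of each group are true melanomas.
group = np.array(["light"] * 900 + ["dark"] * 100)
y_true = rng.random(group.size) < 0.20

# Simulated model: assumed to catch 90% of melanomas on light skin but only 55%
# on dark skin, with a 5% false-positive rate in both groups.
catch_rate = np.where(group == "light", 0.90, 0.55)
y_pred = np.where(y_true,
                  rng.random(group.size) < catch_rate,
                  rng.random(group.size) < 0.05)

print(f"Overall sensitivity: {y_pred[y_true].mean():.2f}")  # looks acceptable on its own
for g in ("light", "dark"):
    mask = (group == g) & y_true
    print(f"Sensitivity on {g}-skin melanomas: {y_pred[mask].mean():.2f}")
# The single overall number hides the gap; only the per-group breakdown shows that
# melanomas on darker skin are missed far more often.
```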

Inequitable Allocation of Resources


Hospitals are increasingly using AI tools to identify high-risk patients or to allocate additional care resources. But when these tools are built on data reflecting historical underinvestment in marginalized communities—or fail to account for the systemic barriers many people of color face in accessing care—they risk reinforcing inequality.


In short, AI may prioritize patients who have historically received more care, not necessarily those who need it most. This creates a quiet, coded form of exclusion—amplified at scale by automation and data volume.
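
To make that mechanism concrete, here is a minimal, hypothetical simulation (in Python, with made-up numbers; it is not modeled on any specific hospital's algorithm) of how ranking patients by historical spending, a common proxy for health need, can systematically under-enroll an underserved group whose members incurred lower costs for the same level of illness:

```python
# Hypothetical simulation only: groups, access penalty, and cost model are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Two equally sized groups; true illness burden ("need") is identical by design.
group = rng.integers(0, 2, n)    # 0 = historically well served, 1 = underserved
need = rng.gamma(2.0, 1.0, n)    # same need distribution for both groups

# Historical spending reflects need *and* access: the underserved group is assumed
# to incur 40% less cost for the same illness burden because of barriers to care.
access = np.where(group == 1, 0.6, 1.0)
cost = need * access * rng.lognormal(0.0, 0.3, n)

# "Algorithm": rank patients by historical cost (a common proxy for risk)
# and enroll the top 10% in a care-management program.
k = n // 10
enrolled_by_cost = np.argsort(cost)[-k:]
truly_high_need = np.argsort(need)[-k:]

def underserved_share(idx):
    """Fraction of the selected patients who belong to the underserved group."""
    return group[idx].mean()

print(f"Underserved share of truly high-need patients: {underserved_share(truly_high_need):.2f}")
print(f"Underserved share enrolled by the cost proxy:  {underserved_share(enrolled_by_cost):.2f}")
# Expect roughly 0.50 for true need but well below 0.50 under the cost proxy:
# lower past spending is misread as lower need.
```

In this toy setup both groups are equally sick by construction, yet the cost-based ranking enrolls far fewer underserved patients, which is precisely the pattern of prioritizing those who historically received more care described above.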


Broken Trust and Widened Gaps


When patients feel overlooked or harmed by technology that claims to help them, trust erodes quickly. For communities already grappling with systemic bias in medicine, the deployment of AI without transparency or accountability only deepens the divide.


Trust is not a footnote—it’s essential to AI’s safe and successful implementation. Yet nearly 65% of Americans report low trust in AI when used in health care settings. Fewer than 25% believe AI will have a broadly positive impact in medicine or society. In this environment, conversations about equity, ethics, and transparency must be elevated, not stifled by political or institutional pressures.


Conclusion


AI can scale innovation, but it can also scale injustice. Technology is not neutral simply because it is new.


The promise of AI will only be realized when every patient, regardless of race, gender, income, language, geography, or background, can trust that these systems were built not just for speed, but with equity and care in mind.


Next in the series:


Toward Fair AI – How can we build more equitable systems? We’ll explore solutions for auditing algorithms, embedding equity into AI design, and regaining public trust.


Want to stay on top of the latest insights in health care AI? Join our community to receive updates on our blog series and other resources.



