
AI’s Blindspots (Part 1 of 3): The Data Dilemma


Author: Dr. Shakira J. Grant

May 13, 2025


Key Takeaways


  • AI is only as fair and accurate as the data it learns from.

  • Non-diverse, incomplete, or historically biased health data leads to real-world harm.

  • Biotech, pharmaceutical, and health care leaders must prioritize inclusive, representative data to create ethical, equitable AI systems.


[Image: Health care providers reviewing a patient's medical record while another health care professional cares for the patient in the background.]

Artificial intelligence has quietly been shaping medicine for decades, and its potential is more promising than ever. In the 1970s, early systems like INTERNIST-1 helped doctors make diagnoses using rule-based algorithms—primitive compared to today's standards, but revolutionary for their time.


Fast forward to 2024: the U.S. Food and Drug Administration approved 169 new AI/ML medical devices, bringing the total to over 1,000 since 2001. Most of these have come in just the last few years. This explosion of AI in health care spans patient-facing tools that detect conditions like skin cancer, administrative assistants like ambient scribe technologies, and back-end systems that improve operational workflows, such as claims processing. In short, AI is becoming deeply embedded across the health care continuum.


But there’s a problem hiding in plain sight, and we must address it.


While the AI boom promises faster diagnoses, greater efficiency, and cost savings, it also brings new risks—none more fundamental than the bias baked into the data these systems rely on. That bias isn’t just theoretical. It translates into real harm, especially for patients who’ve long been marginalized or underrepresented in health care systems.


[Pull quote: "AI doesn't correct for history, it repeats it faster and at scale."]


Where does this bias come from? Three patterns stand out.

  1. Lack of Diversity

    AI systems learn from the data they're fed. If a diagnostic algorithm is trained mostly on images of lighter skin tones, for instance, it may fail to detect skin cancers and other dermatological conditions in patients with darker skin. Research shows that this lack of representation leads to higher misdiagnosis rates for people of color, who already face significant systemic barriers to health care access. (A simple way to surface this kind of gap is sketched after this list.)


  2. Historical Bias

    Health care data often reflects long-standing inequities. If past treatment records show women's pain being underestimated, or Black patients receiving fewer diagnostic tests and less overall care, those biases get encoded into algorithms. AI doesn't correct for history; it repeats it, faster and at scale.


  3. Incomplete Data

    In many rural, low-income, and otherwise underserved communities, health care data may be sparse or inconsistently collected. This means entire populations may be missing from AI training sets altogether. The result? A tool that works brilliantly in urban hospitals but fails in smaller, less-resourced clinics.
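
To make that subgroup failure mode concrete, here is a minimal sketch of the kind of check that surfaces it: comparing a model's error rate across groups. The column names and toy numbers below are illustrative assumptions, not output from any real diagnostic system.

```python
import pandas as pd

# Hypothetical predictions from a skin-cancer model on a small
# test set. Column names and values are illustrative assumptions.
results = pd.DataFrame({
    "skin_tone": ["lighter"] * 6 + ["darker"] * 4,
    "y_true":    [1, 0, 1, 0, 1, 0, 1, 1, 0, 1],  # 1 = cancer present
    "y_pred":    [1, 0, 1, 0, 1, 0, 0, 1, 0, 0],  # model's call
})

# The most basic subgroup audit: error rate per group.
results["error"] = results["y_true"] != results["y_pred"]
print(results.groupby("skin_tone")["error"].mean())
# lighter -> 0.00, darker -> 0.50 in this toy example: the
# signature of a model trained mostly on one population.
```

A model that reports only its overall accuracy would look fine here (80 percent); it is the per-group breakdown that exposes who is being failed.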


The Human Impact


The consequences of biased data aren't just theoretical; they show up in exam rooms and emergency departments. Misdiagnoses, inappropriate treatments, and erosion of patient trust are all potential outcomes when AI misses the mark. For health care providers and life sciences companies building these tools, the reputational and regulatory risks are growing, opening the door to substantial legal action when AI gets it wrong.


Three Steps Forward


To build ethical, equitable AI, health care leaders must:

  • Audit their data sources for diversity, accuracy, and completeness (a minimal starting point is sketched after this list).

  • Invest in collecting data from underrepresented populations.

  • Engage communities and clinicians to understand gaps and build trust.
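
For the first step, one simple starting point is a representation audit: compare who appears in your training data against a reference population. The file name, column name, and benchmark shares below are placeholders to adapt to your own dataset and reference source; this is a sketch, not a complete audit.

```python
import pandas as pd

# Placeholder file and column names: substitute your own dataset
# and the demographic fields you actually collect.
train = pd.read_csv("training_data.csv")
observed = train["race_ethnicity"].value_counts(normalize=True)

# Reference shares (e.g., census or your patient panel). These
# numbers are assumptions for illustration only.
benchmark = pd.Series({"White": 0.58, "Black": 0.14,
                       "Hispanic": 0.19, "Asian": 0.06, "Other": 0.03})

audit = pd.DataFrame({"training_share": observed,
                      "benchmark_share": benchmark})
audit["gap"] = audit["training_share"] - audit["benchmark_share"]
print(audit.sort_values("gap"))
# Large negative gaps flag groups the model will rarely see
# during training, i.e., candidates for targeted data collection.
```

A gap table like this won't fix bias on its own, but it turns "our data may not be representative" into a specific, actionable list.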


The data dilemma is not a technology issue—it’s a human one. And it’s only by confronting it that AI in health care can reach its full, fair potential.


Next in the series (May 20): Unequal Outcomes – We’ll explore how biased algorithms are shaping real-world health care decisions, and what’s at stake when tech goes unchecked.

Like what you're reading?

Don't forget to like this post and comment below.

Don't miss a post. Get weekly insights on health equity, policy, and innovation—delivered straight to your inbox.

 

 
