The Digital Cliff: Trust in Action (Part 3 of 3)

Aug 22, 2025

The promise of AI in health care is real, but adoption is fragile. Ignoring trust could turn breakthrough innovation into widespread rejection.


Author: Dr. Shakira J. Grant


Maria’s Story

Maria had always carried a quiet dread about cancer. In her community, screening was seen as dangerous; even her own mother only whispered the word. “Better not to know,” they’d say, fearing that a diagnosis would speed death rather than save life.


So when Maria finally gathered the courage for her own screening, she walked in with years of inherited fear. What she didn’t expect was how her results would be shared, with her doctor pointing to an AI-generated report, offering little explanation, no space for questions, and even less reassurance. The exchange felt rushed, almost mechanical. Instead of the human warmth she needed in that moment, Maria felt the presence of technology hollowing out the encounter.


For her, the promise of AI was not innovation; it was a reminder of how fragile trust in health care can be.



Figure 1. When AI is introduced without clear communication, patients may feel distanced rather than supported in their care.

 

The Trust Gap Is Real

Maria’s experience echoes what research is now making clear. In a recent study, 84% of participants said they feared AI in health care would lead to a loss of empathy. Another 60% of Americans reported they would be uncomfortable with their provider relying on AI in their own care. And this concern isn’t limited to the U.S.: a World Economic Forum study shows that nearly 1 in 3 people in the U.K. do not trust AI in health care.


Numbers like these reveal the scale of the challenge. AI may be designed to support clinicians, but without trust, patients may interpret it as yet another major health care hurdle, one that distances them from the human connection they value most in medicine.

 

Building Trust by Design

If Part 2 of The Digital Cliff revealed the fragility of trust, Part 3 shows what building trust in action must look like. Research, including Philips’ 2025 Future Health Index, points toward practical steps:


  • Clinician involvement: Patients trust AI most when doctors are directly involved in its development and adoption, making physician engagement the strongest driver of confidence. 

  • Transparency: Patients want clear explanations of when AI is being used in their care and how it informs decisions.

  • Privacy protections: Reassurances about how personal data is safeguarded remain non-negotiable.

  • Human touch: Technology must never replace empathy. Patients want to know that their needs, not just efficiency metrics, are driving innovation.


Taken together, these steps underscore a bigger truth: trust cannot be added on later; it must be built in from the beginning.


The Way Forward

Maria’s story is not unique; it reflects a larger crossroads facing health care. AI has the potential to make medicine more precise, accessible, and efficient. But without trust, even the most advanced tools risk becoming barriers rather than breakthroughs.


The lesson is clear: adoption at scale will not come from technical excellence alone. It will come when patients feel seen, when clinicians feel heard, and when innovators share responsibility for building systems designed with, not just for, the people who will use them.


This is the work of building trust by design. It is slow, intentional, and deeply human. Yet it is also the only way to ensure that AI becomes a bridge rather than a cliff.


The future of AI in health care will not be decided in labs or boardrooms alone. It will be decided in exam rooms, waiting rooms, and living rooms, where trust is tested every day. The question is whether we are ready to build together.


Want to stay on top of the latest insights in health care AI? Join our community to receive updates on our blog series and other resources.


Don’t let your AI strategy become a safety liability. At CROSS Global Research & Strategy, we help healthcare organizations and tech developers build Responsible AI that prioritizes patient safety. If you are deploying LLMs in oncology or chronic care, let’s discuss how to engineer the right guardrails. Schedule your AI Strategy session today!


Disclaimer: To protect patient privacy, no real names are used in this article. The case presented is an amalgamation of multiple patient experiences. Any resemblance to actual individuals is purely coincidental.

 

crossglobalresearch.com

Research Triangle Park, North Carolina, USA

© 2025 CROSS Global Research & Strategy