The Clinical Chasm: Is AI Making Our Doctors Seem Less Human? (Part 1 of 3)
CROSS Global Research & Strategy, LLC

Author: Dr. Shakira J. Grant
Date: August 7, 2025 (Updated: August 9, 2025)
In medicine, the promise of artificial intelligence (AI) is vast: faster diagnoses, personalized treatment, and more efficient health care operations. But what if the very tools designed to improve care are driving a wedge between doctors and patients, creating a profound disconnect where it matters most?
This isn't just a hypothetical question. It's a reality that plays out in exam rooms every day.

Imagine a patient sitting in the exam room as their doctor enters. Almost as an aside, the doctor mentions that they'll be using a new AI scribing app on their phone: it will listen to and record the conversation to help with note-taking. For this patient, who is already mistrustful of new technology, this simple announcement is a significant barrier. They may suddenly worry about privacy, question how the recording will be used, and decide not to disclose critical personal information. The doctor, focused on efficiency, has inadvertently created a clinical chasm—a trust barrier that compromises the very care they're trying to deliver.
New research confirms this chasm is real. A recent study in JAMA Network Open presented 1,276 participants with fictitious physician advertisements. The ads were identical, except for one detail: some stated that the doctor used AI. The results were startling. Participants rated physicians who used AI as significantly less competent, trustworthy, and empathetic. Furthermore, they were less likely to book an appointment with that physician.
This finding exposes a dangerous paradox. As we focus on building clinically sound AI, we risk eroding the very foundation of medicine—the patient-physician relationship.
The Digital Cliff
This problem extends beyond the individual doctor. It’s a systemic risk that creates a digital cliff: a point where brilliant, effective AI tools are developed, but skepticism from both patients and clinicians leads to a critical lack of adoption. For a health tech innovator, this is a direct threat to market success. A tool can be technically perfect, but if no one trusts it, it’s useless.

This issue is amplified in marginalized communities, where a long history of medical mistrust rooted in unethical practices and systemic bias already creates a barrier to care. The introduction of AI—with its own concerns around data privacy and transparency—only adds to this skepticism.
Ultimately, the stakes are highest in critical specialties, such as oncology. When a patient faces a life-threatening illness, their relationship with their doctor is paramount. Introducing a tool that compromises this trust could have dire consequences. The digital cliff is not just an abstract concept; it represents a potential failure to deliver on the promise of health care innovation.
The Path Forward: Building Trust by Design
The path toward responsible AI is not just about technology; it's about building trust as a core part of the product lifecycle. For innovators, this means integrating community engagement and patient feedback into the very foundation of tool development, not as an afterthought. Companies that do this will gain a significant competitive edge.
For health care professionals, this means recognizing that you are not just end-users, but co-designers and advocates for your patients. Your voice and your clinical experience are essential in shaping AI tools that enhance, rather than diminish, the human connection in medicine.
The first step is creating clinically sound AI. The next, and arguably more critical, step is to build trust into its design.
In our next post, we will examine the crucial role of education and inclusivity in bridging this gap and fully realizing the potential of AI in health care.
To learn more about how we can establish a bridge of trust in health care AI, be sure to subscribe to our blog. You'll be the first to receive Part 2 and Part 3 of our series, delivered straight to your inbox.
Want to stay on top of the latest insights in health care AI? Join our community to receive updates on our blog series and other resources.