Adam Rodman, an internal medicine physician and Harvard Medical School professor, openly acknowledges turning to ChatGPT when he encounters complex or puzzling clinical questions. Rather than concealing his use of artificial intelligence, he is transparent with patients about how and when the tool is used, and he is careful to keep his prompts free of sensitive or personally identifiable medical information. On one occasion, he recounted, the experience became unexpectedly collaborative: his patient typed additional details into the chatbot himself, and the exchange evolved into a three-way conversation among Rodman, the patient, and ChatGPT, offering a glimpse of how AI might one day function as a mediator within medical consultations.

Rodman is among more than a dozen healthcare professionals who spoke with Business Insider about the challenge of deciding whether and how to integrate artificial intelligence into their daily work. Most have experimented with general-purpose models such as ChatGPT, drawn by their accessibility and flexibility. Yet many quickly discover that such models are not built for medical accuracy; they can generate plausible but incorrect responses, a failure mode commonly called “hallucination.” Seeking greater reliability, some practitioners instead work with specialized medical technology startups. According to Rock Health, digital health startups raised $6.4 billion in venture capital in the first half of 2025, up from $6 billion over the same period a year earlier, with AI-enabled startups capturing 62% of that funding.

Despite these remarkable investments, the field remains in its infancy. Many generative AI tools have yet to undergo rigorous testing in real clinical environments. Consequently, hospitals have initiated extensive pilot programs, deploying such systems internally to examine both their ethical implications and their practical usefulness. Amid this cautious experimentation, a sizable contingent of doctors still hesitates to use AI altogether. According to a recent Elsevier Health survey, 48% of physicians have incorporated an AI tool into their routines, nearly doubling from 26% the year before. This shift indicates rapid adoption, yet also highlights that over half of doctors remain skeptical or undecided.

Among those exploring AI, a common application quickly emerges: automating the laborious task of note-taking. Several doctors shared that while their broader opinions on AI’s safety and merit differ, nearly all appreciate its potential to offload repetitive administrative responsibilities. ChatGPT and its rivals now assist physicians with emails, documentation, and summarizing patient interactions—use cases that save valuable time for actual care.

Major technology companies are aggressively carving out their share of the healthcare market. At a Federal Reserve conference, OpenAI CEO Sam Altman claimed that ChatGPT is already a better diagnostician than most doctors in the world. Google and Microsoft, meanwhile, have been advancing proprietary medical AI models; Microsoft reported that its system diagnosed complex test cases four times more accurately than experienced physicians. Even so, most practitioners interviewed by Business Insider continue to rely on standard chatbots that have not been fine-tuned for medicine, often within enterprise deployments configured so that protected data is never used for model training.

At Boston Children’s Hospital, Chief Innovation Officer John Brownstein pioneered a HIPAA-compliant variant of ChatGPT engineered to safeguard confidential health information. Roughly 30% of the hospital’s employees now make use of this system for professional tasks ranging from drafting recommendation letters and analyzing quantitative datasets to searching extensive internal documentation. Brownstein collaborates closely with AI startups but admits that his standards for partnership have become increasingly demanding. With the advent of modern AI-assisted coding platforms, projects once requiring entire teams of software developers can now be accomplished by a few engineers, prompting some hospitals to develop internal solutions rather than relying on outside vendors. Yet Brownstein cautions that the explosion of startups leveraging the AI label often makes it difficult to separate genuine innovation from opportunistic marketing hype. The deluge of entrepreneurial pitches—“dozens every week,” he says—can be relentless.
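Neither Boston Children’s internal tool nor the enterprise deployments mentioned above are documented publicly in detail, but the general pattern is easy to sketch. The snippet below is a minimal, hypothetical illustration using the OpenAI Python SDK: the gateway URL, placeholder key, and redaction rule are all invented, and a real deployment would pair contractual no-training guarantees with far more rigorous de-identification.

```python
import re

from openai import OpenAI

# Route requests through an institution-controlled gateway (hypothetical URL)
# rather than the public API, so logging and retention are governed in-house.
client = OpenAI(
    base_url="https://ai-gateway.example-hospital.org/v1",  # invented placeholder
    api_key="sk-internal-placeholder",                      # issued by the hospital, not OpenAI
)

# Crude illustrative redaction: strip obvious medical record numbers before anything is sent.
MRN_PATTERN = re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE)

def scrub(text: str) -> str:
    """Remove obvious identifiers from a prompt (toy rule for illustration only)."""
    return MRN_PATTERN.sub("[REDACTED]", text)

def ask(question: str) -> str:
    """Send a scrubbed clinical question through the managed endpoint."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # whichever model the enterprise contract covers
        messages=[
            {"role": "system",
             "content": "You assist clinicians. Never request patient identifiers."},
            {"role": "user", "content": scrub(question)},
        ],
    )
    return resp.choices[0].message.content

print(ask("Patient MRN: 104523. 65-year-old with new ascites and normal LFTs; differential?"))
```

The point of the gateway hop is governance: the institution, not the vendor, decides what gets logged, retained, or blocked.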

David Zhang, a pathologist at Somos Community Care, offered a firsthand demonstration of how imperfect AI output can be. When he submitted a prostate biopsy image to ChatGPT, the model erroneously described it as ductal adenocarcinoma, a malignancy typically associated with the pancreas rather than the prostate. The misclassification underscored the technology’s current limitations. Zhang wryly noted that errors like these are why human expertise remains indispensable: “That’s why we have a secure job,” he said, adding that physicians can still refine and improve the performance of foundational AI models.

For practitioners seeking higher precision than general models provide, a multitude of specialized tools are available. The U.S. Food and Drug Administration’s official database lists over 1,200 AI-enabled medical devices, reflecting the sector’s exponential growth. Yet the abundance of digital solutions has also produced new frustrations. Dentist Divian Patel observed that while AI was initially promoted as a means of simplifying workflow, the current landscape instead resembles a maze of disconnected platforms, each requiring unique credentials—a situation he summarized as “a thousand portals, a thousand logins.” To address this problem, Patel co-founded Trust AI alongside fellow dentist Shervin Molayem, aiming to consolidate various AI-driven functions into one streamlined platform. The venture attracted $6 million in seed funding, according to PitchBook.

Rebecca Mishuris, Chief Medical Information Officer at Mass General Brigham, offers a more reserved perspective. She cautions against succumbing to “shiny object syndrome,” her term for being dazzled by new tools that promise solutions to problems she does not currently have. Similarly, Alan Weiss, Senior Vice President of Clinical Advancement at Banner Health, described the influx of AI offerings from vendors as “almost overwhelming.” At his institution, proposed technologies are vetted by two review teams, one weighing ethical ramifications and the other clinical validity, before adoption.

Many hospitals are also deploying AI for operational tasks such as patient check-ins or processing insurance claims. But not every innovation fulfills its early promise. Rodman, for instance, recalled the excitement in the medical community over automatically generated patient messages—until studies emerged showing that doctors often spent more time revising those AI-written notes than composing them from scratch. What began as an efficiency solution quickly “fizzled out,” as he put it.

Despite differing enthusiasm levels, nearly all physicians interviewed concurred on the transformative potential of one particular application: ambient listening technology. These AI-driven systems record conversations between clinicians and patients and then automatically generate structured summaries, freeing doctors from the cognitive burden of note-taking. According to Mishuris, this capability holds “real transformative power.” Her colleague Carl Dirks of St. Luke’s Hospital in Kansas City views ambient listening as a genuine antidote to clinician burnout; he has observed that for some practitioners, it restores work-life balance and even prolongs their careers by alleviating the intense mental multitasking involved in documentation.
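The vendors behind these systems do not publish their pipelines, but the core pattern they share, transcribing the encounter and then drafting a structured note for clinician review, can be sketched in a few lines. The toy example below assumes the OpenAI Python SDK; the audio file name and prompt are invented, and production systems layer on speaker separation, consent capture, and EHR integration.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Stage 1: speech-to-text on the recorded encounter (file name is invented).
with open("visit_audio.wav", "rb") as audio:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio)

# Stage 2: ask a language model to draft a structured note from the raw transcript.
draft = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": ("Summarize this clinical visit transcript as a SOAP note "
                     "(Subjective, Objective, Assessment, Plan). "
                     "Flag anything ambiguous for the clinician to verify.")},
        {"role": "user", "content": transcript.text},
    ],
)

print(draft.choices[0].message.content)  # the clinician reviews, edits, and signs off
```

Keeping the clinician as the final editor, as in the last line, is the design choice every vendor emphasizes: the model drafts, the doctor signs.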

At BJC Healthcare, Chief Health AI Officer Philip Payne echoed that sentiment, describing the technology as a pathway to “restore human-to-human connection.” In his view, the ultimate purpose of medical AI is to minimize the computer’s intrusion, enabling doctors and patients to engage more directly without the constant distraction of typing into an electronic health record.

Legal and ethical considerations still influence deployment. In states mandating two-party consent for audio recordings, physicians are obligated to disclose the use of listening devices, and many hospitals extend such transparency policies systemwide. Surveys indicate that, when informed appropriately, most patients accept the practice: in one study covering 121 pilot programs, roughly three-quarters of respondents expressed comfort with AI-assisted documentation.

Psychiatrist Farhan Hussain attested to how indispensable these tools can become. In prior roles, he relied on an ambient listening device to record lengthy therapy sessions and automatically produce detailed clinical notes. After moving to a telehealth company that does not yet support such technology, he found himself sorely missing it. Without AI assistance, he lamented, he spends sessions transcribing instead of engaging: “I didn’t go to med school just to be a scribe.”

Investors are also recognizing the potential of this niche. Companies like Ambience Healthcare recently secured $243 million in Series C funding from major venture firms such as Oak HC/FT and Andreessen Horowitz, while Nabla, another competitor, raised $70 million in June led by HV Capital. Physicians now have no shortage of options—Brownstein uses Abridge, Hussain prefers Nabla, and Weiss is simultaneously testing four different ambient documentation systems.

In certain fields, doctors highlight how AI aligns naturally with existing technological sophistication. Cardiologist Francisco Lopez-Jimenez of the Mayo Clinic remarked that his specialty has long embraced innovation, attributing this to cardiology’s deep roots in physics, data analytics, and computer science. “It’s one of the disciplines that pioneered AI use in medicine,” he explained. Pierre Elias, Medical Director of Artificial Intelligence at New York–Presbyterian Hospital, agreed, noting that cardiology and radiology lead the way in FDA clearances for AI systems, though cardiologists’ regular contact with patients gives them a distinct vantage point on integrating these tools compassionately.

Nevertheless, across medicine as a whole, a divide persists between futurists and skeptics. Elsevier Health’s data suggests that while nearly half of clinicians currently use AI, a significant minority—about a quarter—avoid it entirely, even outside professional settings. Some fear ethical pitfalls or an erosion of human judgment. A recent study illustrated one subtle risk: gastroenterologists who had grown accustomed to AI assistance in detecting adenomas during colonoscopy saw their detection rate decline notably when the tool was withdrawn.

Internal medicine doctor Jonathan Simon of Bayhealth represents the cautious camp. He limits his AI use strictly to note automation, avoiding generative systems like ChatGPT or commercial med-tech applications within patient care. Although intrigued by emerging research, he remains concerned that efficiency gains could tempt the healthcare industry toward prioritizing quantity over quality—pushing doctors to see more patients rather than improving care depth. “Mistakes might be rare,” he warned, “but a rare mistake can destroy someone’s life.” In his view, responsible and consistent implementation of AI must remain medicine’s guiding principle as automation continues to advance.

Source: https://www.businessinsider.com/how-doctors-use-ai-chatgpt-including-to-help-avoid-burnout-2025-10