ZDNET’s key takeaways
- A growing number of people are placing their trust in AI for guidance about their health.
- While AI can produce impressively thorough explanations, it remains far from infallible and can make significant errors.
- One experienced physician offers her professional perspective on the promise and pitfalls of depending on the technology.

Health information today is more accessible than at any other point in history. Virtually anyone with an internet connection can search for medical insight, regardless of the author’s expertise or the reliability of the source. This widespread availability has profoundly influenced not only how individuals seek medical guidance but also how they perceive the authority and trustworthiness of healthcare professionals. Paradoxically, this flood of information has coincided with an erosion of confidence in the healthcare establishment itself. According to research from the Annenberg Public Policy Center, public trust in major federal health agencies—the Centers for Disease Control and Prevention, the Food and Drug Administration, and the National Institutes of Health—has dropped by roughly five to seven percentage points over the past year, illustrating a notable shift in public sentiment.

Whether intentionally exploiting this decline or simply responding to consumer demand, the technology sector has found fertile ground in providing digital substitutes for traditional medical consultations. AI systems, which are perpetually available, cost little or nothing, and deliver instantaneous responses, have naturally become an appealing source of health advice. A recent Annenberg survey found that 63% of respondents considered AI-generated medical information dependable. In parallel, leading AI companies such as Google, OpenAI, and Anthropic are developing large language models tailored specifically for healthcare professionals, while reports suggest that Apple may be designing its own medical AI. Wearable technology companies like Oura are also joining the trend, recently launching a women’s health-focused language model trained on clinical data.

For Dr. Alexa Mieses Malchuk, a family physician, the emergence of such technology has transformed how her patients interact with her and, by extension, how she practices medicine. AI’s appeal lies in its capacity to articulate detailed, seemingly comprehensive answers to almost any question about human health. However, its confidence can mask serious inaccuracies. In a discussion with ZDNET, Mieses Malchuk explored both the strengths and vulnerabilities of these systems and offered measured advice on how patients should integrate AI into their approach to wellness.

How she uses AI
Unlike skeptics who dismiss AI in clinical settings, Mieses Malchuk recognizes its practical value in handling tedious but essential tasks. She uses AI-driven tools to organize administrative workflows, such as prioritizing patient messages, generating preliminary advice before appointments, and assisting with scheduling. Similar products keep arriving: both Amazon and Google recently announced healthcare-oriented software intended to streamline documentation, appointment booking, and medical coding. For the many physicians who report that paperwork consumes more time than direct patient interaction, such technological support offers meaningful relief. As Mieses Malchuk observed, improvements like these are exciting innovations that simplify the work of primary care doctors, though they do not eliminate the need for human oversight and discernment.

AI as a springboard
When advising patients, the doctor advocates using AI as a launchpad rather than a definitive answer source. While it can be gratifying to receive an immediate, well-formatted response from a chatbot, she stresses that these systems are not diagnostic instruments: they lack the contextual understanding required to distinguish benign symptoms from serious conditions. Most users, in turn, lack the medical training to detect when an AI’s suggestions diverge from clinical reality. Compounding the problem, people often omit critical details about their situation, prompting the AI to deliver guidance that is inaccurate or misleading. As Mieses Malchuk succinctly puts it, the answers we get are only as sound as the questions we pose.

She emphasizes that access to AI should not be restricted to medical professionals—patients deserve to use these technologies—but it should complement, rather than replace, the partnership they have with their primary care physicians. By working with their doctors to interpret and verify online findings, patients can harness the convenience of AI while avoiding the dangers of misinformation.

Over time, she has noticed that many patients now arrive at her office hesitant to admit they have consulted AI resources, yet more convinced than ever that they already know their diagnosis. This newfound confidence, though understandable, can become problematic. In medicine, absolute certainty rarely exists, and while the democratization of information empowers individuals, it may also produce misplaced assurance. Mieses Malchuk worries that AI systems, such as ChatGPT, might create a false sense of security—encouraging people to believe they can forgo seeing a doctor altogether. Such complacency could delay critical interventions, especially when early detection is crucial.

A recent study published in Nature supports these concerns. In simulated high-risk emergencies, ChatGPT undertriaged more than half of the cases, instructing users to seek care within 24 to 48 hours rather than go immediately to an emergency department. The study’s authors cautioned that such misjudgments could undermine safety and that extensive clinical validation is needed before AI triage systems are adopted for widespread consumer use.

How AI can help patients
Despite these risks, Mieses Malchuk sees many constructive ways in which AI can enhance patient well-being. The tools can be remarkably helpful in areas of general wellness, for example by offering dietary guidance to someone newly diagnosed with celiac disease or designing customized meal and exercise plans. For the average person without a biomedical background, AI can serve as a creative and accessible personal coach, generating detailed suggestions and actionable strategies for improving everyday health habits. Nonetheless, she insists that diagnosis and treatment must remain the province of licensed professionals.

In her view, the growing mistrust of the healthcare system is a troubling development. Physicians dedicate their careers to upholding the principle of “first, do no harm,” yet some alternative resources may inadvertently undermine that trust by giving users a deceptive sense of control. While technology offers remarkable opportunities for empowerment and education, Mieses Malchuk concludes that it should ultimately function as a bridge connecting patients more effectively with their doctors, not as a barrier that isolates them from essential medical care.

Source: https://www.zdnet.com/article/good-bad-ugly-of-ai-healthcare-according-to-a-doctor-who-uses-ai/