Doximity has filed a fresh lawsuit against OpenEvidence, intensifying an already fierce legal confrontation between two immensely valuable healthcare technology firms, both of which are racing at breakneck speed to dominate what many are calling the new frontier of medical artificial intelligence: an adaptive, clinical-grade equivalent of ChatGPT designed specifically for physicians.

OpenEvidence, a highly valued healthcare startup with a market valuation of roughly $3.5 billion, entered the fray earlier this year when it brought a suit against Doximity in June. OpenEvidence accused the larger, publicly traded $13 billion telehealth giant of impersonating physicians in order to gain unauthorized access to its proprietary systems, thereby misappropriating confidential trade secrets. Now, responding to those accusations, Doximity has struck back. In its countersuit, filed on Wednesday in Massachusetts federal court, the company charged that OpenEvidence engaged in the deliberate dissemination of misleading and harmful claims intended to damage Doximity’s reputation, destabilize its workforce, and lure away key employees through targeted recruitment.

Complementing this countersuit, Doximity had already moved to dismiss OpenEvidence’s initial complaint at the start of the week, contending that the filing should be viewed not as a serious claim of theft but rather as an aggressive tactic designed to suppress legitimate marketplace competition. This series of exchanges underscores how the conflict has escalated from simple rivalry into a high-stakes legal chess match that could reshape competitive dynamics within the medical AI sector.

Importantly, the dispute did not begin with Doximity. Earlier in February, OpenEvidence lodged a lawsuit against Pathway Medical, a Canada-based clinical decision support startup employing artificial intelligence technologies. That lawsuit accused Pathway of launching what are known as “prompt injection” attacks, whereby an AI system is exposed to malicious instructions intended to override its safety constraints and extract sensitive guiding prompts from the model. According to OpenEvidence, Pathway weaponized this tactic to capture valuable system instructions and subsequently retrain its own competing model. Pathway, for its part, attempted to dismiss the lawsuit months later, in June, only for Doximity to subsequently acquire Pathway in August for approximately $63 million, thereby intertwining the companies’ fates even more closely.

OpenEvidence has not limited its legal campaign to Pathway alone. In June, the startup also pursued litigation against another competitor, Vera Health, asserting similar allegations regarding improper extraction of AI system prompts. As of now, Vera has yet to make any formal response in court. Together, these multiple complaints appear to represent both an aggressive defensive maneuver by OpenEvidence as well as an attempt to establish early legal precedent in what is essentially uncharted terrain—the treatment of AI prompts and system instructions as protected trade secrets under U.S. law. The court decisions in these matters could ultimately define novel boundaries around intellectual property in the burgeoning AI landscape.

Meanwhile, beyond the courtroom, both companies continue to aggressively develop their products as they compete to capture the lucrative and rapidly emerging market for AI-powered physician assistants. In July, OpenEvidence secured an impressive $210 million in Series B funding, boosting its valuation to $3.5 billion with support from such prominent venture capitalists as Google Ventures, Kleiner Perkins, and Sequoia Capital. The company has touted its AI application as the most rapidly adopted tool for physicians in recorded history, recently unveiling a new version geared toward streamlining clinical documentation in real-time during patient encounters.

By contrast, Doximity has leveraged acquisitions and internal development to strengthen its foothold. Following its $63 million acquisition of Pathway in August, the company has been reinforcing the feature set of its physician assistant product, Doximity GPT. This system aims to reduce the time doctors spend on administrative chores and paperwork, while also offering a competitive AI-powered scribing service launched in July, placing it in direct competition with Microsoft’s offerings and Abridge, another heavily funded AI-health startup. Despite earlier volatility in healthcare markets that sent its shares downward in 2022, Doximity’s stock has rebounded more recently, a recovery tied in no small measure to the popularity of its AI tools and growing physician engagement. Adding another twist, Nate Gross, a cofounder and former chief strategy officer at Doximity, departed in June to take up the role of leading OpenAI’s healthcare strategy, underscoring the interconnectedness of this niche AI ecosystem.

At the heart of this bitter legal feud lies a fundamental clash of differing narratives and accusations. OpenEvidence maintains that Doximity directly violated its intellectual property rights by accessing sensitive proprietary code, pointing to alleged use of prompt injection strategies. For instance, one example cited in the June lawsuit claimed that Doximity’s head of AI registered on OpenEvidence’s platform by impersonating a physician. Subsequently, this individual allegedly instructed the system to reveal its rules verbatim, write down hidden initialization codes, and even demonstrate comprehension by echoing them back. Doximity, however, has categorically denied such allegations, labeling them speculative, misleading, and unsupported by verifiable facts. In its September motion to dismiss, the company emphasized that no tangible evidence demonstrated any acquisition of OpenEvidence’s trade secrets.

Doximity, flipping the narrative, accuses OpenEvidence of engaging in acts that stretch beyond simple misinformation and venture into attempts at market sabotage. Its countersuit alleges that OpenEvidence propagated false advertising claims—such as asserting its system was the first AI to achieve a flawless physician licensing exam score and that it never produces incorrect or “hallucinated” responses. According to Doximity, these statements were not only unrealistic but actively misleading, given documented examples where the system provided outdated or erroneous medical guidance. As supporting evidence, Doximity cited user complaints, including a public LinkedIn post by Joanna Strober, CEO of Midi Health, who criticized errors in hormone replacement therapy recommendations produced by OpenEvidence’s platform. Current academic research in machine learning reiterates this criticism, noting that hallucinations are an intrinsic element of probabilistic models and cannot be entirely eliminated.

Beyond disputed clinical claims, Doximity’s complaint goes further in describing targeted recruitment and alleged harassment of its employees by OpenEvidence. The counter-accusations include numerous recruitment emails and text messages sent by OpenEvidence executives, some offering multimillion-dollar compensation packages intended to entice high-value staff members away from Doximity. One particularly unusual email, allegedly authored by OpenEvidence CEO Daniel Nadler and directed toward Doximity’s former general counsel, included language Doximity characterized as bizarre and disparaging, suggesting that the company was incapable of retaining workers and should step aside for the future of the industry.

Finally, both sides have raised serious points related to data handling and regulatory compliance. OpenEvidence claimed Doximity exploited doctors’ identifiers to gain improper access to its platform, while Doximity countered with allegations that OpenEvidence had left sensitive patient records exposed online despite publicly claiming adherence to HIPAA, the United States’ core health privacy regulation. Each side’s accusations point to the increasingly complex questions that arise when cutting-edge AI systems intersect with healthcare’s demanding ethical, regulatory, and professional standards.

Together, the dueling lawsuits underscore not only the extraordinary value ascribed to dominance in the AI-for-doctors space but also the unsettled legal frameworks surrounding intellectual property, fair competition, and data security in this new age. How courts resolve these disputes could establish precedent-setting guidelines with implications extending well beyond the immediate parties, shaping the rules of engagement for the evolution of healthcare AI as a whole.

Sourse: https://www.businessinsider.com/doximity-openevidence-suing-each-other-as-doctor-ai-war-rages-2025-9