Today, I am speaking with Sean Fitzpatrick, the CEO of LexisNexis, one of the most important pillars of the legal ecosystem. For decades, going back to when I attended law school, LexisNexis has essentially served as the legal profession's definitive library: the place every aspiring or practicing lawyer turned for comprehensive access to case law, legislative history, statutory interpretation, and judicial precedent, and the repository of knowledge that allowed legal professionals to build sound arguments and represent clients with authority and precision. There is hardly a modern attorney who has not used LexisNexis; it is such an omnipresent tool that it functions as basic infrastructure for the legal field, every bit as essential to daily practice as email or a word processor.
However, as we stand in 2025, even long-established enterprise organizations that maintain massive, proprietary databases are finding themselves irresistibly drawn to the transformative possibilities of artificial intelligence—and LexisNexis is no exception. This technological gravitational pull is visible right from the outset of my conversation with Fitzpatrick. When asked to summarize what LexisNexis represents today, the first word that came from his mouth was not “law” or “data,” as one might have predicted, but rather “AI.” The company’s newest flagship innovation, an artificial intelligence system named *Protégé*, is designed to transcend the limits of traditional legal research. Its overarching goal is not merely to retrieve existing legal authorities but to assist attorneys in authoring the complex, persuasive documents they submit in court—briefs, motions, and supporting memoranda reflecting precise legal reasoning directly relevant to their arguments.
This development signals a monumental shift because artificial intelligence, to date, has injected as much disorder and unpredictability into the courtroom as it has brought convenience elsewhere. Reports have surfaced with alarming regularity about lawyers being reprimanded for submitting briefs generated through unreliable AI programs that fabricate case citations or reference laws that do not actually exist. In a few notable incidents, courts have even had to withdraw decisions after it was discovered that the presiding judges or clerks had themselves relied on flawed AI-generated information, leading to opinions that cited imaginary plaintiffs or non-existent precedents. Sean Fitzpatrick anticipates that, if this recklessness continues, it may only be a matter of time before an attorney faces disbarment as a result of careless or unverified AI use, an event that could permanently alter the ethics and practice of law.
Against this backdrop, LexisNexis’s central promise for Protégé rests on an attribute as old as the legal system itself: accuracy. The company vows that every output from its AI tools will be firmly anchored in verified, authoritative legal sources, free from fabricated content, and far more dependable than the general-purpose models circulating on the open internet. Fitzpatrick explains that this standard of trustworthiness is sustained through rigorous internal review: a surprisingly large number of licensed attorneys, far more than initially projected, have been hired in-house to evaluate, validate, and refine the AI's work product before it reaches customers. Their specialized oversight acts as a safeguard, ensuring the system's recommendations maintain the highest professional and factual integrity.
But my curiosity extended beyond the technical details: I wanted to understand Fitzpatrick's philosophical perspective on how tools like Protégé might ultimately reshape the profession of law itself. If machine learning systems remove the need for junior associates to conduct exhaustive research or draft motions from scratch, how will these younger lawyers acquire the foundational reasoning skills that once defined their apprenticeship years? Without the opportunity to learn through immersion in the painstaking process of legal work, what becomes of the pathway that traditionally turns novices into seasoned litigators and partners? The conversation thus veered into profound territory: the future structure of legal education and training, and, perhaps most intriguingly, the growing possibility that both the writing and the reading of judicial documents might soon be aided, if not entirely mediated, by AI. If AI drafts court submissions and judges rely on analogous technologies to evaluate those submissions, could we be approaching a point where algorithmic exchanges replace human interpretation at the heart of justice itself?
Our discussion deepened further when I pressed Fitzpatrick on the intersection of technology and jurisprudence, particularly regarding how conservative judges have incorporated digital tools into a judicial philosophy known as *originalism*. This school of thought asserts that legal interpretation must remain bound to the intent and meaning of laws as understood at the time of their enactment. In some jurisdictions, judges have already begun allowing automated linguistic-analysis systems to approximate “original meaning,” utilizing corpora of historical language data. Such computational methods, combined with AI’s rapid capacity to assess centuries of text, accelerate a trend that is already reshaping constitutional interpretation—especially in an era when the Supreme Court itself appears both deeply partisan and willing to revisit foundational precedents once thought immovable.
To illustrate these tensions in practice, I asked Fitzpatrick for a live demonstration of Protégé on a legal research question about one of the most contested issues in current U.S. politics: birthright citizenship. The scenario is telling: an issue apparently settled for over a century, yet now reexamined under the Trump administration's renewed legal agenda. Fitzpatrick gamely complied, showing how LexisNexis's transformation from a research platform into a provider of AI-supported legal reasoning could have profound systemic consequences. The demonstration underscored how even a company historically synonymous with *finding* the law is now inching closer to *interpreting* it algorithmically.
Throughout this expansive conversation, Fitzpatrick emphasized prudence, transparency, and responsibility. He described how LexisNexis grounds every AI output in its vast store of over 160 billion legal documents, reinforcing each result with traceable citations verified by what the company calls a “citator agent.” This subsystem evaluates whether cited case law remains current, valid, and authoritative, functioning almost like a built-in digital editor that guards against the hallucinations plaguing consumer AI products. The system also respects the ethical boundaries imposed by client confidentiality, treating privacy and data protection as fundamental priorities: unlike public AI models trained indiscriminately on the open web, Protégé operates within controlled, secure environments where client data remains confidential.
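To give a concrete sense of what a citator-style check might look like, here is a minimal, hypothetical sketch in Python. It is not LexisNexis's implementation; the data structures, status labels, and the tiny in-memory index are invented purely to illustrate the general idea of flagging citations in a drafted document as verified, no longer good law, or unresolvable before it is filed.

```python
from dataclasses import dataclass

# Hypothetical citation record; field names are illustrative, not LexisNexis's schema.
@dataclass
class Citation:
    case_name: str
    reporter_cite: str
    status: str  # e.g. "good_law", "overruled", "superseded"

# Toy stand-in for an authoritative citator index, keyed by reporter citation.
CITATOR_INDEX = {
    "576 U.S. 644": Citation("Obergefell v. Hodges", "576 U.S. 644", "good_law"),
    "410 U.S. 113": Citation("Roe v. Wade", "410 U.S. 113", "overruled"),
}

def check_citations(draft_citations: list[str]) -> list[tuple[str, str]]:
    """Return a verdict for each citation found in a drafted document."""
    report = []
    for cite in draft_citations:
        record = CITATOR_INDEX.get(cite)
        if record is None:
            # Citation not in the authoritative index: possible hallucination.
            report.append((cite, "NOT FOUND: unverified citation, do not file"))
        elif record.status != "good_law":
            report.append((cite, f"WARNING: {record.case_name} is {record.status}"))
        else:
            report.append((cite, f"OK: {record.case_name} remains good law"))
    return report

if __name__ == "__main__":
    for cite, verdict in check_citations(["576 U.S. 644", "410 U.S. 113", "123 F.4th 999"]):
        print(f"{cite}: {verdict}")
```

The design point is that verification runs against an authoritative index rather than the language model's own output, which is what allows fabricated or outdated citations to be caught before a document reaches a court.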
Yet behind all this technical rigor lies a more existential question: if AI streamlines so much of the profession’s intellectual labor, what happens to the human element of law—the reasoning, empathy, and interpretive subtlety that give the system its legitimacy? Fitzpatrick’s stance is that AI should serve as an *augmenter* rather than a *replacement* for legal expertise. He envisions AI drafting an initial layer of analysis or phrasing, which the attorney then scrutinizes and perfects. In his view, the technology liberates practitioners from the repetitive drudgery of document preparation, allowing them to engage more deeply with judgment-based, strategic, and ethical decisions—the aspects of law that machines cannot emulate.
As we concluded, the magnitude of LexisNexis’s undertaking became clear: it is no longer just a digital archive of legal thought but a system actively participating in the creation of new legal outputs. The company operates within an intricate global framework—a matrix organization under its parent, RELX—where distinct regional databases for common-law and civil-law systems coexist, sharing core technological infrastructure while respecting local legal taxonomies. Through careful localization and consistency in data structuring, LexisNexis ensures that AI-powered tools serve attorneys in multiple jurisdictions without conflating their legal traditions.
Our exchange returned, ultimately, to a larger truth: the convergence of law and AI raises not just technical but moral questions. Who bears responsibility when an AI-generated argument influences judicial reasoning or when precedent is reconsidered through algorithmic interpretation? Fitzpatrick maintains that LexisNexis’s role is not to shape outcomes but to supply verified information transparently and responsibly. Yet, as these systems grow capable of producing reasoned drafts and persuasive legal analyses, one cannot help but sense the approach of a delicate frontier—where technology aids justice but must never usurp the human judgment on which justice relies.
Source: https://www.theverge.com/podcast/807136/lexisnexis-ceo-sean-fitzpatick-ai-lawyer-legal-chatgpt-interview