Chris Lehane has built a formidable reputation as one of the most skillful crisis managers in modern communications: a professional capable of turning even the most damaging headlines into stories that quietly fade from public attention. Once press secretary to then-Vice President Al Gore during the politically charged Clinton years, Lehane later earned distinction as Airbnb's go-to strategist through a seemingly endless run of regulatory showdowns stretching from San Francisco to Brussels. Over the course of his career, he has mastered the subtle art of narrative control, the ability to turn reputational calamity into a coherent, even compelling, message. Yet today, two years into what may be the most formidable assignment of his career, Lehane faces a test unlike any other. As OpenAI's Vice President of Global Policy, he must convince a skeptical world that the company remains genuinely committed to democratizing artificial intelligence, even as its behavior increasingly mirrors that of the very technology conglomerates it once proclaimed itself to be nothing like.
Earlier this week, I shared a twenty-minute onstage conversation with Lehane at the Elevate conference in Toronto. Those fleeting minutes were my only window to push past the meticulously rehearsed corporate soundbites and probe the deeper contradictions eroding OpenAI's carefully crafted public image. It proved a challenging mission, and one that yielded only partial success. Lehane's professionalism is unmistakable: he exudes warmth, projects credibility, and speaks in a calm, reasonable tone that disarms criticism before it solidifies. He acknowledges ambiguity rather than deflecting it, and he tries, perhaps genuinely, to appear human in an industry often accused of detachment. He even confessed to losing sleep, waking in the middle of the night consumed by doubts about whether this technological revolution will truly serve humanity. Yet beneath the empathy and eloquence lies a paradox: good intentions only go so far when the company behind them stands accused of silencing its critics with subpoenas, draining the scarce water and energy of struggling towns, and resurrecting dead cultural icons to bolster market share.
At the center of this multifaceted controversy sits the company's newest flashpoint: Sora. The video-generation platform, introduced only last week, arrived already mired in legal and moral complications. Its outputs appeared to incorporate copyrighted material at a foundational level, a daring if not reckless choice for a firm already entangled in litigation with the New York Times, the Toronto Star, and numerous other publishers. From a strategic perspective, however, the launch was paradoxically brilliant. By limiting access through invitations, OpenAI manufactured scarcity and desire, catapulting Sora to the top of the App Store charts. Users enthusiastically produced digital avatars of themselves, altered likenesses of OpenAI's own CEO Sam Altman, famous fictional figures like Pikachu, Mario, and South Park's Cartman, and even departed celebrities such as Tupac Shakur. The results were as breathtaking as they were provocative.
When I asked Lehane to explain the rationale behind releasing so contentious a product, he turned to familiar corporate rhetoric. Sora, he maintained, is not a gimmick but a transformative "general purpose technology," comparable to epoch-making inventions like the printing press or electricity, an instrument designed to lower creative barriers for those previously excluded by a lack of skill or resources. He even joked that, as a self-proclaimed non-creative, he too could now make videos. Yet beneath this democratizing vision lurked a more troubling reality. At launch, OpenAI put the burden on rights holders to opt out of allowing their content to train Sora, inverting the conventional logic of copyright law, which typically requires explicit permission before use. Then, once it saw how eagerly audiences gravitated toward recognizable copyrighted imagery, the company quietly shifted course and recast its policy as opt-in. Framed euphemistically as iteration, the maneuver was in essence an experiment in limits, an informal test of what the company could appropriate without consequence. Despite rumblings of discontent from the Motion Picture Association and vague legal threats, OpenAI appears to have emerged largely unscathed, at least so far.
These dynamics naturally recall the mounting frustration among publishers and creators who accuse OpenAI of leveraging their intellectual property without fair compensation. When pressed on whether these stakeholders deserve a share of the economic value being generated, Lehane invoked fair use, the American legal doctrine meant to balance the rights of individual creators against the collective pursuit of knowledge, and went so far as to describe it as the secret engine of America's technological dominance. His argument was deft, but it carried an inherent irony. Only days earlier, I had interviewed Lehane's former boss, Al Gore, and realized that any reader could simply ask ChatGPT to summarize that conversation, bypassing my article entirely. I noted aloud that while the system's learning might be "iterative," it also functions as a replacement, one that threatens the economics of traditional media. For the first time in our exchange, Lehane seemed to drop the well-rehearsed rhetoric. He admitted candidly that society has yet to design economic structures suited to this transformation, and that building them will be difficult but not impossible. It was a rare flash of honesty, an acknowledgment that even the architects of this revolution are improvising in real time.
That uncertainty deepens when the conversation shifts from creative rights to infrastructure. OpenAI's operations already stretch into the American heartland, with massive data centers in Abilene, Texas, and Lordstown, Ohio, developed in partnership with Oracle and SoftBank. Lehane often compares access to artificial intelligence to the advent of electricity, arguing that societies that were late to electrify are still struggling to catch up today. Yet the analogy conceals an uncomfortable fact: OpenAI's vast data clusters draw enormous quantities of water and electricity from precisely the kinds of distressed communities that have lagged in economic development. When I asked whether the residents of these towns will share in the benefits or simply bear the costs, Lehane answered with gigawatts and geopolitics. OpenAI, he said, requires roughly a gigawatt of new capacity every week, and he contrasted that with China's addition of hundreds of gigawatts of generating capacity and dozens of nuclear facilities in a single year. In his optimistic framing, this international rivalry could catalyze a rebirth of American industry and a modernization of its energy grid. His optimism was persuasive, even stirring. And yet the question remained conspicuously unanswered: will the families of Lordstown and Abilene face rising electric bills while OpenAI's servers churn out hyperrealistic videos of John F. Kennedy or The Notorious B.I.G.? Video generation, after all, is among the most energy-hungry of AI workloads.
This moral tension turned visceral when I raised a far more personal example. Just the day before, Zelda Williams had taken to Instagram to implore strangers to stop sending her AI-generated depictions of her late father, Robin Williams. Her words were scathing and heartfelt: to her, these experiments reduced human lives to grotesque digital caricatures, "overprocessed hotdogs," as she put it. Confronted with that deeply human pain, Lehane reverted to process language, emphasizing responsible design, testing frameworks, and partnerships with government regulators. He concluded with a familiar refrain: there is, as yet, no playbook for these dilemmas. To his credit, he showed genuine vulnerability at moments, again referencing the insomnia that plagues him as he grapples with the colossal ethical responsibilities of OpenAI's mission. Whether sincere reflection or stagecraft, these confessions painted the portrait of a man aware of the moral gravity of his work.
I left Toronto with the uneasy impression of having witnessed a masterclass in political messaging. Lehane's performance, part sincerity and part calculation, demonstrated his gift for walking the razor's edge between confession and deflection. Yet any admiration for his rhetorical skill was complicated by what happened just days later. Nathan Calvin, a lawyer who works on AI policy at the nonprofit Encode AI, revealed that while I had been speaking publicly with Lehane, a sheriff's deputy had appeared at his home in Washington, D.C., to serve him a subpoena on OpenAI's behalf. The order demanded his private communications with California legislators, college students, and former OpenAI staff. Calvin accused the company of wielding legal intimidation to silence critics, using its ongoing lawsuit with Elon Musk as a pretext to insinuate that his organization was secretly funded by Musk. His account painted a troubling portrait of a company turning legal defense into an instrument of control.
The irony was bitter. Lehane, long celebrated as a maestro of political strategy, was now being described by a public-interest lawyer as the "master of the political dark arts." In Washington, such a label might be half admiring. Inside a company that professes to build "AI that benefits all of humanity," it sounds alarmingly like an indictment. Yet the more consequential truth is not about any single scandal or personality but about the turmoil within OpenAI itself. Even insiders appear conflicted about their employer's evolving identity. In recent days, several current and former employees voiced misgivings online following the release of Sora 2. Among them was researcher Boaz Barak, who also teaches at Harvard and who publicly remarked that although the technology is extraordinary, it is far too early to congratulate ourselves for avoiding the societal harms that plagued earlier digital platforms.
Perhaps the most startling expression of internal doubt came from Josh Achiam, OpenAI’s head of mission alignment. In a series of unusually candid posts, Achiam prefaced his remarks with the acknowledgment that he might be jeopardizing his career, then proceeded to question whether the company risked transforming into a source of fear rather than a beacon of virtue. His statement—that OpenAI bears a duty to humanity so exceptional that any deviation from it constitutes failure—echoed like a conscience emerging from deep within the organization.
An executive publicly wrestling with his company’s moral trajectory represents more than a fleeting controversy; it signals a reckoning. It suggests that even the most accomplished political operators, people like Chris Lehane who have spent lifetimes shaping narratives, may find their talents inadequate to reconcile a widening gap between mission and practice. As OpenAI races toward the frontier of artificial general intelligence, its challenge extends beyond innovation. The true test lies in preserving credibility among its own ranks, convincing both the world and its employees that its commitment to serving humanity remains more than marketing language. In the end, the question is not whether Chris Lehane can refine the message—it is whether that message is still believed by the people who helped create it.
Source: https://techcrunch.com/2025/10/10/the-fixers-dilemma-chris-lehane-and-openais-impossible-mission/