Greetings and a warm welcome to the final 2025 edition of *Regulator*. As the year comes to a close, it feels fitting to pause and reflect on what has unfolded across the political and technological landscape. For those who have yet to join the community of *The Verge* subscribers, there is still time to redeem yourself before the 2026 “naughty list” catches up with you: subscribe at the link provided. And to those who already read and support *The Verge*, your trust and curiosity are genuinely appreciated; that kind of engagement is what makes thoughtful journalism possible.
Just last week, I was invited to appear on *The Brian Lehrer Show* on WNYC, where I discussed my recent reporting on President Donald Trump’s efforts to prohibit individual states from drafting their own artificial intelligence legislation. These attempts, which could effectively centralize control over one of the most consequential technologies of our time, deserve serious scrutiny. Opportunities like this radio appearance are rare for me, and I find them uniquely rewarding. Unlike cable news segments, which often compress a discussion into a fleeting ninety seconds, barely enough time to articulate a coherent thought, the radio format fosters a slower, more reflective exchange. Podcasts, while looser and more conversational, can easily become spaces tailored to insiders comfortably speaking in shorthand. But radio, particularly live radio, has a distinctly democratic quality: everyday listeners call in to question, to challenge, and to share firsthand how complex political and technological decisions ripple through their daily lives. It is a humbling reminder that beyond Washington’s insular political conversations are real people living with, and reacting to, the policies we discuss.
During that broadcast, a woman from outside the Beltway called to ask an insightful question that struck right at the intersection of technology and law. She wanted to know whether Congress had begun to consider legislation that addresses so-called “digital twins”—a form of generative AI designed to imitate human behavior and now increasingly used by corporations to manage customer interactions. More broadly, she wondered about “agentic AI,” systems capable of executing tasks autonomously, often at costs far lower than employing human workers. Her question pushed me to mentally survey the policy developments I’d come across: Was there any state or federal statute that explicitly governed the use of digital twins? The answer, unfortunately, was no. The closest precedent might be Colorado’s anti-bias initiatives targeting AI in employment decision-making, but even those laws stop short of covering the post-hiring uses of such technologies. The exchange vividly highlighted a gap between technological innovation and the government’s ability to regulate it.
Over the past year, my reporting has repeatedly returned to the spectacle of Washington’s political theatrics as refracted through the lens of the technology industry. I have documented how corporations have sought to circumvent lobbying restrictions under the pretense of “donations” to Trump-aligned nonprofits, how far-right online personalities have maneuvered their way into White House circles to nudge policy levers, and how figures like Elon Musk have found themselves entangled in the melodramatic intrigues of Trumpworld’s internal power struggles. Beneath all of this high drama lies a consistent theme: the ongoing political repositioning of artificial intelligence. Rather than merely lobbying or campaigning, many major tech players have pursued a far more audacious goal—shaping, and in some cases dismantling, the very regulatory frameworks that could limit their power.
Traditional lobbying, of course, has long been a staple of American political life. Massive financial contributions, the establishment of AI-focused super PACs, and targeted ad campaigns have become familiar tools in this perennial game. Yet what we are witnessing now represents something different: an effort not simply to influence the rules but to rewrite, or entirely erase, them. Tech companies have pressed Congress to preempt states from crafting independent AI regulations while offering no substitute federal safeguards in return. When that legislative strategy faltered, the same interests successfully lobbied the president to issue an executive order penalizing states that attempt to enforce their own AI-related laws. Elsewhere, they have made moves toward placing the Library of Congress, the institution that houses the US Copyright Office, under their sway, aiming to reshape the contours of intellectual property protection in the age of generative content. Some advocates have even floated convoluted legal theories suggesting that the Federal Communications Commission’s authority over telecommunications could be stretched to bring AI under federal control. The justification offered for all of this is competitive necessity: to match China’s acceleration in AI development, the United States must loosen its own constraints. Yet conspicuously absent from these arguments is any serious proposal addressing the tangible human costs of this rapid technological shift.
Across the political spectrum, polls reveal a shared unease about artificial intelligence. Workers in multiple sectors are watching their jobs vanish as AI-driven automation becomes not a prediction but a lived reality. Reports continue to accumulate detailing the emotional and psychological effects of generative AI tools, especially among younger users who interact daily with systems capable of mimicking empathy while manipulating attention and behavior. The physical infrastructure that sustains this digital expansion, vast networks of energy-hungry data centers, carries staggering environmental consequences. On the geopolitical front, hostile actors have already begun weaponizing AI tools for surveillance, propaganda, and cyber operations. And looming beyond these immediate concerns is the darker “doomer” hypothesis: that advanced AI, if left unchecked, may ultimately pose an existential risk to humankind.
When I joined *The Verge* in February—barely a month after President Trump’s inauguration, during a period when Musk and other prominent technologists were exerting palpable influence over the federal machinery—I articulated a guiding principle for our political coverage: technological transformation inevitably alters human behavior, and in turn, transformed behavior reshapes governance and power. Initially, I imagined that the renewed populist energy propelling Trump back into office would translate into a backlash against large technology firms. At that moment, it seemed plausible that a populist president might challenge the influence of Silicon Valley, perhaps even seeking to curb its reach. Yet as the months passed, the dynamic inverted. The very electorate once skeptical of elite technological power now finds its daily existence increasingly molded by artificial intelligence—an abstract, invisible force modifying economies, communication, and identity itself. Meanwhile, the administration appears content, even eager, to assist the same billionaire innovators in entrenching that influence further.
In this week’s collection of stories, several pieces capture these tensions from different vantage points. In “Feeding the Machine,” Josh Dzieza and Hayden Field examine how frontier laboratories such as OpenAI and Anthropic, locked in the race to achieve so-called artificial general intelligence, are consuming unprecedented amounts of data, a process that has birthed an entire secondary industry of obscure data vendors reaping enormous profits. Over at *Decoder*, Stack Overflow CEO Prashanth Chandrasekar reflects on how generative AI transformed his company’s ecosystem, framing it as nothing less than an existential turning point for the platform. Lauren Feiner’s investigation into the arcane inner workings of the FCC reveals that, despite the release of a thousand pages of internal records related to DOGE, much remains opaque as Brendan Carr prepares to testify before Congress. Meanwhile, Justine Calma’s report on the solar sector illustrates how renewable energy companies are scrambling to outpace political hostility and the expiration of key tax incentives. Elsewhere, Hayden Field chronicles parents urging New York’s governor to sign what they call a minimalist yet essential AI safety bill, legislation they hope will set a precedent nationwide. And finally, Elissa Welle highlights a distinctly physical problem within the virtual revolution: the sheer weight of AI hardware, as traditional data centers literally buckle under dense racks of GPUs, fueling a nationwide boom in new infrastructure construction.
As we head into the holiday season, *Regulator* will pause publication for two weeks, returning on January 6th with renewed focus and, no doubt, an abundance of developments to unpack. Until then, and in the spirit of seasonal levity, we leave you with screenwriter Steven E. de Souza’s definitive statement on *The Discourse* surrounding *Die Hard*, a tongue-in-cheek cultural debate that resurfaces with admirable regularity. And in keeping with Merriam-Webster’s recently crowned “Word of the Year,” allow me to wish all our readers a very merry *slop-mas* and an equally *happy slop year*.
—Tina Nguyen
Source: https://www.theverge.com/column/845955/donald-trump-big-tech-2025