Greetings and a warm welcome to *Regulator*. To those who have already subscribed: your loyalty demonstrates a true dedication to staying informed. If, however, you have arrived here through the winding pathways of the internet, consider this an invitation—indeed, a challenge—to prove your discernment by subscribing to *The Verge* through the provided link. (And to David Sacks, if you happen to be reading: our previous remarks stand as stated—unrevised and unapologetic.)

As of Tuesday, President Donald Trump has declared his intention to sign an executive order—though its precise contents remain undefined—that purportedly aims to extend federal authority over the regulation of artificial intelligence. I describe this initiative with deliberate vagueness for two principal reasons. First, there is no coherent constitutional foundation by which an executive order might legally supersede state-level laws, including those concerning emerging technologies such as AI, which makes the notion precarious from the outset. The draft that leaked from the White House in November only underscored the point, sparking immediate skepticism among constitutional scholars, not to mention the surrounding political turbulence involving figures such as David Sacks.

Second, Trump himself offered little clarity when making the announcement—delivered, as is characteristic of him, via his Truth Social platform. The lack of specificity from the administration underscores a broader tendency within this presidency: one that thrives less on legal or policy coherence than on impulsive assertion and, metaphorically speaking, an endless supply of Diet Coke. Although any formal decree issued under these circumstances is unlikely to survive judicial scrutiny, one can reasonably expect that Trump will pressure his loyalists to act swiftly in implementing his directives, often without hesitation or dissent. In this dynamic, imagine “states’ rights” as a symbolic East Wing and “federal dominance over AI policy” as the ballroom into which power ambitions are swiftly ushered.

The immediate consequences of such executive maneuvering are unlikely to reverberate through Washington overnight. The deeper effects will unfold elsewhere, more subtly, and over time. This week's discussion with Brendan Steinhauser, the CEO and cofounder of the bipartisan Alliance for Secure AI, explores precisely such dynamics. Our conversation centers on whether the presence—or notable absence—of coherent AI regulation might become a decisive issue for voters in the approaching midterm elections. Steinhauser, a Republican strategist based in Austin, brings extensive experience managing campaigns for prominent Texas politicians, including former Representative Michael McCaul, Representative Dan Crenshaw, and Senator John Cornyn—all successful candidates. His resume is further distinguished by his tenure as national director for federal and state campaigns at FreedomWorks between 2009 and 2012, during the early ascendance of the Tea Party movement.

Steinhauser’s deep familiarity with conservative constituencies places him in a rare position: he grasps the sentiments of red-state voters yet has also found substantial common ground with Democrats in co-founding the Alliance for Secure AI, established as a nonprofit organization in July 2025. It remains somewhat astonishing, even in 2025, to find a political coalition whose leadership and staff span both ends of the ideological spectrum—individuals with backgrounds in the Biden administration, Senate Democrats, the DCCC, Texas Republican politics, and the office of House Speaker Mike Johnson. Yet this unusual convergence reflects what one might call a real-world demonstration of “AI horseshoe theory,” where extremes meet in shared concern.

Early polling—conducted by the conservative Institute for Family Studies in conjunction with YouGov—suggests that public opinion, while still nascent, leans strongly against federal interference in state-level AI legislation. Nevertheless, there is mounting evidence that conservative voters are increasingly wary of the AI industry’s growing influence. Steinhauser identifies several intertwined sources of tension: religious anxieties about AI as a quasi-spiritual force, social backlash against its societal consequences, and an uncommon display of state-level defiance against federal Republican leadership advocating for a moratorium.

“I’ve spent two decades advising Republicans, managing campaigns, engaging in grassroots discourse, and helping candidates understand how to connect authentically with their voters,” Steinhauser told me over the phone. “But, frankly, many are failing to anticipate what’s coming in half a year’s time.”

Among the many facets of this conversation, one central theme repeatedly surfaces: the widening gap between rapid technological progress and political preparedness. Steinhauser recounted that in early 2024, public awareness of AI’s implications was surprisingly low—a peripheral issue overshadowed by other national concerns. Only toward the end of that year, following major media coverage by journalists such as Kevin Roose, Ezra Klein, and Ross Douthat, did the broader public begin to grasp AI’s accelerating trajectory. The so-called DeepSeek incident then served as a cultural inflection point, jolting previously indifferent citizens into engagement. For many everyday Americans focused on ordinary life, this event transformed AI from an abstract concept into a tangible force shaping their world.

Within mere months of that turning point, state-level debates had erupted over proposed moratoriums on AI development—proposals met with vehement resistance from governors, legislators, and attorneys general alike. As Steinhauser explained, these state officials, deeply invested in the laws they had painstakingly enacted, saw Washington’s attempt to override them as both a political affront and a constitutional violation. They mobilized across party lines, publicly voicing opposition, leveraging social media, and personally contacting federal representatives to assert their autonomy: *Do not erase the work we have done in our states.*

To the surprise of many observers, much of this legislative energy has come from traditionally conservative regions. In states such as Texas, where a comprehensive AI regulatory framework was enacted earlier this year, motivations are rooted in cultural and moral considerations as much as in economic or political ones. Lawmakers perceive advanced AI as a potential destabilizer of social order—a technology with profound implications for mental health, family stability, and even spiritual life. For many religious Texans, AI’s portrayal as omniscient or godlike offends deeply held theological beliefs and evokes unease about humanity’s role in its own creation.

Beneath these moral concerns lies the constitutional principle of federalism, particularly as articulated in the Tenth Amendment: the idea that powers not delegated to the federal government belong to the states. For conservatives and libertarians, this doctrine represents a protective barrier against perceived federal overreach. The current pushback, therefore, resonates as both a cultural and constitutional defense. As Steinhauser notes, the bipartisan alliances forming around AI regulation—comprising the most unexpected pairings of ideological foes—reflect a moment of unity driven by mutual alarm about the pace and direction of technological change.

In Texas, this bipartisan coalition has achieved remarkable visibility. A recent letter addressed to Senators Ted Cruz and John Cornyn garnered signatures from sixteen state senators—nine Republicans and seven Democrats—who jointly affirmed the need to safeguard state efforts on AI policy. Such cooperation across ideological lines is rare, particularly amid larger national disputes like the ongoing National Defense Authorization Act debates.

Senator Cruz himself, though supportive of an AI moratorium, embodies the internal divisions within the Republican Party on this issue. Known for his ideological consistency yet pragmatic calculation, Cruz approaches AI through multiple lenses: national security, economic competition, and technological sovereignty. While influenced by Silicon Valley figures such as David Sacks, Marc Andreessen, and Joe Lonsdale—who exert outsized sway over conservative tech policy—Cruz has nonetheless expressed genuine concern about the existential risks posed by artificial general intelligence (AGI). His perspective blends libertarian skepticism of regulation with recognition of AI’s strategic stakes, particularly in the global race against China.

Steinhauser, however, cautions that this alignment with industry interests creates serious blind spots. Corporate lobbying and political donations have already poured hundreds of millions of dollars into shaping AI policy narratives. Unless ordinary citizens begin voicing their concerns—through grassroots activism, public comment, and direct engagement with elected officials—many Republican lawmakers will continue deferring to the immediate incentives offered by Big Tech rather than anticipating the long-term economic, ethical, and electoral consequences.

“The problem,” Steinhauser emphasized, “is that too many in the party are focused on the next week, not the next six months.” He warned that if automation accelerates, triggering job losses or economic instability, voters will hold Republican leaders accountable for failing to impose checks on the technology sector. In such a scenario, AI would become both a policy failure and a political liability—its disruptions intertwined with perceptions of mismanagement and misplaced trust in corporate power.

As mainstream awareness grows, partly spurred by recent *60 Minutes* coverage of platforms like Character AI, it is becoming increasingly difficult for anyone to ignore AI’s pervasive presence. People sense, at least intuitively, that they stand at a pivotal moment—unsure whether this technological revolution heralds progress or peril. While most citizens express neither blind faith nor total opposition, they do share a broad consensus: innovation must proceed within boundaries. There is widespread agreement that unchecked acceleration risks repeating the mistakes of the social media era, when platforms expanded without sufficient oversight and the societal fallout became inescapable.

The debate over artificial intelligence—its risks, its scope, and the role of government in its containment—now touches nearly every sphere of public life: politics, religion, ethics, and economics. And as the holiday season recess approaches, that conversation will no doubt continue, oscillating between humor, fear, and fascination as Americans grapple with the immense question of how, and by whom, this powerful new frontier should be governed.

Source: https://www.theverge.com/column/841161/ai-moratorium-midterm-elections-republicans