Greetings, and a warm welcome back to another edition of Regulator. It has been an eventful two weeks since our last dispatch graced your inboxes, a brief hiatus that, fortunately for those who follow the ever-evolving power dynamics between Silicon Valley and Washington, did not coincide with a cease-fire between Big Tech and Big Government. In fact, the confrontation has only intensified, with new developments adding layers of intrigue to an already complex struggle. So let us dive directly into the unfolding drama and dissect what has emerged during this brief but consequential interval.

Over the past week, I found myself engrossed in verifying whispered rumors circulating through the policy and technology corridors of Washington: reports suggesting that former President Donald Trump was poised to authorize an executive order advancing one of the AI industry’s most coveted ambitions—nationwide legal preemption. Such a measure would effectively strip individual states of the ability to enact their own artificial intelligence laws, consolidating regulatory power within the federal government. My task involved reaching out to a network of informed contacts, hoping to piece together how exactly the Trump administration intended to navigate this potential move. Which agency, for instance, would assume the lead? What constitutional or statutory rationales would underpin such a sweeping decree? And most critically, how would this maneuver align—or collide—with congressional efforts to embed similar restrictions within the National Defense Authorization Act?

Then came the turning point: I obtained a copy of the draft executive order itself. The document’s sudden appearance hinted that someone within the administration harbors a deep animosity toward David Sacks, the billionaire venture capitalist serving as Trump’s Special Advisor on AI and Crypto. Although Sacks does not hold a permanent government position—his temporary appointment echoes Elon Musk’s earlier limited advisory role—he has nonetheless emerged as one of the most forceful architects shaping the administration’s technology and cryptocurrency policies. Trump’s recent comments endorsing federal preemption over AI regulation bear his unmistakable imprint.

Unlike the turbulence of Trump’s first term, today’s White House has become remarkably disciplined in controlling internal information flow. Gone are the days when staffers incessantly leaked salacious tidbits to favored journalists, each leak an attempt to undermine rivals or curry favor with the president. During that earlier era, career civil servants quietly texted reporters in distress, conservative media figures gleefully recounted late-night chats with Trump himself, and factions within the administration—New York elites, Bannon-inspired populists, and establishment conservatives—waged continuous political warfare both with one another and against the Democrats.

In this second administration, however, leaks are rare and carry symbolic weight. Their rarity reflects the environment of unwavering personal loyalty that now defines Trump's leadership team. Having gutted significant portions of the bureaucratic apparatus, he has surrounded himself with subordinates whose primary qualification is allegiance. Therefore, when a document does escape the walls of "Trumpworld," it signals that someone has weighed the cost of betrayal and decided that undermining a rival was worth the personal risk.

Now, this particular draft order, if ever formalized, might not have explicitly prohibited state-level AI laws. Yet it would have transferred enough discretionary power to the executive branch to deter states from legislating independently. In the analysis below, I consult Charlie Bullock, a senior research fellow at the Institute for Law and AI, who meticulously deconstructed the potential effects of this executive order. He explained how the federal government, under the proposed framework, could punish dissenting states through litigation, financial coercion via withheld grants, or enforcement actions brought through the Federal Trade Commission.

While Bullock acknowledged that the order itself might not withstand judicial review, he emphasized the practical difficulty of resistance. A state reliant on essential broadband funds, for example, might hesitate to challenge federal withholding policies even if eventual court victories were possible. The long delays inherent in litigation could discourage lawmakers from enacting any legislation contrary to the administration’s will.

Politically, the strategy exemplifies Trump's governing philosophy: act decisively with executive authority first, then address questions of legality later. Precedent suggests that several similarly expansive orders emerged from this White House without leaks, each embodying a disregard for procedural caution. Yet what distinguished this particular document was its unambiguous empowerment of one figure, the Special Advisor for AI and Crypto, David Sacks, by mandating that nearly every decision-making process consult his office. That centralization of influence seems to have been provocative enough to inspire a defiant insider to break ranks and leak the text.

Before delving into the political and legal ramifications of this potential power grab, a quick glance at parallel developments in technology journalism offers context. Reports such as Justine Calma's investigation into the environmental trade-offs of chip manufacturing in the Arizona desert, Robert Hart and Thomas Ricker's examination of generative AI's capacity for conspiracy creation, and Mia Sato's chronicle of the music industry's uneasy truce with AI startups all illuminate the persistent tension between innovation, ethics, and regulation. Lauren Feiner's coverage of the FCC's retreat from cybersecurity safeguards, alongside Justine Calma's assessment of the deflated UN climate talks, underscores the broader atmosphere of regulatory ambivalence defining this moment.

Returning to the executive order itself, I engaged Bullock in a detailed conversation regarding its implications. He clarified that an executive order, unlike legislation, lacks inherent power to impose a true moratorium on state laws. It cannot autonomously overwrite state sovereignty; instead, it articulates federal policy preferences and directs executive agencies—like the Department of Justice—to pursue certain enforcement strategies. In this instance, the proposed mandate would have required the DOJ to assemble a specialized task force dedicated to suing states whose AI laws contradict the administration’s stance. Though symbolically assertive, such directives ultimately operate within the boundaries of federal authority and cannot expressly nullify state statutes.

Bullock further reasoned that, rather than extinguishing state initiatives outright, the order’s greatest potency lies in its capacity to chill legislative momentum. Section 5, which contemplates withholding federal funding from noncompliant states, represents a strategic use of financial pressure. States desperately dependent on federal broadband subsidies, for instance, may decide that challenging Washington’s policy would prove fiscally reckless, even if constitutional analysis could ultimately vindicate them.

In practice, therefore, the true influence of this order would manifest indirectly: by instilling hesitation, establishing bureaucratic uncertainty, and dissuading state legislators from exercising regulatory initiative. Bullock cautioned that while one could imagine the DOJ’s task force identifying creative legal theories to challenge state AI laws, the arguments outlined in the leaked draft seemed weak and likely to falter in confrontation with determined states like California—jurisdictions both eager and politically incentivized to oppose the Trump administration’s centralized approach to AI governance.

Equally notable was the executive order's invocation of the Federal Communications Commission. Section 6 instructed the FCC's chairman, in consultation with Sacks, to initiate proceedings exploring federal disclosure standards for AI systems, potentially superseding inconsistent state requirements. While opening such a proceeding is legally permissible, transforming it into enforceable policy would demand congressional authorization that, at present, the FCC simply lacks. Telecommunications law experts, according to Bullock, agree that the agency's statutory authority does not encompass AI regulation.

The Federal Trade Commission's role in this document similarly drew scrutiny. The order appeared to empower the FTC to treat certain state-level "algorithmic discrimination" laws as deceptive commercial practices, effectively asserting that forcing AI models to generate altered or "untruthful" outputs constitutes deception under the FTC Act. This reinterpretation of consumer protection law, Bullock argued, is unprecedented. Never before has the FTC claimed that states themselves engaged in "deceptive acts" merely by regulating algorithmic transparency or bias mitigation.

More formidable still, the order concentrated additional power in the hands of the Secretary of Commerce. Section 4 directed the department, in consultation with other White House offices, to compile a list of “onerous” state laws—those deemed inconsistent with the president’s stated policy agenda. That list would then guide enforcement actions elsewhere in the order: identifying which states might face lawsuits, forfeited broadband grants, or broader restrictions across federal discretionary funding.

Indeed, Section 5(b) revealed perhaps the boldest ambition. Alongside targeting BEAD (Broadband Equity, Access, and Deployment) allocations, amounting to more than forty billion dollars, the order compelled all federal agencies to review every discretionary grant program and determine whether funds could legally be withheld from states with disfavored AI policies. This sweeping directive effectively entangled nearly every area of federal-state cooperation, from transportation subsidies to education grants, in potential political retribution. Bullock noted that while courts have previously rebuffed similar attempts to condition funding on unrelated state policies, such as immigration enforcement, the chilling effect alone could influence state behavior, given the immense financial stakes at play.

Although I may have been away from the newsroom for two weeks, I hardly tuned out from the whirlwind of political theater. Among the headlines that dominated attention were the unprecedented Oval Office meeting between New York City’s mayor-elect Zohran Mamdani and President Trump, and Representative Marjorie Taylor Greene’s surprising announcement of her forthcoming resignation from Congress. These developments, though peripheral to the AI policy saga, offer insight into the volatile intersection of personality-driven politics and institutional power—a theme threaded through every moment of this administration.

As always, one can follow these unfolding narratives—and the authors chronicling them—to ensure future updates land directly in your preferred feed. For policymakers, technologists, and citizens alike, the stakes of this conversation are immense: the future of artificial intelligence regulation may determine not only the trajectory of innovation but also the evolving balance of power between the private sector and the state. Regulator, as ever, remains your guide through that charged frontier.

Source: https://www.theverge.com/column/829938/leaked-ai-executive-order-analysis