In the most recent electoral cycle, the cryptocurrency community demonstrated how concentrated financial resources can reshape the political landscape. By channeling vast sums of money into a web of political action committees, or PACs, crypto advocates were able to secure what many observers described as a notably sympathetic administration in the 2024 election. This strategic investment purchased not simply access or influence, but a more congenial policy environment aligned with their interests. Observers now suggest that the rapidly maturing artificial intelligence sector is studying this very playbook, readying itself to deploy similar financial strategies as the nation approaches the upcoming midterm elections.
According to detailed reporting from CNBC, the most significant player in this evolving arena—the Leading the Future PAC—has already emerged with substantial funding and a clear, deliberate agenda. Its first identified target is a Democratic congressional hopeful known for helping to architect legislation imposing new accountability mechanisms on the AI industry. This legislation sought to implement guardrails intended to prevent unregulated or reckless deployment of artificial intelligence technologies. Leading the Future PAC’s roster of backers reads like a roll call of contemporary tech magnates: billionaire venture capitalist Marc Andreessen of Andreessen Horowitz, Greg Brockman, co-founder of OpenAI, and Joe Lonsdale, who helped establish Palantir. Collectively, these influential figures have already endowed the organization with more than one hundred million dollars, an impressive war chest meant to bolster candidates prepared to cooperate closely with AI corporations—meaning, in less coded language, to resist heavy regulation.
Yet not all political figures fit that bill. One notable exception is Alex Bores, a member of the New York State Assembly now preparing a bid for Congress in 2026. Bores co-sponsored the Responsible AI Safety and Education (RAISE) Act, landmark state legislation conceived to introduce the first meaningful standards compelling major developers of artificial intelligence to identify potential risks, formulate mitigation strategies, and refrain from releasing models that could inflict widespread or unintended harm. Opinion polls indicate broad support among New Yorkers for this measure, which moved swiftly and without serious opposition through both chambers of the state legislature. Even so, the bill remains in limbo, pending Governor Kathy Hochul’s signature as she weighs the delicate balance between encouraging innovation and confronting the powerful interests of AI firms entrenched within the state.
The provisions of the RAISE Act are hardly draconian—it merely insists on basic safety evaluations and responsible governance procedures that echo long-established norms in other high-risk industries, such as pharmaceuticals or transportation. Nonetheless, the Leading the Future PAC views these requirements as overreach. In statements reported by CNBC, the organization denounced the proposal as an “ideological and politically motivated” intervention that would, in its words, “handcuff” American firms in their pursuit of superintelligent systems. The PAC’s position implies that any regulatory hindrance, no matter how modest, could threaten the race for supremacy in developing next-generation AI technologies. Whether these firms are indeed on the threshold of achieving such breakthroughs—or whether those breakthroughs would serve the public good—seems of little concern to the group’s leadership.
Interestingly, among the PAC’s various arguments, one point carries a kernel of legitimacy. Leading the Future contends that the United States requires a unified federal framework for AI regulation—one consistent national policy rather than a confusing patchwork of state laws. While few would disagree with the need for clarity and uniformity, the reality is that such a cohesive framework does not yet exist, and given the political climate, it is not likely to materialize soon. The Trump administration has shown little inclination toward establishing such oversight structures. Moreover, many of the wealthy individuals funding the PAC maintain close ties to Trump’s inner circle, raising questions about whether the call for federal regulation is motivated by genuine policy concerns or merely serves as rhetorical cover for opposing any form of localized, state-level control. In a strictly technical sense, a total absence of regulation is indeed “consistent”—but it is hardly an arrangement that inspires public confidence outside the technology sector itself.
For Alex Bores, the PAC’s decision to single him out as a target has had an unexpected motivating effect. When confronted with allegations that his bill reflected a lack of understanding about the complexities of artificial intelligence, Bores issued a sharp rebuttal in statements to CNBC. He emphasized his own credentials—holding a master’s degree in computer science, authoring two patents, and accumulating nearly a decade of experience in the technology industry. By highlighting his qualifications, Bores argued that the PAC’s hostility toward technically literate regulators effectively exposes its own insecurities: if industry leaders fear those who actually comprehend their products, perhaps their anxiety stems from what meaningful oversight might uncover. Seizing the moment, Bores converted the controversy into a rallying and fundraising opportunity. On X (formerly Twitter), he appealed to voters for contributions, urging them to resist what he framed as an attempt by Trump-aligned mega-donors to dictate national technology policy. His message—that ordinary citizens, not billionaire interests, should shape the rules governing emerging technologies—has proven to be both pointed and persuasive. In the rapidly converging worlds of politics and technology, that may be one of the most compelling campaign pitches yet.
Source: https://gizmodo.com/move-over-crypto-bros-the-ai-pacs-are-here-to-buy-the-next-election-2000687144