A recently established pro-artificial intelligence super political action committee, backed by the venture capital firm Andreessen Horowitz and OpenAI president Greg Brockman, has identified its first political adversary: New York State Assembly member Alex Bores and his current campaign for Congress. This PAC, named *Leading the Future*, emerged in August with an ambitious fund exceeding $100 million. Its explicit objective is to champion legislators who embrace a laissez-faire, hands-off philosophy regarding AI governance. Implicit within this mission is the intent to challenge and potentially unseat public officials who advocate for tighter regulatory frameworks around artificial intelligence technologies. In addition to its principal backers, the organization also boasts support from other prominent technology leaders, including Joe Lonsdale, co-founder of Palantir and managing partner at 8VC, as well as from the AI-driven search company Perplexity.
During a press gathering in Washington, D.C.—specifically at a workshop devoted to the implications and governance of artificial general intelligence—Bores offered a candid acknowledgment of the PAC’s tactics. He remarked that he respected their transparency, noting that when representatives of *Leading the Future* publicly declare their willingness to invest millions of dollars to derail his campaign because of his perceived readiness to impose constraints on large technology corporations and establish minimal AI safeguards, he relays this information directly to the electorate in his district.
Bores, aspiring to represent New York’s 12th Congressional District, has observed a distinct and growing unease among his constituents over the many ways artificial intelligence and digital infrastructure are reshaping daily life. Residents express apprehension about the proliferation of data centers inflating utility costs and exacerbating environmental challenges such as climate change, and about the psychological effects that chatbots and machine learning systems may exert on children. Others fear that the widespread automation of labor could render entire job categories obsolete, fundamentally transforming economic stability.
As the principal sponsor of New York’s bipartisan *Responsible AI Safety and Evaluation (RAISE) Act*, Bores has been at the forefront of legislative efforts to impose accountability on large-scale AI developers. This measure obliges major AI laboratories to craft and maintain well-defined safety plans aimed at preventing severe harm, to adhere rigorously to those internal protocols, and to report significant safety lapses—such as the theft or misuse of AI models by malicious actors. Furthermore, the bill bars companies from deploying any AI systems that present unreasonable levels of critical risk. Violations of these provisions could incur civil penalties reaching $30 million. The bill currently awaits gubernatorial approval from Kathy Hochul before it can be enacted into law.
In the process of refining and redrafting the legislation, Bores engaged in extensive consultations with leading AI corporations, including OpenAI and Anthropic. These proceedings led to the removal of certain proposed clauses, such as mandatory third-party safety audits, after firm resistance from industry representatives. Nonetheless, the RAISE Act and Bores’s continued insistence on AI oversight have provoked strong opposition within Silicon Valley’s circles of influence.
According to statements provided to *Politico*, *Leading the Future* executives Zac Moffatt and Josh Vlasto announced their intention to orchestrate a multimillion-dollar effort to derail Bores’s congressional ambitions. In correspondence shared with *TechCrunch*, the pair argued that Bores’s initiatives represent ideological attempts to impose heavy-handed controls that would constrain not only New York’s potential but also undermine the broader U.S. capacity to remain a global leader in AI innovation and employment. From their perspective, the RAISE Act exemplifies regulatory zealotry that could impede competition, suppress economic dynamism, leave American users vulnerable to foreign interference, and even erode national security. They warned that fragmented, state-by-state mandates could fracture U.S. policy coherence and inadvertently allow geopolitical rivals, particularly China, to seize the mantle of AI leadership. Instead, they advocated for a unified, well-structured federal regulatory regime that promotes economic growth, fosters job creation, strengthens community resilience, and ensures the ethical protection of users.
The tensions surrounding AI policy extend beyond New York. Many voices within the technology industry are lobbying to prevent individual states from enacting AI-related statutes altogether. Earlier in the year, a proposed addendum that would have preempted state-level AI regulation was discreetly added to the federal budget bill but later withdrawn following public scrutiny. Nevertheless, federal lawmakers such as Senator Ted Cruz have since signaled renewed attempts to revive similar provisions through alternate legislative mechanisms.
Bores expressed apprehension that these efforts, if successful, would further consolidate legislative inertia at a time when the federal government has enacted no substantial nationwide AI policy. He contended that state governments, due to their smaller scale and administrative flexibility, possess the agility to experiment with different approaches—functioning as policy laboratories capable of rapidly developing and testing frameworks that could later inform federal decision-making. In his view, it is unreasonable to prohibit states from taking action when Congress itself has yet to meaningfully address AI’s mounting societal challenges. “If Congress resolves the issue,” he argued, “then it could justifiably instruct the states to refrain from intervening. But in the absence of any comprehensive federal law, asking states to stand down simply defies logic.”
Bores further revealed that he has been collaborating with lawmakers from other states to establish a more harmonized set of AI governance principles that could alleviate the tech sector’s concerns about fragmented or inconsistent regulations. He also underscored the importance of ensuring that U.S. policies complement international efforts, specifically avoiding duplications or conflicts with Europe’s emerging AI Act.
Finally, Bores reiterated that thoughtful governance is not antithetical to innovation. On the contrary, he asserted that clear, well-considered guidelines can stimulate rather than hinder progress. He has, in fact, rejected certain overreaching legislative proposals that might have inadvertently stifled legitimate technological advancement. As Bores succinctly framed it, establishing “rules of the road”—whether literal or metaphorical—constitutes a profoundly pro-innovation stance when implemented effectively. Trustworthy AI, he argued, will ultimately triumph in the marketplace, and an industry that dismisses government’s role in fostering public confidence risks alienating the very communities it seeks to serve. He maintained that the growing demand for accountability reflects a broader societal shift toward insisting on ethical transparency as an integral component of technological leadership.
Source: https://techcrunch.com/2025/11/17/a16z-backed-super-pac-is-targeting-alex-bores-sponsor-of-new-yorks-ai-safety-bill-he-says-bring-it-on/