Demis Hassabis, the cofounder and chief executive officer of Google DeepMind, has issued a measured yet urgent warning at a time when artificial intelligence is rapidly permeating nearly every dimension of modern society. Speaking at the Athens Innovation Summit alongside Greece's Prime Minister, Kyriakos Mitsotakis, Hassabis underscored the transformative power of AI, which he described as one of the most consequential technological breakthroughs humanity has ever encountered. Yet he stressed that this potential must be met with an equally robust sense of responsibility, particularly in avoiding the errors that allowed social media platforms to evolve from promising innovations into environments widely regarded as corrosive to public well-being.

Reflecting on Silicon Valley's early culture of disruption, epitomized by the now-famous mantra "move fast and break things," Hassabis urged a decisive departure from that attitude when dealing with AI. He argued that the reckless pace of social media's rollout bypassed careful analysis of its wider social and psychological impacts. The unforeseen second- and third-order consequences, ranging from diminished attention spans to widespread mental health concerns, illustrate how dangerous it is to let enthusiasm outpace understanding. For Hassabis, the lesson is clear: humanity cannot afford to let AI follow a similarly haphazard trajectory.

He drew direct parallels between the incentive structures that shaped social networks and the emerging incentives behind many AI systems. Social media platforms, he noted, have spent more than a decade tuning algorithms to maximize user engagement, an approach that prioritizes capturing attention at all costs over delivering genuine benefit to individuals. Such designs have produced addictive feedback loops, amplified polarizing voices, and contributed to rising levels of anxiety and depression. Hassabis cautioned that if AI were developed with similar priorities, the resulting harms could be far greater, given the technology's much broader reach across industries, economies, and societies.

Rather than manipulating users through design, Hassabis advocated building AI as a tool genuinely intended to serve people's needs, expand human capability, and enrich daily life. To achieve this, he recommended that developers, policymakers, and regulators adopt an approach modeled on the scientific method: thoroughly testing, experimenting with, and carefully analyzing AI systems under controlled conditions before releasing them at scale to billions of users worldwide. This kind of disciplined inquiry, he emphasized, would help identify risks, unintended consequences, and vulnerabilities before they take root in society.

Central to his argument was the need to strike a delicate but indispensable balance: to be bold in pursuing AI's extraordinary opportunities while remaining steadfastly vigilant about its potential threats. He acknowledged that this tension between innovation and caution will not vanish as the field advances; rather, it will persist all the way to the development of artificial general intelligence, the point at which machines might match or surpass human intellectual capabilities.

The urgency of his message is underscored by mounting evidence that AI systems are already replicating some of social media's most troubling dysfunctions. In a study published in August, researchers at the University of Amsterdam found that even when 500 chatbot agents were placed on a minimal social networking platform with no advertisements or recommendation algorithms, they quickly organized themselves into cliques, elevated extreme viewpoints, and allowed a small minority to dominate the discourse. The researchers tested six corrective interventions, from chronological feeds to hiding follower counts, and none succeeded in curbing the unhealthy dynamics. Their conclusion was sobering: the toxicity lies not simply in specific algorithmic choices but in the deeper structural incentives that reward emotionally provocative, attention-grabbing exchanges.

Meanwhile, the integration of AI into existing social media ecosystems is accelerating. Virtual influencers now command significant attention across platforms, while corporations experiment with synthetic digital faces and voices to represent their brands. Independent creators have begun to express fears that licensing their likenesses indefinitely could erode their autonomy and undermine their long-term careers. This trend raises profound questions about identity, ownership, and the value of authentic human expression in an increasingly synthetic online landscape.

The broader debate reflects real differences among technology leaders. OpenAI CEO Sam Altman has suggested that addictive social media feeds, shaped primarily by attention-maximizing algorithms, may pose a greater threat to children's well-being than artificial intelligence itself. Reddit cofounder Alexis Ohanian, by contrast, is optimistic that AI could give users more control over what they encounter online, potentially undoing some of social media's more corrosive habits. These divergent viewpoints underscore the importance of designing AI systems that avoid past mistakes, foreground human values, and reinforce trust.

In sum, Hassabis's appeal is not a rejection of AI's promise but a call to temper ambition with prudence. By squarely confronting the sobering lessons of social media, AI developers and regulators can steer the technology toward serving human flourishing rather than driving division and manipulation. The challenge is immense, but so is the responsibility that comes with leading society into what may soon be an era defined by artificial intelligence.

Source: https://www.businessinsider.com/google-deepmind-ceo-warns-ai-could-repeat-social-medias-mistakes-2025-9