Fei-Fei Li, widely regarded as the ‘Godmother of AI,’ argues that public and media discourse around artificial intelligence has become excessively intense and sensationalized. The prevailing rhetoric, she says, exaggerates both the fears and the promises of the technology, distorting the picture in ways that hinder rational conversation. Speaking recently at Stanford University, Li joked that she may be the ‘most boring speaker in AI these days,’ precisely because she is disheartened by the exaggeration and emotional polarization that dominate the field. Her disappointment, she explained, stems from the two extremes that grip the public imagination: on one side, apocalyptic warnings of extinction and machines overthrowing humanity; on the other, utopian dreams of limitless abundance in which automation has erased scarcity and ushered in infinite productivity.
Li brings considerable authority to these observations. A veteran computer science professor at Stanford and the creator of the seminal ImageNet project, the foundational dataset that revolutionized modern computer vision, she has spent decades shaping the trajectory of AI research. Beyond her academic work, Li recently co-founded World Labs, a startup building AI systems that can perceive, generate, and interact intelligently within complex three-dimensional environments. Leading both academic and commercial AI initiatives gives her an unusually balanced view of where the technology actually stands, as opposed to how it is often depicted.
In her Stanford talk, Li cautioned that what she calls ‘extreme rhetoric’ has spread beyond technology enthusiasts into the broader public conversation. That dynamic, she argued, breeds misunderstanding and fear, particularly among people outside the Silicon Valley ecosystem who lack firsthand access to the facts. People around the world, Li said, deserve clear, evidence-based communication about what AI can and cannot do, free of overstatement and ungrounded speculation. Yet she expressed disappointment that public education and dialogue about AI fall far short of that standard, leaving many communities vulnerable to misinformation and unrealistic expectations.
Li is one of a growing number of distinguished computer scientists calling for more measured, fact-based conversations about artificial intelligence. In July, Andrew Ng, founder of Google Brain and a central figure in the early deep learning revolution, warned that expectations around artificial general intelligence, or AGI, have become wildly disproportionate to scientific reality. AGI refers to the hypothetical point at which artificial systems attain cognitive capacities equivalent to those of human beings, including reasoning, learning new tasks without supervision, and applying knowledge flexibly across contexts. Executives and researchers at leading AI laboratories are frequently pressed to predict when that breakthrough might occur and what it would mean for the human workforce. Ng, however, insisted during a Y Combinator talk that the notion of AGI arriving anytime soon is overhyped, and that for a long time to come there will remain a wide range of intellectual, creative, and social tasks that humans perform effortlessly but that AI systems cannot.
Yann LeCun, the pioneering scientist who formerly served as Meta’s Chief AI Scientist and is renowned as one of the architects of modern deep learning, has echoed this caution. While he acknowledges that today’s large language models are astonishing technical achievements, he maintains that their design does not offer a viable path toward genuine human-level intelligence. In a widely discussed interview last year, LeCun said he dislikes the term ‘AGI,’ arguing that it implies a misleading continuity between current AI systems and the vastly more complex structure of human cognition. In his view, large-scale models are useful, even transformative, within constrained applications, but they remain far from the flexible understanding and common-sense reasoning that define true intelligence.
LeCun’s next move underscores his interest in pushing research beyond current paradigms. A month ago, he announced in a LinkedIn post that he would leave Meta after twelve years to found a new AI startup, one presumably aimed at exploring fresh architectures and conceptual frameworks for building machine understanding.
Taken together, these statements from leading figures—Li, Ng, and LeCun—underline a growing consensus among respected researchers: that artificial intelligence should be approached neither with blind optimism nor with apocalyptic dread. Instead, meaningful progress depends on cultivating an informed, balanced conversation that recognizes both the immense potential and the tangible constraints of the technology. By steering public dialogue away from sensational extremes and toward thoughtful, evidence-based understanding, these scientists hope to foster responsible innovation that benefits society as a whole.
Source: https://www.businessinsider.com/fei-fei-li-disappointed-by-extreme-ai-messaging-doomsday-utopia-2025-12