Across the global technology industry, the most influential and financially powerful corporations are pouring many billions of dollars into the development of artificial intelligence. That spending has converged on a particular category of systems known as large language models (LLMs), the massive neural networks that underpin platforms such as OpenAI’s ChatGPT, Google’s Gemini, and Meta’s Llama. These systems, designed to generate text and interact in natural language, have become emblematic of the current AI revolution. Yet according to the man who until recently led Meta’s artificial intelligence research division, this intense fixation on language-based models is a serious strategic misstep.
That man, Yann LeCun, one of the most respected scientists in the field and a central figure in Meta’s AI leadership, offered a sharp critique at a public event in Brooklyn on Sunday evening. LeCun acknowledged that large language models are, without question, remarkable technological achievements: useful, versatile tools with enormous potential across everyday and professional applications, and well worth the investment they attract. Nevertheless, he firmly asserted that these models cannot and will not serve as the foundation for true human-level intelligence; in his words, they are not the pathway toward machines that think or reason as people do. The danger, he warned, lies in the all-consuming attention and resources they command: their dominance within the AI sector has effectively monopolized research agendas, crowding out alternative, potentially more fruitful approaches. To ignite the next genuine revolution in artificial intelligence, LeCun argued, researchers, investors, and institutions must step back, reassess, and identify what critical scientific components are still missing from the existing LLM paradigm.
This pointed analysis is particularly striking because it directly contradicts the trajectory of LeCun’s own employer. It is also consistent with a long-standing perspective he has voiced over the years. LeCun has persistently criticized the overreliance on massive text-based models, maintaining that true computational intelligence will not emerge from algorithms that simply digest and recombine the vast quantities of linguistic data drawn from the internet. Instead, he believes the next major leap forward will come from what he calls world models—AI systems grounded in perception and visual understanding, designed to learn about the physical and causal structure of the world, rather than merely imitating patterns of human language.
The timing of LeCun’s renewed critique adds another layer of intrigue. For months, speculation has swirled around his professional future at Meta. The tension became especially pronounced last spring, when the company launched an aggressive expansion of its LLM initiatives, spending billions to recruit and retain high-profile AI researchers, an effort widely interpreted as a rejection of LeCun’s vision for the field. The rumor mill intensified only days ago, when reports surfaced that LeCun was preparing to leave Meta altogether to establish his own startup dedicated to his alternative research agenda. Speaking at Pioneer Works, the hybrid art and technology venue in Brooklyn, whose crowd that night mixed Gen X technophiles nostalgic for dial-up internet with younger Gen Z attendees fluent in the culture of TikTok, LeCun carefully avoided addressing those reports directly. Even so, his comments sounded very much like an implicit explanation for an imminent professional break: he made clear his conviction that large language models do not represent the future of AI, while Meta’s CEO, Mark Zuckerberg, plainly believes the opposite. Given such a stark philosophical divide, it would indeed seem incongruous for LeCun to remain in his current role much longer.
But beyond the drama of one scientist disagreeing with corporate strategy, LeCun’s remarks highlight a deeper and more universal truth about the nature of technological progress. They remind us that in the rapidly evolving landscape of innovation, ideas that seem unassailable today can be eclipsed or overturned tomorrow. Nowhere is this more evident than in the field of artificial intelligence, where consensus among experts can shift with astounding speed. For years, LeCun stood as one of AI’s intellectual standard-bearers—a pioneering figure whose reputation helped inspire Zuckerberg to bring him into Meta (then Facebook) in 2013. Yet, in the wake of OpenAI’s release of ChatGPT three years ago, the center of gravity in AI research decisively shifted toward LLM-based technologies, prompting an unprecedented surge in investment, infrastructure expansion, and recruitment. This influx of capital and talent has led many observers to wonder whether the industry is experiencing genuine progress or merely fueling an unsustainable AI bubble.
Whether LeCun’s skepticism will ultimately prove justified remains an open question. It is entirely possible, as proponents of LLMs such as Google’s Adam Brown—who shared the stage with LeCun in Brooklyn—argue, that continued refinement of language models could one day produce machine systems approaching human-level cognition. The scientific community is far from unanimous on this issue, and such divergence in expert opinion underscores the immaturity and fluidity of the field. Yet, precisely because the fundamental science of intelligence, artificial or otherwise, remains unsettled, these debates are crucial. They serve as a caution against complacency and premature certainty. After all, if even the brightest and most accomplished minds in AI cannot agree on what it truly means to be intelligent, predicting how the ongoing technological upheaval will ultimately unfold becomes not only uncertain but perhaps impossible.
Source: https://www.businessinsider.com/meta-ai-yann-lecun-llm-world-model-intelligence-criticism-2025-11