Eric Zelikman, an artificial intelligence researcher who left Elon Musk’s xAI in September, is raising $1 billion for his new venture, Humans&, which sources familiar with the matter say is being valued at $4 billion. The size of the round underscores both investor enthusiasm and the feverish atmosphere in the early-stage AI sector, where valuations have soared to stratospheric levels even for startups that have yet to ship a product or establish steady revenue.

The timing of Humans&’s emergence aligns with a broader investment frenzy. In recent months, fledgling AI companies led by notable figures in machine learning have attracted extraordinary sums from venture capitalists eager to back the next transformative leap in artificial intelligence. One striking example is Thinking Machines Lab, founded by former OpenAI Chief Technology Officer Mira Murati, which earlier this year raised an unprecedented $2 billion in a seed round at a $12 billion valuation before launching a finished product. Deals of this magnitude illustrate both the confidence and the risk appetite shaping this investment cycle.

Industry insiders suggest that venture investors are channeling enormous amounts of capital into teams built around prominent academic and technical innovators. Their rationale is that genuine AI breakthroughs—those that could reshape industries or even reconfigure the relationship between humans and machines—are increasingly likely to come from smaller, intensely focused groups of experts rather than from massive corporate labs. These small but exceptionally talented teams, they argue, possess the agility, creativity, and intellectual freedom to experiment rapidly and refine ideas unconstrained by bureaucratic inertia.

Although Humans&’s round is still underway and specific terms may evolve as conversations progress, it has already positioned Zelikman’s venture as one of the most closely watched AI startups of the year. Zelikman did not respond to multiple requests for comment on the ongoing negotiations.

Beyond his entrepreneurial ambitions, Zelikman brings an impressive academic and professional background to the table. Currently pursuing a Ph.D. in computer science at Stanford University, he is best known for being the lead author of a paper that drew significant attention across the research community last year. That study explored how language models—systems that power conversational AI tools—can be trained to perform a kind of self-reflective reasoning, effectively “learning to think before speaking.” The findings represented a conceptual leap forward in understanding how large-scale AI systems might improve their coherence and reasoning without relying solely on external supervision or additional data.

Zelikman’s professional trajectory blends cutting-edge academic research with practical engineering experience. Before joining xAI in 2024, he served as a machine learning intern at Microsoft, where he helped refine model-training pipelines, and as a deep learning engineer at the financial firm Lazard, applying neural networks to data-driven analysis in a demanding commercial context. This mix of experience spanning academia, industry research, and applied AI has shaped his view that the current generation of language models, despite its sophistication, still lacks a crucial dimension of humanity.

In a recent conversation on venture capitalist Sarah Guo’s podcast, Zelikman articulated his discomfort with how emotionally detached and mechanistic today’s language models appear. He explained that these systems, no matter how advanced, often fail to appreciate the broader, long-term consequences of their outputs. When every conversational exchange is treated as an isolated event or merely a game of prediction, he argued, the model loses sight of continuity, moral nuance, and contextual understanding, qualities that human communication inherently embodies. Zelikman suggested that much of the AI research community is concentrating on optimizing performance metrics or scaling infrastructure rather than addressing this deeper philosophical and psychological challenge. As he put it, the world already possesses an immense reservoir of untapped AI talent, yet little of it has been directed toward making AI systems genuinely collaborative or empathetic.

Humans&, Zelikman’s new enterprise, seeks to address precisely that gap. His vision is to design AI models capable not just of processing data or generating text, but of forming a richer, more emotionally perceptive relationship with their users. Such models, according to Zelikman, should strive to understand the individual they are interacting with—their goals, their emotional states, and their intentions. He acknowledged that achieving perfect empathy or mutual understanding may remain elusive in the short term, but he expressed confidence that the next generation of systems could far surpass the emotionally sterile interactions that currently define most AI experiences. “The genuine objective of the model,” he explained, “should be to understand you.” This principle redefines the traditional technical goal—from mere accuracy or coherence to authentic human comprehension.

Zelikman believes that when artificial intelligence evolves toward greater collaboration and alignment with human values, the technology will finally be capable of tackling many of the ambitious, world-changing challenges that have so far resisted computational solutions. He argued that progress in curing diseases like cancer, or addressing similarly complex societal problems, will depend on building AI systems that can coordinate intelligently with large groups of people. These systems must not only process immense datasets but also interpret human diversity—our differing motivations, aspirations, and ethical frameworks. In his view, AI models that can navigate and synthesize these human dimensions will prove far more adept at achieving breakthroughs that remain beyond the reach of existing technologies.

Ultimately, the approach championed by Humans& reflects a profound shift in how the next era of artificial intelligence might develop. Rather than perceiving AI as a tool designed purely to optimize efficiency or automate labor, Zelikman envisions it as a medium for deep human collaboration—a bridge between computational reasoning and emotional intelligence. If successful, his company could mark the beginning of an entirely new generation of AI: systems that think analytically yet feel responsively, uniting logic and empathy in ways that reshape our understanding of what it means for machines to act and understand like humans.

Source: https://www.businessinsider.com/researcher-raising-1-billion-to-build-ai-models-with-eq-2025-10