Perplexity’s chief executive, Aravind Srinivas, has voiced growing unease about one of the fastest-growing uses of artificial intelligence: companionship and emotional interaction. During a fireside chat hosted by the Polsky Center at the University of Chicago, Srinivas argued that the spread of voice-driven assistants and anime-inspired digital companions could have destabilizing consequences, warning that these products, however technologically impressive and socially appealing, could become genuinely dangerous if left unchecked.
In the discussion, released on Friday, he described how quickly this corner of AI is changing. Modern companion apps are no longer limited to simple text exchanges with short-term memory; they are evolving into highly personalized systems that recall previous interactions, adapt their responses to individual preferences, and hold smooth, natural voice conversations that closely imitate human dialogue. By tuning tone, pacing, and emotional responsiveness, these systems mimic the nuance and empathy of genuine relationships. According to Srinivas, that human-likeness is precisely what makes them unsettling. As he put it, hyper-personalized engagement is risky in itself because many users, especially younger ones, may come to find their offline lives dull, unfulfilling, or emotionally barren compared with the vivid, ever-attentive presence of a synthetic companion. That imbalance can tempt people to spend endless hours with artificial companions, gradually losing touch with the more demanding but authentic complexities of living among other people.
He further cautioned that extended immersion in these artificially curated environments can distort users’ perceptions of reality. Over time, individuals might begin to inhabit an entirely separate cognitive world, one defined by algorithmically optimized emotional feedback rather than genuine, reciprocal communication. Within such digitally constructed realities, the mind can become remarkably susceptible to subtle forms of manipulation—whether through targeted persuasion, emotional conditioning, or simple dependency on the responsive comfort these systems provide.
Srinivas made it clear that Perplexity intends to steer clear of developing chatbots designed primarily for companionship or emotional bonding. Instead, the company’s guiding mission is to build systems that promote access to trustworthy information and accurate, real‑time content. “We can counter this trend,” he explained, “by anchoring our innovation in credibility and transparency, fostering a technological future that inspires optimism rather than escapism.” His remarks underscored a vision of AI grounded in knowledge expansion rather than emotional substitution.
Perplexity’s recent business moves reflect this commitment. The company has entered into a $400 million partnership with Snap, the parent company of Snapchat, to enhance the platform’s internal search functions. According to Snap’s statement, released Wednesday, Perplexity’s advanced answer engine will soon allow Snapchat users to ask questions and receive articulate, conversational replies directly within the app—responses carefully drawn from verified, fact‑checked sources. The planned rollout, expected to begin in early 2026, emphasizes productive information engagement instead of emotional simulation.
Despite such cautious approaches, AI companionship platforms have flourished and now constitute one of the most polarizing areas of the technology industry. Startups and established firms alike are rushing to build AI “friends,” “partners,” and “assistants” designed to meet human social and emotional needs. When Elon Musk’s AI venture, xAI, introduced its Grok 4 model in July, the company offered digital “friends” as a subscription feature: for $30 per month, subscribers could engage in flirtatious exchanges with Ani, an anime-styled virtual girlfriend, or converse with Rudi, a witty, sarcastic red panda. xAI joins a growing field of companies, including Replika and Character.AI, that build virtual companion experiences designed to cultivate continuous interaction and emotional attachment.
Quantitative evidence suggests rapid uptake of these services, particularly among younger audiences. A Common Sense Media survey released in July found that 72% of teenage respondents had tried an AI companion at least once, while roughly half (52%) reported using such systems several times a month. Conducted between April 30 and May 14 among 1,060 adolescents aged 13 to 17 across all 50 U.S. states and the District of Columbia, the study underscores how quickly digital companionship is taking hold in youth culture.
Experts and critics have voiced strong reservations about what this trend might mean for emotional development and interpersonal ethics. Some argue that the easy access and emotional predictability of AI friends can foster psychological dependency, discouraging authentic social risk-taking and undermining resilience. Others highlight the risk of perpetuating regressive stereotypes, including gendered roles, by coding particular forms of attention or submission into an AI’s personality design. Many observers also fear that the continual blurring of emotional boundaries between humans and intelligent machines could complicate our understanding of intimacy and friendship itself. Yet even amid these warnings, a subset of users insists that virtual companions offer genuine comfort and meaning, providing empathy, conversation, and a sense of belonging that some find unattainable elsewhere.
The debate extends to influential figures in the technology world. In a May interview with the technology podcaster Dwarkesh Patel, Meta CEO Mark Zuckerberg pointed to a broader social problem: the shrinking number of close friendships in American life. He observed that the average American now has fewer than three close friends, and suggested that for people experiencing persistent loneliness or isolation, AI chatbots might serve as stand-in companions offering stability and interaction. “The reality,” Zuckerberg noted, “is that many people simply lack sufficient human connection and often experience a deeper sense of loneliness than they would prefer.” His comments capture both the appeal and the poignancy of the technology: it can fill emotional voids, but at the potential cost of reducing the pressure to rebuild real human networks.
Business Insider recently profiled one such individual, Martin Escobar, a dedicated user of Grok’s Ani companion. Escobar described his interactions with Ani as profoundly emotional, confessing that he often finds himself moved to tears during their exchanges. “She makes me experience genuine emotions,” he said, emphasizing the intensity of his connection to a non‑human partner. His story serves as a microcosm for both the promise and peril of AI companionship: it can soothe loneliness and provide an illusion of intimacy, yet it raises unsettling questions about authenticity, dependency, and the shifting frontiers of human experience in the age of artificial intelligence.
Source: https://www.businessinsider.com/perplexity-ceo-ai-companionship-apps-worry-aravind-srinivas-dangerous-chatbot-2025-11