Geoffrey Hinton, celebrated across decades of groundbreaking work as the "Godfather of AI," is one of the foremost pioneers of artificial intelligence. Yet despite his profound influence on the very technologies shaping our present and future, Hinton recently shared a personal anecdote that illustrates just how unpredictably these tools have begun to intertwine with human lives and relationships. In a candid, wide‑ranging interview with the *Financial Times*, he recounted a moment both humorous and slightly surreal: his former partner once used an artificial intelligence chatbot to formally end their relationship.
According to Hinton's retelling, his girlfriend, now an ex, asked the chatbot to articulate why he had, in her words, behaved like "a rat." Rather than speak for herself, she handed him the AI‑generated critique, which sharply outlined the flaws in his behavior as she perceived them, essentially outsourcing the emotional labor of her breakup message to a machine. Reflecting on the experience with a mix of lighthearted amusement and intellectual detachment, he remarked that he was not devastated. From his point of view, he had simply grown interested in someone else, a situation he described with understated frankness: "I met somebody I liked more. You know how it goes." His casual summary suggested that, while the method of delivery was unconventional, he was not particularly wounded by the AI‑mediated critique.
Beneath the humorous surface of this story lies a broader and more serious observation: artificial intelligence has begun to transcend its earlier roles as a research tool or productivity enhancer and is now subtly integrating into the fabric of daily interpersonal communication. Increasingly, people turn to chatbots not just for practical tasks such as composing emails or sorting out household logistics, but also for intimate matters: expressing feelings, making delicate decisions, or, most strikingly, delivering emotionally charged messages. In Hinton's case, what might once have been an uncomfortable but quintessentially human conversation was partly delegated to an algorithm, underscoring how radically technology is reshaping the ways we connect with one another.
This trend has drawn close scrutiny from researchers grappling with its psychological and social consequences. A joint study released in March by researchers at OpenAI and the MIT Media Lab examined millions of exchanges conducted through ChatGPT, including a large dataset of text‑based conversations and thousands of recorded voice interactions. Their goal was to assess how frequent engagement with conversational AI affects users over time, particularly their emotional well‑being and patterns of reliance. Strikingly, the researchers concluded that a small subset of "power users," those who lean on the tool most heavily, faced an elevated risk of worsening loneliness. Rather than alleviating isolation, as some might hope, these virtual conversations could deepen the sense of disconnection when used excessively.
The study highlighted that this concentrated group was responsible for producing a disproportionately large share of what the authors termed “affective cues.” While the exact definition of that phrase remained somewhat ambiguous in the report, it generally refers to verbal or nonverbal signals that reveal a speaker’s emotional state—expressions of vulnerability, loneliness, dependence, or shifts in self‑esteem. By analyzing this data for patterns, the researchers illuminated the ways in which prolonged or problematic reliance on chatbots might subtly shape users’ inner landscapes, potentially reinforcing cycles of emotional dependence rather than mitigating them.
These findings are significant in light of the guidance that OpenAI itself has publicly issued regarding responsible use of ChatGPT. The company has cautioned against leaning on conversational AI for deeply personal dilemmas, particularly within the realm of romantic relationships, where choices often carry profound emotional weight and long‑lasting consequences. OpenAI emphasized that the model should not provide direct instructions on intimate decisions, such as whether or not to end a partnership. Instead, it is being adapted to function more like a reflective companion—posing clarifying questions, presenting a balanced examination of pros and cons, and ultimately encouraging human users to exercise their own judgment. The company has even begun rolling out new behaviors specifically designed for moments of high‑stakes personal decision‑making, with the intention of steering conversations toward thoughtful reflection rather than prescriptive directives.
Taken together, Hinton’s wry anecdote, the evidence gathered by academic and industry researchers, and the evolving policies of OpenAI all highlight a striking cultural reality: artificial intelligence is no longer a distant or abstract innovation confined to laboratories and industry. It is now woven into the texture of everyday life, influencing not only how we work and solve logistical problems, but also how we navigate human intimacy, vulnerability, and choice. In this convergence of algorithms and emotion lies both promise and peril, raising profound questions about whether and how society should allow machines to mediate the most private corners of human existence.
Source: https://www.businessinsider.com/geoffrey-hinton-ai-girlfriend-breakup-chatgpt-openai-2025-9