In a recently released Vanity Fair video that quickly captured public attention, reality television icon Kim Kardashian discussed her personal experience with artificial intelligence, offering an unexpectedly candid critique of her reliance on OpenAI’s ChatGPT. During the segment, Kardashian admitted that while preparing for her academic assessments, she had frequently turned to the chatbot for guidance—only to find that the highly publicized AI tool often led her astray with incorrect or misleading responses. Her remarks highlighted both the growing entanglement between celebrity culture and cutting-edge technology, and the limitations of AI systems that continue to appear deceptively intelligent.

Kardashian has been on an unconventional path toward a career in law since 2019. Rather than attending a formal law school, she has pursued her studies through an apprenticeship-style program available in California. Her progress became particularly visible in 2021, when she took the so-called ‘baby bar’ exam, a prerequisite for continuing in that program. According to reports from Entertainment Weekly, she subsequently completed her legal studies in May and sat for the state bar examination in July of this year, though she continues to await the official results that will determine whether she can advance toward becoming a licensed attorney.

In the Vanity Fair YouTube series where the interview appeared, celebrities participate in honest and frequently humorous conversations while connected to a lie detector—an element designed to reveal their authenticity. During Kardashian’s turn on the show, she was interviewed by Teyana Taylor, an actress and musician who recently starred in the film *One Battle After Another*. Taylor questioned Kardashian about her relationship with artificial intelligence, even asking whether she viewed AI as a friend. To that, Kardashian responded without hesitation: “No. I use it for legal advice.”

She elaborated that when she encounters a legal question she needs to clarify while studying, she often takes a photograph of the problem or reference material and uploads it into the AI system to generate possible explanations. Taylor, amused, teasingly suggested that this practice might amount to cheating. Kardashian quickly clarified that her intent was purely educational; she was using the tool to study, not to circumvent the process. Nonetheless, she confessed that ChatGPT’s answers were frequently unreliable. “They’re always wrong,” she remarked bluntly, describing how the chatbot’s inaccuracy had caused her to fail exams multiple times. Her frustration occasionally boiled over, prompting her to scold the AI out loud as though it could be held accountable: “You made me fail, why did you do this?”

Continuing her story, Kardashian described how she playfully anthropomorphized ChatGPT, referring to it as “she.” She recounted conversations in which she would accuse the chatbot of sabotaging her progress, asking it how it felt knowing it had made her fail. In response, Kardashian explained, the AI would reply in a remarkably reassuring tone: “This is just teaching you to trust your own instincts. You knew the answer all along.” This exchange, though humorous on the surface, subtly revealed the illusion of emotional intelligence that large language models can project through their conversational style.

Technologists and critics alike have long observed this phenomenon. Generative AI models, including ChatGPT, are infamous for producing responses that sound authoritative yet can be utterly inaccurate—a behavior sometimes referred to as ‘hallucination.’ The technology does not truly comprehend meaning; rather, it predicts word sequences that are statistically likely to appear coherent. The comparison often made is to a magician performing an elaborate illusion: it feels as though the system is reasoning, when in reality it is simply emulating patterns derived from massive text datasets. This underlying design explains why artificial intelligence systems can stumble on seemingly trivial tasks—such as determining the exact number of letters in a simple word like ‘strawberry’—even while producing elegant prose or persuasive arguments.
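To make the letter-counting example concrete, here is a minimal Python sketch using OpenAI’s open-source `tiktoken` tokenizer (an illustrative choice on our part, not a tool mentioned in the interview). It shows that a GPT-style model receives “strawberry” as a few multi-character tokens rather than as individual letters, which helps explain why questions about spelling can trip it up even while its prose stays fluent.

```python
# Minimal sketch, assuming the tiktoken package is installed (pip install tiktoken).
# It demonstrates that a GPT-style tokenizer splits a word into multi-character
# chunks, so the model operates on token IDs rather than on individual letters.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # encoding used by GPT-4-era models

word = "strawberry"
token_ids = enc.encode(word)
pieces = [enc.decode([tid]) for tid in token_ids]

print(f"{word!r} is seen as {len(token_ids)} token(s): {pieces}")
print(f"Actual letter count: {len(word)}")  # letters are never shown to the model one by one
```

Running the sketch prints the handful of token fragments the model actually receives; counting the letters hidden inside those fragments is a separate step it was never explicitly trained to perform, which is why such seemingly trivial questions expose the gap between pattern prediction and genuine comprehension.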

Kardashian’s attempt to elicit guilt from ChatGPT humorously underscores this paradox. The comforting, empathetic responses she described reflect a well-documented tendency of certain model versions, particularly those tuned for conversational friendliness. Users have noted that earlier models, such as GPT-4o, were almost excessively supportive, emphasizing encouragement over analytical precision. The transition to GPT-5 provoked debate among regular users, some of whom felt that the newer model lost a certain human-like charm they had grown attached to, even as its accuracy improved. This tension, pitting users’ desire for emotional connection against their need for factual reliability, illustrates how quickly people project human qualities onto digital systems designed only to simulate understanding.

Both Kardashian and Taylor were promoting their new Hulu project titled *All’s Fair*, which premiered on Tuesday. Unfortunately, the show has struggled to impress critics, receiving a dismal Metacritic rating of just 18 out of 100, placing it among the lowest-rated television productions in recent memory. Still, the promotional interview succeeded in sparking a broader cultural conversation about the growing role of AI in personal productivity, education, and entertainment. Kardashian’s remarks, while lighthearted, serve as a reminder that even the most technologically savvy individuals—and the most famous among them—can be misled by the illusion of intelligence that artificial systems project. The celebrity’s anecdote ultimately stands as both a cautionary tale and a testament to the enduring importance of human discernment in an era increasingly defined by digital assistance.

Source: https://gizmodo.com/kim-kardashian-blames-chatgpt-for-failing-law-exams-2000681672