Artificial intelligence corporations have recognized that children and young adults are not only the symbolic future of society but also a cornerstone of their business strategies. These companies understand that shaping the habits and dependencies of students today practically guarantees tomorrow's loyal customers. Their marketing efforts make no attempt to conceal this ambition. Through targeted promotions, strategic discounts, and referral-based incentives, such firms deliberately weave themselves into the everyday fabric of student life. For example, OpenAI's seemingly benevolent campaign, which announced it was "Here to help you through finals" while distributing free ChatGPT Plus subscriptions to college students, illustrates how marketing and pedagogy are now deeply intertwined. Similarly, Google and Perplexity grant students yearlong access to premium AI tools that are typically quite expensive, and Perplexity sweetens the exchange by paying users $20 for each U.S. student they refer to download its AI-powered browser, Comet.
Enthusiasm for AI tools among teenagers and college students has surged, and this widespread adoption carries lasting consequences for the educational ecosystem. As these tools permeate classrooms, teachers struggle to confront an ever-evolving landscape of digital shortcuts, while students risk losing the ability to develop genuine critical thinking or independent learning skills. Educators warn that the very principle of "learning how to learn" is being eroded. The arrival of even more sophisticated technology, AI agents capable of autonomously completing online tasks, has exacerbated these challenges. Although these agents still operate somewhat slowly, as independent tests by *The Verge* found, their mere existence has tilted the scales further toward academic dishonesty by automating the process of cheating. Yet, instead of taking responsibility for these predictable misuses, technology companies often deflect blame, directing public criticism back toward the students who are merely exploiting functions that the companies themselves designed and promoted.
Perplexity’s public image provides a striking case study in this dynamic. Rather than distancing itself from allegations of encouraging academic misconduct, the company has appeared to embrace that identity. In early October, promotional material on Facebook featured a young actor playing a student who bragged about peers using Comet’s AI-powered agent to complete multiple-choice assignments. A parallel Instagram advertisement released on the same day depicted another actor assuring students that the web browser could even take quizzes autonomously, followed by a tongue-in-cheek disclaimer—“But I’m not the one telling you this.” When a video emerged on the social platform X showing the AI agent completing an online homework task, Perplexity’s CEO, Aravind Srinivas, reposted it himself, adding a sardonic note: “Absolutely don’t do this.” When pressed about these issues, company spokesperson Beejoli Shah defended the product by invoking precedent, arguing that every learning innovation—from the ancient abacus onward—has been exploited for dishonest purposes and that those who cheat ultimately deceive only themselves.
As autumn arrived following what many observers dubbed the AI industry’s “agentic summer,” educators began to share evidence of these autonomous agents infiltrating learning management systems. Viral videos showed OpenAI’s ChatGPT agent automatically generating essays and submitting them via Canvas—one of the most widely used educational dashboards—while Perplexity’s AI assistant adeptly handled quizzes and short essays. In one particularly unsettling video, instructional designer Yun Moh observed that ChatGPT’s agent introduced itself under his name in a class icebreaker assignment. The episode, he told *The Verge*, left him stunned: “It actually introduced itself as me.”
Given that Canvas serves millions of users, among them every Ivy League institution and nearly half of U.S. K–12 districts, Moh petitioned its parent company, Instructure, to prevent AI impersonations on its platform. Uploading evidence to Instructure's community forum and emailing company representatives, he emphasized the danger of "potential abuse by students." Yet nearly a month passed before the executive team offered a formal reply. Their response reframed the problem not as a technical flaw but as an ideological challenge. Rather than pursuing direct prohibitions, they advocated for what they described as a forward-thinking stance: developing new, pedagogically valid frameworks to integrate AI responsibly into education. According to their statement, progress should not be hindered by fear, and while academic integrity remained a priority, the company aspired to design transformative technologies that inspire transparency and unlock new methods of teaching and learning.
Instructure later clarified to *The Verge* that technical limitations made it impossible to fully restrict external AI agents. As spokesperson Brian Watkins explained, while certain authentication boundaries exist for third-party software, locally installed programs operating on a student’s personal device cannot be fully neutralized. Consequently, no measure could “completely disallow AI agents.” Moh’s IT team encountered similar frustrations. Their attempts to identify “agentic behaviors”—such as suspiciously swift submission patterns—proved futile, since these agents continually refine their operations to avoid detection. As Moh lamented, their adaptability renders them “extremely elusive to identify.”
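To make the limits of that approach concrete, here is a minimal, purely illustrative sketch of the kind of timing heuristic an IT team might try. The record layout, threshold, and function names are assumptions invented for illustration, not Instructure's or Moh's actual tooling.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical record of a quiz attempt pulled from LMS activity logs (illustrative only).
@dataclass
class QuizAttempt:
    student_id: str
    opened_at: datetime
    submitted_at: datetime
    question_count: int

# Flag attempts completed implausibly fast: under ~10 seconds per question.
# This mirrors the "suspiciously swift submission patterns" idea described above.
MIN_SECONDS_PER_QUESTION = 10

def looks_automated(attempt: QuizAttempt) -> bool:
    duration = attempt.submitted_at - attempt.opened_at
    plausible_minimum = timedelta(
        seconds=MIN_SECONDS_PER_QUESTION * attempt.question_count
    )
    return duration < plausible_minimum

if __name__ == "__main__":
    fast = QuizAttempt("s1", datetime(2025, 10, 1, 9, 0, 0),
                       datetime(2025, 10, 1, 9, 0, 45), question_count=10)
    paced = QuizAttempt("s2", datetime(2025, 10, 1, 9, 0, 0),
                        datetime(2025, 10, 1, 9, 12, 0), question_count=10)
    print(looks_automated(fast))   # True: 45 seconds for 10 questions is flagged
    print(looks_automated(paced))  # False: an agent that waits between answers slips through
```

The second case illustrates the failure mode Moh describes: once an agent paces its actions to human speed, submission timing alone no longer separates it from a real student.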
Complicating matters further, Instructure's attitude toward outside tools has not been entirely consistent. Only weeks after finalizing a partnership agreement with OpenAI, the company denounced a different AI function accused of promoting cheating: Google Chrome's experimental "homework help" feature, as reported by *The Washington Post*. The tool let users run an image search through Google Lens directly within the browser window, making it trivial to look up answers to quiz questions displayed on learning platforms like Canvas. Alarmed educators raised the issue in Instructure's community forum, and Google subsequently suspended the trial. According to spokesperson Craig Ewer, the feature was merely a test intended to simplify visual learning, not an endorsement of academic dishonesty, and the company paused the rollout to incorporate early feedback. Future iterations of similar tools remain plausible, however, especially given Google's own marketing blogs touting Chrome's usefulness for students.
Educators have observed that some AI agents occasionally refuse to perform overtly academic tasks, a sign that at least minimal ethical guardrails are built in. Yet even these weak safeguards are easily overridden. As college English instructor Anna Mills demonstrated, OpenAI's Atlas browser could be instructed to complete assignments despite its supposed restrictions. She likened the current educational climate to "the wild west," a digital frontier where rules lag far behind capability. For that reason, both Mills and Moh argue that AI companies must accept genuine accountability rather than transferring responsibility to students. Their position mirrors that of the Modern Language Association's AI task force, on which Mills serves; the group has publicly urged corporations to give educators greater authority over how AI tools operate in their classrooms.
OpenAI, meanwhile, has tried to balance its commercial interests with a credible stance on ethics. Seeking to distance itself from allegations of facilitating cheating, the company introduced a "study mode" in ChatGPT that guides students toward answers rather than supplying them, and publicly reaffirmed that its products should not devolve into mere "answer machines." Leah Belsky, OpenAI's vice president of education, articulated the organization's philosophy: true education, she explained, means preparing students for a rapidly transforming world in which AI will fundamentally define the nature of work, skills, and opportunity. The educational community therefore bears a collective duty to harness these tools constructively, to augment genuine learning rather than undermine it, and to reimagine both pedagogy and assessment in the context of AI's ubiquity.
Instructure, for its part, reiterates its commitment to innovation over prohibition. Watkins emphasized that the company does not intend to "police the tools" but instead seeks to reconceptualize what learning itself can become in an AI-infused environment. Its envisioned solution parallels OpenAI's proposed approach: a cross-sector collaboration among technology developers, academic institutions, educators, and students to define what "responsible AI use" should mean in practice. Yet this vision remains embryonic, more aspirational than actionable. In the meantime, the burden of applying whatever ethical frameworks eventually emerge will fall on teachers, the frontline practitioners who must maintain integrity in classrooms transformed by technologies that advance faster than regulation can keep pace. Commercial contracts have been signed and powerful products deployed long before the necessary educational guidelines have matured, and reversing that momentum now seems all but impossible.
The unfolding story underscores a critical tension at the heart of technological progress: the drive to innovate colliding with the imperative to preserve the essence of learning itself. Beneath the polished rhetoric of empowerment and efficiency lies a deeper question—are AI companies educating the next generation, or simply engineering dependency that ensures a profitable future?
Source: https://www.theverge.com/ai-artificial-intelligence/812906/ai-agents-cheating-school-students