When the artificial intelligence tool that Jacob Adamson, an experienced senior software engineer, relies on as an essential part of his daily workflow unexpectedly froze, the moment carried a symbolic weight. As the screen stalled, so did his fingers, hovering over the keyboard. In that brief pause, Adamson became acutely aware of how dependent he had grown on a technology that had once simply augmented his work. For a few seconds, the autonomy and mastery he had honed over years of practice seemed out of reach, leaving him unsure how to complete what should have been a routine coding task without digital assistance. Reflecting on that unsettling moment, he described a kind of cognitive stiffness, much like the feeling of returning to a complex project after several days away. “It was as if the skills had rusted slightly,” he admitted, noting how quickly reliance on AI can dull even the most practiced technical reflexes.
From this personal realization emerged a broader professional concern. Adamson now worries that the same subtle dependency could quietly take hold among the five software engineers he supervises at Varonis, a company that specializes in securing sensitive data. Determined to preserve his team’s foundational capabilities, he has begun considering countermeasures. Among them is the idea of occasional simulation exercises: coding drills in which engineers must rely solely on their own logic, creativity, and problem-solving skills, deliberately abstaining from AI-generated suggestions and auto-completions. Such exercises, he believes, are essential to keeping the team’s skills sharp and ensuring they can still work without technological scaffolding. “We’re very good at what we do,” Adamson emphasized, “but this kind of tool can subtly lull us into complacency, drawing us into a comfort zone where dependence creeps in almost without our noticing.”
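To make the idea concrete, here is a minimal sketch in Python of what such a drill harness could look like. The exercises, the `check_solution` helper, and the pass/fail checks are illustrative assumptions, not Adamson’s actual process at Varonis; the point is simply that engineers write solutions by hand, with assistants switched off, and then verify them against known cases.

```python
# Hypothetical sketch of an "AI-free coding drill" harness. Engineers solve
# each drill by hand, with AI suggestions and completions disabled, then
# check the result here. Drills and checks are invented for illustration.

from typing import Callable

# Each drill maps a description to a check against known cases.
DRILLS: dict[str, Callable] = {
    "fizzbuzz for the classic rules": (
        lambda f: [f(3), f(5), f(15), f(7)] == ["Fizz", "Buzz", "FizzBuzz", "7"]
    ),
    "flatten a nested list one level": (
        lambda f: f([[1, 2], [3], []]) == [1, 2, 3]
    ),
}

def check_solution(drill: str, solution: Callable) -> bool:
    """Report whether a hand-written solution passes its drill's checks."""
    passed = DRILLS[drill](solution)
    print(f"Drill '{drill}': {'PASS' if passed else 'FAIL'}")
    return passed

if __name__ == "__main__":
    # An engineer writes this by hand, assistants off, then verifies it.
    def fizzbuzz(n: int) -> str:
        out = ("Fizz" if n % 3 == 0 else "") + ("Buzz" if n % 5 == 0 else "")
        return out or str(n)

    check_solution("fizzbuzz for the classic rules", fizzbuzz)
```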
His unease reflects a growing tension across industries as AI tools become nearly omnipresent in modern workplaces. From data interpretation and project summarization to code generation and document drafting, artificial intelligence now enables professionals to accomplish complex tasks in a fraction of the time they once required. While executives are eager to showcase the efficiency gains such systems deliver, some are increasingly concerned about the potential price of this new convenience: a gradual erosion of the very human expertise that automation was designed to support. This tension—between productivity gains and the slow drain of practical skill—has become a central paradox of the AI era.
A recent study from the University of Pennsylvania’s Wharton School illustrates this duality vividly. Based on a comprehensive survey of nearly eight hundred senior decision-makers at large U.S. corporations, each employing over a thousand workers and generating annual revenues above fifty million dollars, the report reveals a telling pattern. Roughly three-quarters of respondents credited AI tools with substantially increasing operational efficiency. Yet even among these supporters, forty-three percent acknowledged an unsettling side effect: the possibility of skill atrophy, a phenomenon in which workers gradually lose the ability to execute essential tasks unaided. Jeremy Korst, a partner at GBK Collective, the consulting firm that collaborated with Wharton on the research, summarized the dilemma succinctly: “Leaders are conflicted,” he explained, “over whether AI is empowering employees or quietly becoming a crutch that diminishes professional capability.”
That concern resonates with other managers as well. Sandor Nyako, who oversees a team of approximately fifty software engineers at a large technology corporation, appreciates the obvious advantages artificial intelligence can bring. He sees firsthand how algorithmic tools enable programmers to accomplish once-cumbersome assignments in significantly less time. Yet Nyako deliberately sets boundaries. He cautions his team not to become overly reliant on AI when tackling complex, intellectually demanding problems—those that require genuine critical reasoning, analytical dexterity, and creative insight rather than merely procedural efficiency. “If we start depending too heavily on these systems,” he warned, “we risk intellectual stagnation. We stop pushing ourselves and plateau at whatever level AI props us up to.” In his view, independent thought and self-directed problem-solving are cornerstones of both professional development and human progress. “Skills grow through adversity,” Nyako observed. “People need to wrestle with the difficulty of thinking through problems to expand their mental capacity. Without that struggle, how can anyone even recognize when an AI-generated answer is incorrect?”
Not everyone, however, interprets this transformation as inherently problematic. Some advocates argue that the automation-driven disappearance of certain manual or cognitive abilities is simply the natural progression of human innovation. Phil Gilbert, former head of design at IBM, offers a striking analogy: very few people today can ride a horse proficiently, yet the world continues to move efficiently thanks to the automobile. For him, tools like AI merely shift which skills society values without diminishing overall effectiveness. “What matters most are the outcomes,” Gilbert noted. “As long as we arrive at the desired result—be that a functioning piece of code, a compelling report, or a refined design—the exact methods or inputs used along the way are secondary.” He further emphasized that leveraging AI to enhance performance does not negate the importance of understanding one’s craft. Rather, it should be viewed in the same spirit as using a dictionary or spell-checker: aids that speed up the work without erasing the underlying human knowledge required to use them responsibly. “Just as every writer benefits from basic competence in spelling and grammar,” he explained, “so should every professional understand the fundamentals of their field before delegating tasks to artificial systems.”
From yet another perspective, the emergence of AI may actually challenge workers to acquire new proficiencies instead of rendering them obsolete. Bob Chapman, chairman and former chief executive officer of Barry-Wehmiller, a global manufacturing company, suggests that future employees will need to master new forms of expertise—most notably, the ability to design precise and effective prompts that instruct AI systems to produce accurate and relevant outputs. “Learning how to use AI effectively will itself become a critical skill,” Chapman argued, implying that education must evolve to prioritize practical digital literacy over some traditional academic subjects. “Honestly, I couldn’t tell you much about my high school chemistry class anymore,” he added humorously, “but knowing how to communicate with AI will likely matter far more in the coming decades.”
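As a rough illustration of the kind of skill Chapman has in mind, the Python sketch below shows how a precise, constrained prompt differs from a vague request: it names the scope, the output format, and how the model should handle uncertainty. The `build_review_prompt` helper and its fields are invented for illustration; nothing here comes from Chapman or Barry-Wehmiller.

```python
# Hypothetical sketch of "prompt design as a skill": a precise, bounded
# instruction tends to yield more accurate, relevant AI output than a
# vague one. The helper and its fields are illustrative assumptions.

def build_review_prompt(language: str, code: str, focus: str) -> str:
    """Assemble an explicit, constrained prompt for an AI code reviewer."""
    return (
        f"You are reviewing {language} code. Comment only on {focus}.\n"
        "For each issue, give the offending line and a one-line fix.\n"
        "If you are unsure whether something is an issue, say so "
        "explicitly rather than guessing.\n"
        f"--- code under review ---\n{code}\n--- end ---"
    )

if __name__ == "__main__":
    snippet = "def div(a, b):\n    return a / b"
    # Contrast this bounded prompt with a vague "review this code" request.
    print(build_review_prompt("Python", snippet, "error handling"))
```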
Still, others insist that no wave of innovation can justify letting foundational knowledge decay. They worry that in the near future, an entire generation of professionals will mature in an environment where artificial intelligence is omnipresent, having never developed the baseline competencies once considered indispensable. Adamson, reflecting on his own brief struggle, anticipates precisely this risk. “Future engineers may simply never build the same intuitive skill set that I had to develop early in my career,” he warned. That gap, he argued, is particularly concerning because AI, despite its dazzling capabilities, is far from infallible. “These systems can still produce errors, distort data, or misinterpret context,” he explained. “Someday AI may overcome these limitations, but for now, it’s a potent tool that remains fallible—and that means the human brain must stay in practice.”
Source: https://www.businessinsider.com/leaders-worry-about-skill-atrophy-due-to-ai-adoption-2025-10