Artificial intelligence is fundamentally reshaping professional work, and to better understand that transformation, Anthropic turned its analytical lens inward. The company conducted a self-study of its own workforce and published the findings in a blog post released on Tuesday. The research, carried out in August, combined empirical data, employee reflections, and behavioral insights from 132 engineers and researchers, including 53 in-depth qualitative interviews and a detailed assessment of how Anthropic staff use Claude Code, the company’s agentic coding assistant. The goal was twofold: to understand how artificial intelligence is reshaping the immediate professional environment within Anthropic and, more broadly, to shed light on what such changes imply for the future of work.
According to the report, Anthropic’s researchers concluded that artificial intelligence is radically altering the day-to-day realities of software development, generating excitement about new possibilities while provoking apprehension about long-term impacts. Employees consistently reported increases in personal productivity and described themselves as becoming more “full stack”—a term used in engineering to describe individuals who can handle an increasingly wide range of technical responsibilities. The shift suggests that access to advanced AI tools has empowered workers to step beyond traditional boundaries between roles and skill sets.
The data illustrated concrete examples of this expanded capacity. Approximately 27 percent of the tasks that were completed with Claude’s assistance consisted of work that, under previous circumstances, would never have been undertaken at all. These tasks typically included enhancements that were valuable but not mission-critical—such as creating additional data dashboards or scaling secondary projects—that would have been too time-consuming or expensive to justify through manual effort. In essence, the presence of AI did not merely accelerate existing processes but enabled new categories of creative and analytical exploration within the company.
However, this growing reliance on Claude also introduced novel dynamics regarding automation and human agency. Employees estimated that between zero and twenty percent of their responsibilities could be fully delegated to the AI assistant, usually those activities described as “easily verifiable” or, in the words of the report, “boring”—tasks that involve repetition, documentation, debugging, or other work that rewards efficiency more than creativity. While this delegation freed time and cognitive resources for higher-level problem-solving, it simultaneously triggered underlying concerns about dependence on AI and its implications for professional identity.
Some participants expressed unease with how ubiquitous AI collaboration had become in their daily routines. Several employees observed that increased interaction with AI tools corresponded with a reduction in collaboration among human colleagues: where discussions, mentorship, or cooperative troubleshooting once occurred organically, Claude had become the default source of assistance. Others voiced existential concerns, speculating on whether prolonged partnership with advanced AI systems might eventually render certain human roles redundant—a subtle yet persistent fear of automating oneself out of relevance.
Beyond job security, another source of discomfort stemmed from what participants described as the gradual “atrophy of deeper skillsets” essential to writing, reviewing, and debugging code. One engineer remarked that when producing tangible output became so effortless, it paradoxically became harder to engage deeply in the learning processes that traditionally refine expertise. The ease of instant AI-generated solutions risked discouraging the slower, more reflective practice of truly understanding underlying concepts, leading to worries that overreliance could erode craftsmanship.
Several employees also noted a more human, emotional cost: a diminished sense of community and mentorship. The report observed that Claude had effectively become the primary point of contact for questions that in the past might have gone to more senior teammates. One participant shared a candid reflection—stating that while they appreciated the efficiency of AI, they missed the interpersonal exchanges that once fostered professional growth. Junior developers, in particular, seemed to seek help less frequently from experienced mentors, subtly shifting the company’s learning culture toward solitary AI consultation rather than collaborative problem-solving.
This transformation brought with it a mixture of optimism and trepidation. Some engineers described feeling hopeful about short-term productivity gains and empowerment, yet simultaneously pessimistic about the longer horizon, where they feared AI systems might eventually absorb so many capabilities that human contributions could lose their significance. Others admitted a growing uncertainty about what their roles might resemble in the coming years—a recognition that the integration of autonomous AI tools introduces unpredictable pathways for both opportunity and disruption.
The phenomena observed within Anthropic mirror broader workforce trends. Across industries, professionals increasingly express curiosity about leveraging AI to enhance performance, even as they wrestle with ethical and existential questions surrounding such integration. A McKinsey workplace survey conducted in January, which polled 3,613 participants, found that 39 percent self-identified as “Bloomers,” a term used to describe individuals optimistic about AI who wish to co-create responsible technological solutions alongside their employers. An additional 20 percent indicated support for rapid AI adoption with minimal regulatory barriers. Interestingly, McKinsey’s data also revealed that even those who remained skeptical of AI’s promises demonstrated substantial familiarity with generative AI tools—illustrating how pervasive this technology has become regardless of individual sentiment.
Together, Anthropic’s internal findings and the broader industry data suggest a profound redefinition of what work means in the age of artificial intelligence. AI systems such as Claude Code are not merely accelerating output but are transforming interpersonal dynamics, skill development, and professional identity itself. Whether this shift ultimately empowers innovation or erodes essential human craftsmanship will depend on how organizations balance automation with intentional cultivation of creativity, collaboration, and learning in the years to come.
Source: https://www.businessinsider.com/anthropic-studied-own-engineers-for-how-ai-is-changing-work-2025-12