Linus Torvalds and the maintainers of the Linux kernel have adopted a pragmatic, methodical stance toward the use of artificial intelligence tools in the kernel's development process. Their central position is simple but principled: no matter how far AI, and large language models (LLMs) in particular, advance, responsibility for Linux's code remains firmly in human hands. The message is clear: anyone who tries to slip AI-generated patches into the Linux codebase without due care or honesty risks severe consequences, both technical and reputational.
After months of often intense discussion, Torvalds and the kernel maintainers have established the kernel's first formal policy on AI-assisted code contributions. The policy exemplifies Torvalds's characteristic pragmatism, balancing two forces that are often in tension: the eagerness to embrace modern AI development tools and the community's uncompromising standards for code quality, security, and licensing integrity.
At the heart of this policy are three foundational principles that guide its enforcement and expectations:
**1. AI agents cannot use Signed-off-by tags.** Under the new rules, only a human developer can lawfully certify compliance with the Linux kernel’s Developer Certificate of Origin (DCO), which serves as the legal and ethical guarantee that contributed code respects all relevant licenses. In practical terms, even if a portion or the entirety of a patch originates from an AI system, the human submitter — not the AI model, and certainly not its creator or vendor — remains the sole bearer of legal and ethical accountability.
**2. Mandatory Assisted-by attribution.** Developers employing AI tools must disclose their usage by including an Assisted-by tag detailing which models, agents, or auxiliary systems assisted in creating the code. A typical example might read: “Assisted-by: Claude:claude-3-opus coccinelle sparse.” This explicit attribution not only maintains transparency but also provides maintainers with crucial contextual information about how the code came to be.
**3. Full human liability.** Taken together, these provisions make clear that every contributor must personally review AI-generated material, confirm licensing compliance, and verify security and correctness. Should bugs, vulnerabilities, or malicious code surface later, the human contributor alone bears the consequences. Anyone tempted to slip problematic patches into the kernel, echoing the infamous University of Minnesota experiment of 2021, risks exclusion not just from the kernel project but potentially from other reputable open-source communities as well.
The Assisted-by field thus serves a dual purpose: it creates an explicit trail of transparency while also flagging code for potentially deeper examination during review. Maintainers can therefore scrutinize AI-assisted submissions more carefully without stigmatizing developers who use such tools responsibly.
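The mechanics described above amount to checking commit-message trailers. Here is a minimal sketch, not an official kernel tool, of how such a check could work: it verifies that a human Signed-off-by (the DCO certification) is present and that an Assisted-by tag discloses any AI involvement. The tag names follow the article; the commit text itself is hypothetical.

```python
# Hypothetical trailer check for the kernel's AI policy (illustration only).
# A human Signed-off-by certifies the DCO; an Assisted-by line discloses
# which AI tools, if any, helped produce the patch.

def parse_trailers(message: str) -> dict[str, list[str]]:
    """Collect 'Key: value' trailer-style lines from a commit message."""
    trailers: dict[str, list[str]] = {}
    for line in message.strip().splitlines():
        key, sep, value = line.partition(":")
        # Trailer keys are single hyphenated tokens like 'Signed-off-by'.
        if sep and key and " " not in key.strip():
            trailers.setdefault(key.strip(), []).append(value.strip())
    return trailers

def check_policy(message: str, used_ai: bool) -> list[str]:
    """Return a list of policy problems; an empty list means the patch passes."""
    trailers = parse_trailers(message)
    problems = []
    if "Signed-off-by" not in trailers:
        problems.append("missing human Signed-off-by (DCO certification)")
    if used_ai and "Assisted-by" not in trailers:
        problems.append("AI tools were used but no Assisted-by tag discloses them")
    return problems

msg = """mm: fix off-by-one in page range check

The loop bound skipped the final page of the mapping.

Assisted-by: Claude:claude-3-opus coccinelle sparse
Signed-off-by: Jane Developer <jane@example.org>
"""

print(check_policy(msg, used_ai=True))  # → []
```

Note that the human contributor, not the AI, appears in Signed-off-by; the Assisted-by line only records which tools helped, which is exactly the division of accountability the policy encodes.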
The journey toward this policy’s creation was not without controversy. A notable catalyst occurred when Sasha Levin, a highly respected Nvidia engineer and core kernel maintainer, submitted a patch for Linux 6.15 that had been generated in its entirety by AI — even including the changelog and the associated tests. Although Levin had personally audited and validated the code before submission, he initially refrained from revealing the AI’s involvement. This omission drew immediate and sharp criticism from fellow maintainers, igniting a heated debate regarding transparency and authorship in the age of machine-assisted development.
The uproar ultimately led to a constructive outcome. At the Open Source Summit North America 2025, Levin publicly advocated for official AI disclosure guidelines and, later that year, introduced the first draft of what evolved into the kernel's AI policy. His early proposal used a Co-developed-by tag for AI-generated or AI-aided patches. Lively debate followed, both in face-to-face meetings and on the ever-active Linux Kernel Mailing List (LKML), over whether the terminology should distinguish code generated by an AI from code developed collaboratively under human oversight. Eventually, Assisted-by emerged as the consensus choice, capturing the right nuance: AI as a supporting instrument, not a co-author.
According to senior maintainer Greg Kroah-Hartman, this decision arrives at a moment when AI development assistants have genuinely matured. In his words, “something happened a month ago, and the world switched.” Where AI tools once produced unreliable or fantastical results, many now yield actionable insights and even credible security analyses. Nonetheless, the kernel project insists on acknowledging these systems purely as aids to human developers, never as creative entities deserving co-authorship.
The selection of Assisted-by over alternatives like Generated-by was deliberate and informed by several considerations. First, it accurately reflects the nature of current AI involvement; most contributions rely on AI for incremental tasks — suggesting refactors, filling in boilerplate, or generating tests — rather than full-fledged code creation. Second, using a similar tag structure to existing ones such as Reviewed-by, Tested-by, or Co-developed-by ensures consistency within the developer workflow. Third, the expression ‘Assisted-by’ avoids unintended connotations that might mark such contributions as inferior or untrustworthy, aligning with Torvalds’s insistence that AI remain “just a tool.”
Indeed, Torvalds articulated this view plainly in the LKML discussions. He was reluctant to let any statement about AI dominate the kernel's documentation or philosophy, warning against both doomsaying and hype. AI, he reaffirmed, should be treated pragmatically: neither demonized as a threat to human creativity nor romanticized as a revolutionary force destined to replace humans. In his framing, AI assistance should be routine in practice but modest in significance, one tool among many, subordinate to human judgment.
Yet, even with formal rules in place, practical enforcement remains a challenge. Rather than adopting algorithmic detection tools to identify AI-written code, the Linux community continues to rely on what has always been its greatest asset: collective human expertise. Maintainers leverage pattern recognition, stylistic intuition, and decades of technical experience to sense when a patch doesn’t ‘feel right.’ As Torvalds noted in a memorable 2023 comment, the ability to judge the taste and craftsmanship of another person’s code is itself a skill — a kind of artistic literacy developed through years of disciplined coding.
The real difficulty, Torvalds explains, is not filtering out low-quality ‘AI slop’ but identifying the sophisticated patches that appear flawless at first glance yet hide subtle bugs or long-term maintenance pitfalls. Poor patches, human or AI-generated, are easy to reject. The insidious ones are those that blend so well into the codebase that only expert scrutiny can discern their flaws.
Hence, the new policy derives its power less from mechanical enforcement and more from social and professional deterrence. Developers know that should they be discovered submitting dishonest or low-quality code — especially code disguised as human-written when it was not — the consequences will be severe. While Torvalds has mellowed compared to his earlier years, his reprimands remain feared and respected within the community. Any developer who attracts his disapproval for violating community ethics risks lasting reputational damage.
In conclusion, the Linux kernel’s AI policy represents a deeply reasoned compromise between innovation and integrity. It acknowledges the undeniable utility of AI in accelerating development while reaffirming that ultimate responsibility will always rest on human shoulders. The underlying philosophy is timeless: tools may evolve, but accountability must not. Through clear attribution, full transparency, and unwavering human oversight, the policy seeks to ensure that Linux — the world’s most influential open-source project — continues to embody trust, technical excellence, and collective stewardship in the era of intelligent machines.
Source: https://www.zdnet.com/article/linus-torvalds-and-maintainers-finalize-ai-policy-for-linux-kernel-developers/