The recently released Rubrik ZeroLabs survey, highlighted by ZDNET, paints a sobering portrait of how artificial intelligence agents are spreading throughout enterprise environments faster than organizations can manage or secure them. According to the findings, only about one in four IT managers—precisely 23%—claim to maintain full control over the AI agents in their systems. This means that in the vast majority of instances, these intelligent assistants, which were originally intended to streamline workflows and enhance efficiency, are instead becoming sources of additional oversight complexity and potential risk. In fact, a significant 81% of respondents report that the agents under their authority now demand more manual auditing, monitoring, and corrective work than the time savings they were meant to deliver. The report further emphasizes that existing security safeguards are proving inadequate, leaving vulnerabilities that could easily be exploited if agent behavior goes unchecked.

The ease with which AI agents can be developed and deployed has dramatically accelerated this issue. The report describes a troubling trend: in the pursuit of convenience or productivity, users often bypass crucial security measures—such as disabling VPNs or circumventing existing IT controls—to generate and use these agents as digital assistants. This behavior leads to an explosion of unapproved and unmonitored AI applications, both internally created and distributed by external vendors. The pattern observed mirrors the early days of cloud technology adoption, when departmental teams rushed to implement their own platforms independently. As noted by Kriti Faujdar, a senior product manager at Microsoft, this proliferation—or “agent sprawl”—creates an ecosystem riddled with fragmentation, inconsistent governance, and hidden security exposures. Each of these problems compounds, eroding organizational control and trust in the very systems that were designed to enhance agility.

The ZeroLabs researchers reveal a profound gap between how IT leaders perceive their command over AI agents and the operational realities on the ground. Nearly nine out of ten managers, or 86%, anticipate that the proliferation of autonomous agents will soon exceed the limits of existing security frameworks, with alarming speed—over half expect this imbalance to become critical within just six months. Additionally, most organizations admit they do not possess robust rollback or “undo” mechanisms to reverse unintended actions initiated by agents, leaving them exposed to cascading errors or unauthorized data alterations.
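The rollback capability the researchers describe can be pictured as a compensating-action log: before an agent performs a change, it registers an inverse operation that can restore the prior state. The sketch below is a minimal, hypothetical illustration (the class and function names are invented for this example, not from the report):

```python
# Minimal sketch of a rollback ("undo") log for agent actions, assuming
# each action registers a compensating inverse before it executes.
class RollbackLog:
    def __init__(self):
        self._undo = []  # stack of (description, inverse_fn)

    def register(self, description, inverse_fn):
        """Record how to reverse an action the agent is about to take."""
        self._undo.append((description, inverse_fn))

    def rollback(self):
        """Reverse recorded actions in LIFO order; return what was undone."""
        undone = []
        while self._undo:
            description, inverse_fn = self._undo.pop()
            inverse_fn()
            undone.append(description)
        return undone

# Example: an agent overwrites a record, but the old value is recoverable.
state = {"record": "new-value"}
log = RollbackLog()
log.register("restore record", lambda: state.update(record="old-value"))
print(log.rollback())   # ['restore record']
print(state["record"])  # old-value
```

Real systems would persist the inverse operations durably and handle actions that cannot be cleanly reversed (e.g. an email already sent), which is precisely why the report flags the absence of such mechanisms as a risk.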

Experts warn that this uncontrolled growth presents an escalating management crisis. As Nik Kale, principal engineer at the Coalition for Secure AI, explains, any technically skilled team with access to an API can launch an AI agent in a single afternoon. Scaled across a large enterprise, that accessibility quickly results in hundreds of independent agents operating simultaneously, often with overlapping permissions, divergent identity models, and no centralized registry to define who owns or monitors each one. Without strong observability into their actions, even identifying the source of a misbehavior or data breach becomes exceedingly difficult. The ZeroLabs report therefore stresses the growing importance of comprehensive telemetry—systematic monitoring tools that can trace the full sequence of decisions and operations an agent executes, ideally bolstered by security enforcement points throughout those chains of activity.
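The kind of telemetry the report calls for amounts to an append-only trace of every action an agent takes, tagged with an accountable owner so a central registry can answer "who owns this agent and what has it done?" The following is a hedged sketch of such a trace record; the schema and identifiers (`AgentTrace`, `invoice-bot-7`, the S3 path) are illustrative assumptions, not part of the report:

```python
import json
import time
import uuid
from dataclasses import dataclass, field, asdict

@dataclass
class AgentTrace:
    """Append-only record of every action an agent takes (hypothetical schema)."""
    agent_id: str
    owner: str                       # the team accountable for this agent
    events: list = field(default_factory=list)

    def record(self, action: str, target: str, detail: dict) -> None:
        # Each event captures what was done, to what, and when, so the
        # full decision chain can be replayed during an investigation.
        self.events.append({
            "event_id": str(uuid.uuid4()),
            "timestamp": time.time(),
            "action": action,        # e.g. "read", "write", "tool_call"
            "target": target,        # dataset, file, service, or tool touched
            "detail": detail,
        })

    def export(self) -> str:
        """Serialize the trace for a central registry or monitoring pipeline."""
        return json.dumps(asdict(self), default=str)

trace = AgentTrace(agent_id="invoice-bot-7", owner="finance-ops")
trace.record("tool_call", "crm.lookup", {"query": "ACME Corp"})
trace.record("write", "s3://reports/q3.csv", {"rows": 120})
print(len(trace.events))  # 2
```

In practice this is the role played by structured tracing standards and security enforcement points sitting between the agent and the resources it touches; the point of the sketch is only that each step must be attributable and replayable.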

To establish more effective oversight, the authors propose five key post-deployment questions that every organization should be able to answer to determine whether an AI agent is functioning safely and productively. First, what did the agent actually do? This involves tracing and reconstructing each action or outcome, akin to replaying a video of its decision-making process. Second, why did the agent do it? Grasping the reasoning or data inputs that guided each step helps uncover logic flaws or unintended influences. Third, what did the agent interact with? A detailed audit trail must list every dataset, file, service, or tool the agent accessed. Fourth, did it achieve its objective responsibly and efficiently? This should be measured not only through task success and return on investment, but also through incident frequency, human intervention triggers, and possible compliance violations. Finally, where did the agent fail, and can that failure be faithfully reproduced for diagnostic purposes? The report concludes that, in many organizations, these foundational questions remain unanswered, undermining efforts to define responsible agent behavior, enforce access boundaries, or implement reliable rollback capabilities.
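To make the five questions concrete, an audit over an agent's logged steps might look like the sketch below. The step schema (action, rationale, resources touched, outcome) is an assumption introduced for illustration; the report itself does not prescribe a data format:

```python
# Hypothetical post-deployment audit over an agent's logged steps.
# Each step records what was done, the stated rationale, the resources
# touched, and the outcome -- mirroring the report's five questions.
steps = [
    {"action": "fetch_invoices", "rationale": "user asked for Q3 totals",
     "touched": ["erp.invoices"], "ok": True, "error": None},
    {"action": "email_summary", "rationale": "deliver result to requester",
     "touched": ["smtp.internal"], "ok": False, "error": "timeout"},
]

def audit(steps):
    return {
        # 1. What did the agent actually do?
        "what": [s["action"] for s in steps],
        # 2. Why did it do it?
        "why": {s["action"]: s["rationale"] for s in steps},
        # 3. What did it interact with?
        "interacted_with": sorted({r for s in steps for r in s["touched"]}),
        # 4. Did it achieve its objective responsibly and efficiently?
        "success_rate": sum(s["ok"] for s in steps) / len(steps),
        # 5. Where did it fail (and with what error, for reproduction)?
        "failures": [(s["action"], s["error"]) for s in steps if not s["ok"]],
    }

report = audit(steps)
print(report["success_rate"])  # 0.5
print(report["failures"])      # [('email_summary', 'timeout')]
```

An organization that cannot populate a record like this for each agent is, in the report's framing, unable to answer the foundational questions at all.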

Underlying this situation is a fundamental trade-off between speed and governance. As Faujdar points out, while companies are eager to capitalize on AI’s acceleration of business processes, the absence of firm guardrails introduces severe risks to trust, auditability, and scalability. To thrive in this evolving environment, organizations must treat agent management not as a secondary operational concern but as a critical discipline—one embedded deeply into infrastructure, policy, and culture. Renze Jongman, founder and CEO of Liberty91, adds another layer of urgency, noting that AI agents evolve unpredictably over time as their underlying models drift. The version that passes certification one quarter may behave materially differently the next, not because of intentional updates, but due to continuous model learning. Therefore, any governance framework must assume ongoing change and build mechanisms to adapt dynamically.

Nik Kale also cautions against overdependence on single-vendor ecosystems. When the orchestration, model, and governance layers of an AI system are all housed within one provider’s platform, an organization effectively hands over control of its agent’s logic, permissions, and accountability in one contract—an arrangement that centralizes both power and liability. Instead, Kale advocates a layered approach to oversight that brings together security specialists, enterprise architects, and business unit leaders to jointly steward AI initiatives. Responsible agent oversight, he stresses, should never be confined to the development team’s desire for rapid deployment. Rather, it must encompass those accountable for strategic outcomes, compliance obligations, and organizational integrity. In short, the future of AI-enabled enterprises will hinge on their ability to combine speed with discipline—maintaining innovation while ensuring every agent’s actions remain traceable, secure, and ultimately aligned with human intent.

Source: https://www.zdnet.com/article/it-managers-say-ai-agents-are-out-of-control/