ZDNET’s Essential Takeaways:
The deployment of AI agents marks a fundamental departure from conventional software release strategies. Unlike traditional programs, whose functionalities are neatly contained within well-defined parameters, agents are dynamic entities capable of independent decision-making. Consequently, governance and oversight cannot be treated as secondary concerns or retrofitted after deployment; they must be intrinsic components of the design from inception. Within this evolving space, a new paradigm — commonly described as ‘AgentOps’ — is emerging to manage these intelligent systems effectively.

While public enthusiasm for AI agents often borders on exuberance, practitioners must remember that success in this domain relies less on hype and more on rigorous groundwork. The operationalization of such systems demands structured planning, resilient infrastructure, and a measured balance between freedom and control. Leaders should empower their agentic systems to act autonomously where appropriate, but within clearly defined limits. Likewise, organizations must reassess traditional metrics such as return on investment, since agentic automation introduces new dimensions of value and risk management that transcend conventional cost-benefit models.

In her analysis for MIT Sloan Management Review, Kristin Burnham synthesizes findings from collaborative research between MIT Sloan and the Boston Consulting Group. She emphasizes that the effective design and governance of AI agents hinge on prudently managing the tensions among control, creativity, and operational efficiency. If developers constrain their agents excessively, they risk stifling the agents’ capacity to learn and act resourcefully. Conversely, granting too much autonomy can expose an organization to unpredictability and risk, producing outcomes that run counter to intended goals. Burnham further notes that agentic technologies compel enterprises to reimagine how they assess costs, schedule deployment timelines, and define success; conventional ROI calculations are no longer sufficient. Finally, organizations face a strategic crossroads: whether to retrofit AI agents into legacy workflows for speed, or to reconstruct those processes entirely to harness the full potential of agentic capability.

Industry Consensus and New Lessons
Across the technology landscape, there is growing agreement that agentic systems necessitate a reorientation of engineering methodologies. These are not static applications but evolving participants in enterprise ecosystems. As organizations experiment and iterate, pioneering teams are documenting new insights. ZDNET spoke with several industry leaders who distilled their field experiences into seven key lessons that are redefining modern AI practice.

1. Governance Matters — Profoundly
Nik Kale, a principal engineer at Cisco, recalls leading the deployment of AI agents intended to deliver highly technical advice to more than 100,000 users. Early prototypes exhibited a critical flaw: they responded with great confidence yet occasionally supplied inaccurate information. This overconfidence, though superficially reassuring, carried reputational and operational risk. To mitigate it, Kale’s team invested heavily in knowledge-grounding techniques and retrieval-based validation systems that anchor responses to verified data. His primary conclusion was unequivocal: governance must be architected from the beginning. Attempting to add oversight retroactively often proves disruptive, as systems may lack the structural points needed for policy enforcement, forcing unscheduled pauses or extensive redesigns.
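
To make the idea concrete, here is a minimal sketch of retrieval-based validation in Python. It is an illustration only, assuming a toy in-memory knowledge base; the names (knowledge_base, retrieve, grounded_answer) are hypothetical and do not describe Cisco’s actual system, where a real vector store and model would replace the stand-ins below.

```python
# Minimal sketch of retrieval-grounded validation. The knowledge base and
# keyword matcher are toy stand-ins for a production retrieval pipeline.

knowledge_base = {
    "vlan": "VLANs segment a physical network into isolated broadcast domains.",
    "ospf": "OSPF is a link-state routing protocol based on Dijkstra's algorithm.",
}

def retrieve(query: str) -> list[str]:
    """Return knowledge-base passages whose key appears in the query."""
    return [text for key, text in knowledge_base.items() if key in query.lower()]

def grounded_answer(query: str, draft: str) -> str:
    """Release the model's draft only if retrieval supports it; otherwise abstain."""
    evidence = retrieve(query)
    if not evidence:
        return "I don't have verified information on that topic."
    # Attach the supporting passage so reviewers can audit the response.
    return f"{draft}\n\nSupporting source: {evidence[0]}"

print(grounded_answer("How does OSPF work?", "OSPF floods link-state advertisements."))
print(grounded_answer("What is BGP?", "BGP is a routing protocol."))  # abstains
```

The key design choice is the abstain path: when no supporting evidence is retrieved, the agent declines rather than answering confidently.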

Kale stresses that trust and operational reliability must evolve in tandem. Once a system earns user trust, humans tend to relax their vigilance—precisely when risks of misuse or unbounded scope can surface. Therefore, authorization boundaries must be explicit and persistent. He advises giving autonomy only in proportion to reversibility: if an agent’s actions have potentially irreversible consequences—especially across domains such as finance, security, or compliance—human oversight must remain non-negotiable. Furthermore, transparency in decision-making, often referred to as “observability,” is as critical as accuracy itself. Understanding how a conclusion was reached matters as much as the conclusion.
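
One way to express “autonomy in proportion to reversibility” in code is a simple authorization gate. The sketch below is a hypothetical illustration, not a pattern Kale describes verbatim: reversible actions run autonomously, while irreversible ones default to escalation unless a human has signed off.

```python
# Illustrative autonomy gate. The action names and the reversible/irreversible
# split are assumptions made for the example.

from dataclasses import dataclass

REVERSIBLE = {"draft_email", "create_ticket", "summarize_report"}
IRREVERSIBLE = {"transfer_funds", "rotate_credentials", "delete_records"}

@dataclass
class Action:
    name: str
    payload: dict

def execute(action: Action, human_approved: bool = False) -> str:
    if action.name in REVERSIBLE:
        return f"executed {action.name} autonomously"
    if action.name in IRREVERSIBLE and human_approved:
        return f"executed {action.name} with human sign-off"
    # Default deny: unknown or unapproved irreversible actions are escalated.
    return f"escalated {action.name} for human review"

print(execute(Action("create_ticket", {})))
print(execute(Action("transfer_funds", {"amount": 500})))
```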

2. Start Narrow — Then Expand Intelligently
Tolga Tarhan, CEO of Atomic Gravity, explains that his organization intentionally launches with narrowly scoped agents. Instead of attempting to construct broad, omnipotent systems, his teams focus on agents specialized in singular domains, defined by explicit goals and measurable outputs—an engineering assistant, for instance, or an operations-support agent handling complex data synthesis tasks for executives. By starting small and gradually widening functionality, teams ensure reliability while minimizing operational chaos.
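
A narrowly scoped agent can be pinned down in a declarative specification. The sketch below is an assumption about what such a spec might contain, a single domain, an explicit goal, a hard tool boundary, and measurable success criteria; it is not Atomic Gravity’s actual schema.

```python
# Hypothetical declaration of a narrowly scoped agent. Field names are
# illustrative only.

from dataclasses import dataclass, field

@dataclass
class AgentSpec:
    domain: str                       # the single domain the agent may operate in
    goal: str                         # explicit, human-readable objective
    allowed_tools: list[str]          # hard boundary on capabilities
    success_metrics: dict[str, float] = field(default_factory=dict)

ops_agent = AgentSpec(
    domain="operations-support",
    goal="Synthesize weekly infrastructure reports for executives",
    allowed_tools=["query_metrics", "summarize"],
    success_metrics={"report_accuracy": 0.95, "turnaround_hours": 4},
)
print(ops_agent)
```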

3. Ensure Data Quality — The Unsung Foundation
“AI performs only as well as the data it consumes,” observes Oleg Danyliuk, CEO of Duanex. His marketing firm built an automated agent to evaluate the quality of incoming business leads. However, challenges arose in collecting relevant social data—much of which is restricted or non-scrapable. To compensate, Danyliuk’s engineers constructed elaborate workarounds to capture public information while maintaining compliance. His conclusion aligns with other experts: data integrity is the single greatest determinant of success. Tarhan echoes this view, reminding teams that models reflect their underlying data. Inadequate, biased, or incomplete datasets inevitably lead to subpar performance, regardless of the sophistication of the algorithms.
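
A small data-quality gate illustrates the principle. In this hypothetical sketch, leads missing required fields are quarantined before any scoring happens; the field names are invented for illustration.

```python
# Toy data-quality gate for incoming leads. Records missing required fields
# are flagged rather than scored.

REQUIRED_FIELDS = ("company", "contact_email", "industry")

def validate_lead(lead: dict) -> tuple[bool, list[str]]:
    """Return (is_valid, list of missing or empty required fields)."""
    missing = [f for f in REQUIRED_FIELDS if not lead.get(f)]
    return (not missing, missing)

ok, missing = validate_lead({"company": "Acme", "contact_email": ""})
print(ok, missing)  # False ['contact_email', 'industry']
```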

4. Begin with the Problem, Not the Technology
Technology alone does not guarantee transformation. As Tarhan explains, success demands a well-defined problem statement and measurable targets before any modeling begins. Establish metrics early, instrument every process for observability, maintain human oversight longer than comfort dictates, and build governance into the lifecycle. Rushing produces spectacular demonstrations with little practical value, whereas disciplined deployment yields enduring systems. His company treats agents as ongoing products—with roadmaps, iterative feedback mechanisms, and continuous refinement—rather than one-time technological experiments.
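
Instrumenting every process can start as simply as timing and logging each agent step. The following sketch uses only Python’s standard library; the decorator and step names are illustrative, not a prescribed framework.

```python
# Minimal observability sketch: each agent step is timed and logged so that
# baseline metrics exist from day one.

import logging
import time
from functools import wraps

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

def instrumented(step_name: str):
    """Decorator that logs the latency of each invocation of a step."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            elapsed_ms = (time.perf_counter() - start) * 1000
            logging.info("step=%s latency_ms=%.1f", step_name, elapsed_ms)
            return result
        return wrapper
    return decorator

@instrumented("classify_request")
def classify_request(text: str) -> str:
    return "billing" if "invoice" in text.lower() else "general"

print(classify_request("Where is my invoice?"))
```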

5. Adopt ‘AgentOps’ Methodologies
Martin Bufi, principal research director at Info-Tech Research Group, leads teams building enterprise-grade agent systems for domains such as compliance monitoring, financial analysis, and document intelligence. He credits their achievements to adopting the emerging discipline of ‘AgentOps’—an operational philosophy dedicated to supervising each phase of an agent’s lifecycle, from conception through retirement. This framework recognizes that agent success depends not only on model capability, but also on processes for training, scaling, auditing, and maintenance.
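
One way to picture an AgentOps lifecycle is as a state machine running from conception through retirement. The stage names below are assumptions for illustration; the article does not define a canonical set.

```python
# Sketch of an AgentOps lifecycle as a state machine with explicit, auditable
# transitions. Monitoring can loop back to evaluation or end in retirement.

LIFECYCLE = ["design", "train", "evaluate", "deploy", "monitor", "retire"]

ALLOWED = {s: {n} for s, n in zip(LIFECYCLE, LIFECYCLE[1:])}
ALLOWED["monitor"] = {"evaluate", "retire"}  # monitoring may trigger re-evaluation

def advance(current: str, target: str) -> str:
    """Move to the next stage, rejecting transitions outside the lifecycle."""
    if target not in ALLOWED.get(current, set()):
        raise ValueError(f"illegal transition {current} -> {target}")
    return target

stage = "design"
for nxt in ["train", "evaluate", "deploy", "monitor", "evaluate"]:
    stage = advance(stage, nxt)
print(stage)  # evaluate
```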

6. Keep Agents Focused and Modular
Rather than designing a single, monolithic agent to handle every function, Bufi recommends building a constellation of specialized agents. Each performs a distinct role, such as analytical evaluation, data validation, query routing, or communication. These modular agents interact through orchestration patterns reminiscent of human teams. In some workflows, a hub-and-spoke configuration enables multi-threaded collaboration; in others, a sequential pipeline ensures that preliminary intent is confirmed and confidence established before deeper tasks proceed.
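
The two orchestration patterns Bufi describes can be sketched in a few lines. In this illustration, the “agents” are plain functions standing in for model-backed services; the hub routes by intent, and the pipeline runs a validation gate before the deeper task.

```python
# Illustrative hub-and-spoke orchestration plus a sequential pipeline.
# Each specialized agent is a stub function here.

def validation_agent(q): return f"validated: {q}"
def analysis_agent(q): return f"analysis of: {q}"
def comms_agent(q): return f"drafted reply to: {q}"

SPOKES = {
    "validate": validation_agent,
    "analyze": analysis_agent,
    "communicate": comms_agent,
}

def hub(intent: str, query: str) -> str:
    """Hub-and-spoke: dispatch the query to the agent matching the intent."""
    agent = SPOKES.get(intent)
    return agent(query) if agent else "no agent for this intent"

def pipeline(query: str) -> str:
    """Sequential pipeline: confirm and validate first, then run the deep task."""
    intent = "analyze" if "report" in query else "communicate"
    checked = hub("validate", query)   # step 1: validation gate
    return hub(intent, checked)        # step 2: specialized agent

print(pipeline("quarterly report figures"))
```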

7. Manage Context and Preserve Adaptability
Sean Falconer, head of AI at Confluent, notes that even seemingly simple single-user agents face immense challenges with context management. As agents repeatedly invoke tools and iterate responses, their internal ‘context window’ rapidly saturates. Older information, while occasionally still relevant, can be deprioritized incorrectly, leading the agent astray. Developers therefore spend exceptional effort optimizing how context is pruned, summarized, and reintroduced to maintain coherence with the user’s original objective. Falconer advises designing with adaptability from the outset. Codebases, APIs, and architectures should remain flexible and insulated from proprietary dependencies to avoid being locked into specific vendors or models. Such flexibility ensures organizations can pivot rapidly as the AI landscape evolves.
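
Context pruning often amounts to folding the oldest turns into a summary while keeping the original goal and the most recent turns verbatim. The sketch below is a toy illustration: the token counter and the summarize() stub stand in for a real tokenizer and a real summarization model.

```python
# Toy context-pruning loop: when the transcript exceeds a token budget, the
# oldest turns are collapsed into a summary and the goal is kept verbatim.

MAX_TOKENS = 50  # toy budget; real context windows are far larger

def tokens(text: str) -> int:
    return len(text.split())  # crude stand-in for a real tokenizer

def summarize(turns: list[str]) -> str:
    # Placeholder: a production system would call a summarization model here.
    return "summary of earlier turns: " + "; ".join(t[:20] for t in turns)

def prune(goal: str, history: list[str]) -> list[str]:
    """Shrink the context by folding the two oldest turns until under budget."""
    context = [goal] + history
    while sum(tokens(t) for t in context) > MAX_TOKENS and len(history) > 2:
        history = [summarize(history[:2])] + history[2:]
        context = [goal] + history
    return context

history = [f"turn {i}: " + "details " * 6 for i in range(6)]
print(prune("goal: resolve user ticket", history))
```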

In sum, deploying AI agents successfully requires a mindset shift across technical, managerial, and ethical dimensions. It is an exercise not in coding alone, but in architecture, governance, and human judgment. Those who discipline themselves to build narrow, transparent, and well-governed systems today will be best positioned to realize the transformative potential of agentic intelligence tomorrow.

Source: https://www.zdnet.com/article/deploying-ai-agents-7-lessons-from-trenches-experts/