To integrate AI agents effectively within any organization, the concept of *context engineering* plays a pivotal role. True success in adopting agentic AI technologies hinges not merely on algorithmic capability, but on how well the AI can comprehend and operate within the multifaceted environment defined by data, metadata, workflows, processes, and institutional dynamics. Context engineering serves as the foundation ensuring that your organization’s information—structured or unstructured—is prepared for intelligent consumption by AI agents that act with purpose and autonomy.
Consider the workplace analogy: why do your current employees typically surpass even the most promising new hire, at least in the beginning? And why is a structured onboarding period so crucial before that new talent becomes truly effective? The answer lies in *institutional knowledge*—the cumulative, often tacit understanding shared by tenured employees who not only “know” the job’s mechanics but also grasp the nuanced rhythms of company culture, internal processes, proprietary tools, team personalities, and customer expectations. The new hire’s expertise may be impeccable, yet their productivity curve depends on how quickly they absorb these contextual subtleties.
In the realm of AI, this institutional wisdom manifests as *context*. AI agents resemble the talented “rockstar” recruits, equipped with considerable potential but initially detached from the organization’s inner workings. Unlike humans, these digital agents can be “onboarded” in minutes rather than months—but the fidelity of their performance scales entirely with the quality and depth of context they receive. The richer and more precise the contextual data, the more accurately and intelligently the AI behaves.
When people hear that AI performance improves with reliable or high-quality data, they often assume this refers to customer records, user analytics, or transaction logs. Yet, the necessary data scope is far more expansive. AI requires not only the factual content of databases but also descriptive layers that capture how the organization functions and why certain decisions are made—the metadata of institutional knowledge. This collective framework is what we call *context*.
**Understanding Context**
To harness AI’s capabilities responsibly and efficiently, one must first grasp the composition and categories of context—its origin, structure, and degree of organization. Context can be drawn from structured sources like metadata tables and enterprise applications or from unstructured material such as meeting transcripts, HR documents, or brand guideline PDFs. All these sources shape how the AI system perceives its operational world.
You may have heard discussions about AI models featuring “large context windows.” Systems like Anthropic’s Claude, with a one-million-token window, or OpenAI’s ChatGPT 5.2, with around 400,000 tokens, can indeed process vast text volumes. However, these capacities remain insufficient for encompassing an entire enterprise’s contextual complexity. For instance, a single Salesforce org configuration involving twenty intricate Apex classes might already exceed 250,000 tokens. This underscores the necessity of *context engineering*: selecting, refining, and constructing only the most pertinent contextual segments for the AI’s assigned function.
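The selection step can be sketched in code. The following is a minimal, illustrative sketch, not a production retrieval pipeline: token counts are approximated by word count (a real system would use the model’s own tokenizer), and relevance is scored by naive keyword overlap (real systems typically use embedding similarity). All snippet text and names are hypothetical.

```python
# Minimal sketch of packing the most relevant context into a token budget.
# Token counts are approximated; a real system would use the model's tokenizer.

def approx_tokens(text: str) -> int:
    # Rough heuristic: ~1.3 tokens per English word.
    return int(len(text.split()) * 1.3)

def relevance(snippet: str, task: str) -> int:
    # Naive keyword-overlap score; embeddings would be used in practice.
    task_words = set(task.lower().split())
    return sum(1 for w in snippet.lower().split() if w in task_words)

def build_context(snippets: list[str], task: str, budget: int) -> list[str]:
    """Greedily keep the most relevant snippets that fit the budget."""
    ranked = sorted(snippets, key=lambda s: relevance(s, task), reverse=True)
    chosen, used = [], 0
    for s in ranked:
        cost = approx_tokens(s)
        if used + cost <= budget:
            chosen.append(s)
            used += cost
    return chosen

snippets = [
    "Refund policy: customers may return goods within 30 days.",
    "Brand voice: friendly, concise, no jargon.",
    "Apex trigger OrderSync updates inventory on order close.",
]
task = "Draft a customer reply about a refund request"
print(build_context(snippets, task, budget=20))
```

The point of the sketch is the shape of the problem: ranking candidate context against the task at hand, then packing under a hard budget, rather than sending everything to the model.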
**Context Engineering in Practice**
Within modern corporations, context appears in both structured and unstructured forms. Human employees intuitively bridge missing links in vague documents using experience and judgment—abilities machines lack. While AI agents can now parse unstructured input, they remain susceptible to confusion when encountering inconsistency or ambiguity, which in turn produces “hallucinations” or erroneous reasoning. Thus, context engineering demands curating and structuring data so that it becomes comprehensive, coherent, and AI-readable.
The context provided to an AI agent should be complete, relevant, and appropriately scoped to its designated tasks. Overloading the model with excessive data not only wastes compute resources but may also degrade performance due to context-window limitations. The effective strategy involves analyzing end-to-end workflows to pinpoint precisely which data sources, systems, and documents define the agent’s role. Parsing these repositories accurately often entails drawing connections across multiple platforms—Salesforce’s ecosystem, for example, includes Data360, Informatica, MuleSoft, and Tableau—each capturing distinct facets of organizational context.
**Putting Context into Context**
To deliver task-specific intelligence, AI agents must access well-defined contextual data that integrates documented business processes with the underlying application configurations. These configurations embody dependencies and relationships—sometimes intricate combinations of metadata referencing other metadata—that explain not only *what* happens but also *why* and *how* it happens. Process diagrams then introduce another dimension, depicting where human involvement intersects digital operations. Their quality varies across enterprises: front-office departments often maintain fragmentary documentation, whereas back-office functions in regulated sectors tend to have precise process maps. To exploit AI’s full potential, enterprises must modernize, rationalize, and continuously refine these processes—an echo of the process reengineering wave of the 1990s, albeit now more data-intensive.
However, the translation of technical architecture into AI-readable context remains challenging. Legacy systems burdened by technical debt obscure dependencies and distort metadata clarity. Therefore, organizations must apply advanced analytical workflows—often employing chains of interlinked micro-agents—to interpret, structure, and validate their metadata effectively.
**Evaluating Content Readiness for AI**
Before AI deployment, each content category demands systematic examination through five essential questions:
1. Does the information exist, who is its custodian, and what drives their engagement with this initiative?
2. Is the content current and governed by an ongoing maintenance mechanism?
3. Has it been composed or adapted to minimize ambiguity for AI interpretation?
4. Where should it reside to ensure accessibility to AI while upholding security and compliance standards?
5. How should it be structured, tagged, and balanced between detail granularity and token efficiency?
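The five questions above can be turned into a simple scorecard per content category. This is a toy sketch with illustrative field names, not a standard schema; real assessments would capture owners, dates, and remediation notes rather than booleans.

```python
# Toy readiness scorecard mapping each content category to the five questions.
from dataclasses import dataclass

@dataclass
class ContentReadiness:
    category: str
    exists_with_owner: bool      # Q1: exists, custodian identified and engaged
    maintained: bool             # Q2: current, governed by a maintenance process
    ai_adapted: bool             # Q3: written to minimize ambiguity for AI
    accessible_securely: bool    # Q4: reachable by AI, security/compliance intact
    structured_and_tagged: bool  # Q5: tagged, balanced for token efficiency

    def score(self) -> int:
        # Each satisfied question contributes one point out of five.
        return sum([self.exists_with_owner, self.maintained, self.ai_adapted,
                    self.accessible_securely, self.structured_and_tagged])

culture = ContentReadiness("company_culture", True, False, False, True, False)
print(f"{culture.category}: {culture.score()}/5 ready")
```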
Among the multiple content categories, three merit deeper exploration: *company culture*, *business operations*, and *application configuration*.
**Company Culture**
Cultural knowledge constitutes both codified materials shared during onboarding and intangible insights accumulated organically. AI agents, unlike new human hires, require an immediate infusion of this cultural intelligence. Relevant artifacts include brand books, annual reports, policy documents, marketing visuals, and even the stylistic tone of customer communications. These elements, distributed among departments with differing priorities, must be harmonized so that AI encounters a unified representation of organizational ethos.
Such documents are often updated upon rebranding or leadership transitions, so their currency must be assessed carefully. Moreover, language crafted for human audiences may contain implicit assumptions that confuse AI, necessitating additional explanatory annotations. Since much of this content—videos, design files, reports—remains unstructured, its integration typically involves transcription and indexing through systems like Data360. Security, too, is paramount: the amalgamation of otherwise harmless datasets can inadvertently expose sensitive intellectual property once correlated. Designing granular access controls mitigates this risk. Structuring and tagging should strike a balance between contextual completeness and efficient data consumption.
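The combination of tagging and granular access control can be illustrated with a small sketch. This is not a real Data360 API; the document types, tags, and clearance levels are invented for illustration.

```python
# Illustrative index of unstructured cultural content, with tags for
# retrieval and clearance labels so an agent only sees what it is cleared for.
from dataclasses import dataclass, field

@dataclass
class Doc:
    title: str
    text: str
    tags: set[str] = field(default_factory=set)
    clearance: str = "public"   # e.g. "public", "internal", "restricted"

LEVELS = {"public": 0, "internal": 1, "restricted": 2}

def retrieve(index: list[Doc], tag: str, agent_clearance: str) -> list[Doc]:
    """Return docs matching a tag that the agent is allowed to read."""
    limit = LEVELS[agent_clearance]
    return [d for d in index if tag in d.tags and LEVELS[d.clearance] <= limit]

index = [
    Doc("Brand book", "Tone: warm, direct, plain English.", {"brand", "tone"}),
    Doc("M&A memo", "Confidential acquisition plans.", {"strategy"}, "restricted"),
]
print([d.title for d in retrieve(index, "brand", "internal")])
```

Filtering at retrieval time, rather than after the fact, is what prevents the “harmless datasets correlated into sensitive IP” failure mode described above: the restricted memo never enters the agent’s context in the first place.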
**Business Operations and Processes**
Process documentation outlines the procedural scaffolding on which AI agents rely to deliver outcomes and collaborate with other systems. Although most organizations maintain process diagrams, these are frequently outdated or inconsistent. The remedy lies in focusing documentation efforts precisely where the AI will operate. Modern AI tools can expedite initial drafts by translating notes or metadata into process visualizations, refining them in consultation with business leaders.
The process improvement cycle itself must become a living mechanism, ensuring that AI agents referencing these maps act in alignment with current realities. Clear, updated, and quality-controlled process documentation is the only language AI truly understands. To make this content machine-accessible, unstructured diagrams can be transformed into structured formats—such as JSON—maximizing interpretability.
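One hypothetical shape for such a structured process map is shown below. The step names, actors, and field layout are illustrative, not a formal standard (real deployments might use BPMN-derived schemas instead); the idea is that owners, hand-offs, and human touchpoints become explicit, machine-checkable fields.

```python
# A swim-lane diagram rendered as JSON-ready data: steps, actors, hand-offs.
# "actor" makes explicit where humans enter the loop.
import json

process_map = {
    "process": "customer_refund",
    "steps": [
        {"id": "s1", "name": "Receive request",   "actor": "ai_agent",      "next": "s2"},
        {"id": "s2", "name": "Check eligibility", "actor": "ai_agent",      "next": "s3"},
        {"id": "s3", "name": "Approve over $500", "actor": "human:finance", "next": "s4"},
        {"id": "s4", "name": "Issue refund",      "actor": "ai_agent",      "next": None},
    ],
}

# A quick structural check: every hand-off points at a real step.
ids = {s["id"] for s in process_map["steps"]}
assert all(s["next"] is None or s["next"] in ids for s in process_map["steps"])

serialized = json.dumps(process_map, indent=2)  # ready to hand to an agent
print(serialized.splitlines()[0])
```

Because the map is data rather than a picture, currency checks and validation (do all hand-offs resolve? does every step have an owner?) can run automatically as part of the living process-improvement cycle.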
**Application Configuration**
Metadata captures the data models, logical flows, and access parameters defining each enterprise application. When AI agents interact across multiple applications, architectural and integration diagrams provide the additional connective tissue describing inter-system dependencies. These metadata repositories, such as those managed by Informatica or Elements.cloud, become essential for ensuring that AI operates within accurate system boundaries.
Since metadata is inherently structured and self-descriptive, it can be stored in virtually any database format. However, because the volume is immense, careful selection is necessary to avoid overwhelming token constraints. Context engineering links this technical layer back to operational objectives—ensuring each AI action corresponds to a well-defined process and dataset lineage.
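That selection step can be sketched as a dependency walk. The artifact names below are invented for illustration, and real metadata graphs (from tools like Informatica or Elements.cloud) are far larger; the technique shown is simply a breadth-first traversal from the artifacts a task touches, so unrelated metadata never consumes tokens.

```python
# Sketch: walk a metadata dependency graph from a task's root artifacts,
# pulling in only the metadata that task actually needs.
from collections import deque

deps = {
    "Flow:RefundApproval":     ["Object:Refund", "ApexClass:RefundService"],
    "ApexClass:RefundService": ["Object:Refund", "Object:Payment"],
    "Object:Refund":           [],
    "Object:Payment":          [],
    "Flow:LeadRouting":        ["Object:Lead"],   # unrelated to refunds
    "Object:Lead":             [],
}

def relevant_metadata(roots: list[str]) -> set[str]:
    """Breadth-first walk of the dependency graph from the task's roots."""
    seen, queue = set(roots), deque(roots)
    while queue:
        for child in deps[queue.popleft()]:
            if child not in seen:
                seen.add(child)
                queue.append(child)
    return seen

print(sorted(relevant_metadata(["Flow:RefundApproval"])))
```

Note that `Flow:LeadRouting` and `Object:Lead` are never visited: scoping by dependency, rather than dumping the whole metadata repository, is what keeps the technical layer inside the token budget while preserving lineage back to the process the agent serves.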
**The Essence of Context in Communication**
A familiar saying asserts that words represent only seven percent of communication, leaving tone and visual cues responsible for the remaining ninety-three percent. If we translate this to AI, the commands and prompts we provide constitute merely the “words.” Without supplementary context—such as relationships, priorities, timing, sentiment, and intent—AI’s interpretations will inevitably suffer from misalignment or hallucination. Supplying contextual depth is equivalent to providing tone of voice, body language, and emotional nuance for a machine.
**Implementing Context Engineering: A Strategic Framework**
Though the term may seem new, context engineering formalizes processes humans already perform instinctively within organizations. By structuring what employees learn over years into machine-readable form, companies can elevate AI systems from generic chatbots to informed digital collaborators. Yet the success of this transformation depends on meticulous data curation, strict governance, and a deep appreciation for institutional subtleties.
To orchestrate this transformation effectively, consider three key actions:
1. **Document AI scope comprehensively** – define each agent’s complete process responsibilities and intended outcomes.
2. **Identify and calibrate key contextual data** – determine what information the AI must understand to perform accurately and measure its quality and completeness.
3. **Format and manage your context** – organize and store the contextual information within robust platforms capable of curating and delivering it seamlessly to AI agents.
In essence, context engineering bridges the human and the artificial: it distills lived organizational experience into structured knowledge that machines can process instantly. Those who master it will not only streamline AI adoption but fundamentally amplify the cognitive capability of their entire enterprise.
*Co-authored by Ian Gotts, Senior Research Fellow at Keenan Vision, co-founder of Elements.Cloud, ten-time author, technology advisor, speaker, and investor.*
Source: https://www.zdnet.com/article/context-engineering-key-to-onboarding-agentic-ai-success/