Organizations deploying AI have focused heavily on prompt engineering as a method for generating the best results, but an emerging technique called context engineering will make AI tools more accurate and useful, experts say.
Adding context to AI models has been an important piece of the puzzle since the start of the modern AI revolution about three years ago. But AI developer Anthropic kicked off a debate about context engineering with a Sept. 29 blog post about why the methodology is critical when rolling out AI agents, and some AI experts see it as the next big competitive advantage as organizations deploy advanced AI systems.
Context can be thought of as the set of tokens used with large language models (LLMs), Anthropic’s engineering team writes.
“The engineering problem at hand is optimizing the utility of those tokens against the inherent constraints of LLMs in order to consistently achieve a desired outcome,” the blog post says. “Effectively wrangling LLMs often requires thinking in context — in other words: considering the holistic state available to the LLM at any given time and what potential behaviors that state might yield.”
Move over, prompt engineering
The practice of prompt engineering, or writing effective prompts, is still needed, with more than 15,500 such jobs listed on Indeed.com as of Oct. 24. But adding context to LLMs, agents, and other AI tools will become just as important as organizations look for more accurate or specialized results from their deployments, AI experts say.
“In the early days of engineering with LLMs, prompting was the biggest component of AI engineering work, as the majority of use cases outside of everyday chat interactions required prompts optimized for one-shot classification or text generation tasks,” Anthropic’s blog post says. “However, as we move towards engineering more capable agents that operate over multiple turns of inference and longer time horizons, we need strategies for managing the entire context state.”
Context can come in the form of documents, memory files, comprehensive instructions, domain knowledge, message histories, and other forms of data.
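To make those categories concrete, here is a minimal Python sketch, with entirely hypothetical names and a made-up insurance scenario, of how the forms of context listed above might be bundled into a single message payload for an LLM call:

```python
from dataclasses import dataclass, field

@dataclass
class ContextBundle:
    """Hypothetical container for the kinds of context listed above."""
    instructions: str                                   # comprehensive instructions
    documents: list[str] = field(default_factory=list)  # retrieved documents / domain knowledge
    memory: list[str] = field(default_factory=list)     # memory files
    history: list[dict] = field(default_factory=list)   # message history

    def to_messages(self) -> list[dict]:
        """Flatten the bundle into a chat-style message list for a model call."""
        system = self.instructions
        if self.memory:
            system += "\n\nRelevant memory:\n" + "\n".join(f"- {m}" for m in self.memory)
        if self.documents:
            system += "\n\nReference documents:\n" + "\n\n".join(self.documents)
        return [{"role": "system", "content": system}, *self.history]

# Usage: everything the model "knows" for this turn is assembled explicitly
bundle = ContextBundle(
    instructions="You are a claims-processing assistant.",
    documents=["Policy 12-B: water damage is covered up to $5,000."],
    memory=["Customer prefers email contact."],
    history=[{"role": "user", "content": "Is my water damage covered?"}],
)
messages = bundle.to_messages()
```

The point of the sketch is that each context source is a deliberate, inspectable input rather than text pasted ad hoc into a prompt.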
It isn’t a new practice for developers of AI models to ingest various sources of information to train their tools to provide the best outputs, notes Neeraj Abhyankar, vice president of data and AI at R Systems, a digital product engineering firm. He defines the recently coined term context engineering as a strategic capability that shapes how AI systems interact with the broader enterprise.
“It’s less about infrastructure and more about how data, governance, and business logic come together to enable intelligent, reliable, and scalable AI behavior,” he says.
Context engineering will be critical for autonomous agents trusted to perform complex tasks on an organization’s behalf without errors, he adds.
Context engineering will also help small language models become domain experts in industries, such as healthcare and finance, that have low tolerance for mistakes, and it will help train AI models tasked with eliminating tech debt on an organization’s specific IT infrastructure challenges, Abhyankar says.
“What we’re witnessing is a fundamental evolution in how enterprises design and deploy AI systems,” he adds. “In the early stages of experimentation, prompt engineering was sufficient to guide model behavior and tone. As organizations transition from pilots to production-scale deployments, they’re finding that prompt engineering cannot deliver the accuracy, memory, or governance required in complex environments on its own.”
Context: A foundational element for AI
Abhyankar predicts that in the next 12 to 18 months, context engineering will move from being an innovation differentiator to a foundational element of enterprise AI infrastructure.
Context engineering is an “architectural shift” in how AI systems are built, adds Louis Landry, CTO at data analytics firm Teradata.
“Early generative AI was stateless, handling isolated interactions where prompt engineering was sufficient,” he says. “However, autonomous agents are fundamentally different. They persist across multiple interactions, make sequential decisions, and operate with varying levels of human oversight.”
He suggests that AI users are shifting from asking, “How do I ask this AI a question?” to asking, “How do I build systems that continuously supply agents with the right operational context?”
“The shift is toward context-aware agent architectures, especially as we move from simple task-based agents to autonomous agentic systems that make decisions, chain together complex workflows, and operate independently,” Landry adds.
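The stateless-versus-stateful distinction Landry draws can be sketched in a few lines of Python. Everything here is hypothetical and illustrative: instead of each prompt being an isolated call, the agent persists message history and accumulates operational context across turns.

```python
class ContextAwareAgent:
    """Illustrative sketch: an agent that persists state across interactions
    rather than treating each prompt as an isolated, stateless call."""

    def __init__(self, instructions: str):
        self.instructions = instructions
        self.history: list[dict] = []    # persists across multiple interactions
        self.facts: dict[str, str] = {}  # operational context gathered over time

    def observe(self, key: str, value: str) -> None:
        """Record operational context (e.g. from tools or monitoring systems)."""
        self.facts[key] = value

    def step(self, user_input: str) -> list[dict]:
        """Assemble the full context state to send to the model on this turn."""
        self.history.append({"role": "user", "content": user_input})
        system = self.instructions
        if self.facts:
            system += "\nOperational context:\n" + "\n".join(
                f"- {k}: {v}" for k, v in self.facts.items())
        return [{"role": "system", "content": system}, *self.history]

# Usage: operational context observed earlier shapes every later turn
agent = ContextAwareAgent("You are a deployment assistant.")
agent.observe("region", "us-east-1")
messages = agent.step("What is the rollout status?")
```

A stateless design would discard `history` and `facts` between calls; keeping them is the architectural shift the quote describes.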
The rise of context engineering won’t bring an end to prompt engineering, however, says Adnan Masood, chief AI architect at digital transformation firm UST.
“Prompts set intent; context supplies situational awareness,” he says. “In real enterprise apps, the ROI comes from engineering the information, memory, and tools that enter the model’s tiny attention budget — every single step.”
Good prompt engineering sets intent with clear instructions and tone, but it has become table stakes for successful AI deployments, Masood says. On top of that intent, context engineering adds situational awareness.
The shift toward context engineering is coming as AI vendors and users move from crafting clever prompts to building repeatable context pipelines, he adds. Accurate and predictable AI results allow the technology to scale beyond dependence on any single well-crafted prompt.
“The bottleneck isn’t just model size; it’s how well you assemble, govern, and refresh context under real constraints,” Masood says. “In practice, that shift is showing up as better answer attribution, lower drift across long sessions, and safer behavior through provenance-controlled inputs.”
IT leaders should act now to treat context as infrastructure, not a prompt file. They should standardize a context pipeline — including curation, processing, and data management — and they should focus on creating privacy controls and audit logs to show what tokens shaped each AI answer.
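As one illustration of that advice, here is a hedged Python sketch of a context pipeline with a curation step and an audit log. All function names, fields, and data are hypothetical; the curation here is a naive keyword filter standing in for real retrieval ranking. The audit entry records which sources, by ID and content digest, shaped a given answer:

```python
import hashlib
import time

def curate(sources: list[dict], query: str) -> list[dict]:
    """Naive curation step: keep only sources mentioning a query term.
    A production pipeline would use retrieval ranking instead."""
    terms = query.lower().split()
    return [s for s in sources if any(t in s["text"].lower() for t in terms)]

def build_context(sources: list[dict], query: str, audit_log: list) -> tuple[str, dict]:
    """Assemble context and append an audit entry recording exactly
    which sources entered the model's context window."""
    selected = curate(sources, query)
    entry = {
        "ts": time.time(),
        "query": query,
        "source_ids": [s["id"] for s in selected],
        "digests": [hashlib.sha256(s["text"].encode()).hexdigest()[:12]
                    for s in selected],
    }
    audit_log.append(entry)
    return "\n\n".join(s["text"] for s in selected), entry

# Usage: only the relevant source enters the context, and the log shows it
sources = [
    {"id": "doc-1", "text": "Water damage is covered up to $5,000."},
    {"id": "doc-2", "text": "Fire policy renewal terms."},
]
audit_log: list[dict] = []
context, entry = build_context(sources, "water damage claim", audit_log)
```

The digest in each audit entry lets reviewers later verify which version of a document was in context, which is one way to implement the provenance-controlled inputs Masood describes.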
“Think beyond prompts and ask your teams to actually think about curating these retrievals and memories that will improve your models and fine-tune them,” he adds. “Invest in scaffolding.”
Operationalizing context for AI
IT leaders should treat context engineering as a knowledge infrastructure problem, not just an AI problem, adds Teradata’s Landry.
“Context engineering requires integration across your data architecture, knowledge management systems, and operational platforms,” he adds. “This isn’t something your AI team solves alone. It requires collaboration between data engineering, enterprise architecture, security, and those who understand your processes and strategy.”
IT leaders should identify processes where they have clean data, clear business rules, and measurable outcomes, then build their context engineering practices on top, he advises.
“Technology leaders who treat context engineering as a one-off AI project will struggle,” Landry adds. “Those who recognize it as a foundational infrastructure discipline, like API management or data governance, will build AI systems that scale and earn organizational trust.”