Beyond automation: Realizing the full potential of agentic AI in the enterprise

This article was co-authored by Shail Khiyara and Sandeep Mehta 

An agent, from the Latin agere or agens ("to act"), is something capable of producing an effect when authorized by another. In software, agents commonly refer to programs that act on behalf of a user or of another program; the concept traces back to the actor model of concurrent computation developed in the 1970s. With the advent of artificial intelligence, agents have taken on additional properties such as basic reasoning, autonomy and collaboration. 

The emergence of software-based automation over the past few decades has occurred alongside advancements in robotics and artificial intelligence. Enterprises have progressively adopted new waves of automation paradigms – from simple scripts and bots to robotic process automation (RPA) and cloud-based automation platforms.  

Today, agentic AI — software agents that exhibit autonomy, adaptiveness and reasoning — represents the frontier. Yet real-world adoption remains uneven, as we will discuss later. While some enterprises pilot small-scale “AI assistant” prototypes, others are grappling with how to orchestrate multiple agents across diverse and complex business processes. Recently emerged agentic protocol frameworks, such as the Model Context Protocol (MCP) from Anthropic and Agent2Agent (A2A) from Google, aim to address interoperability and integration, adding another layer to the AI stack. 

This paper explores the emergence of agentic AI in the enterprise through three key themes: 

  1. Core properties of a true agentic system. 
  2. Challenges and solutions for designing semi-autonomous yet robust AI agents. 
  3. Practical pathways for integrating agentic AI into existing enterprise environments, particularly those constrained by compliance or legacy systems. 

We begin by surveying the essential features of agentic AI, including the original set of characteristics of intelligent agents proposed by researchers at Carnegie Mellon University. We then explore design and orchestration strategies, discuss human oversight and governance, and outline practical examples to illustrate deployment and scaling. 

Defining agentic AI 

Agentic AI is not just about prompts or simple chatbots. In the current era of AI, with the availability of large language models (LLMs) and, more recently, large reasoning models (LRMs), AI agents have taken on wider implications and applications. Unlike traditional software that executes predefined instructions, agentic systems make adaptive, autonomous decisions grounded in reasoning; they are more than software entities that act to fulfill a specified goal. 

The key difference between agentic AI and earlier solutions is this capacity for adaptive, autonomous decision-making built on basic reasoning and autonomy, rather than the strictly rule-based or human-directed actions typical of pre-AI agents. 

8 pillars of agentic AI  

Researchers at Carnegie Mellon proposed a set of desired characteristics of agents; these characteristics enhance an agent's ability to navigate complex environments effectively and to aid decision-making. 

  1. Task-driven. The agent orients itself around clear, human-directed objectives — e.g., “optimize inventory levels” or “automatically reconcile monthly financial statements.” 
  2. Network-enabled. Agents interact with databases, APIs and other agents, forming part of a distributed system. 
  3. Semi-autonomous. While agents can initiate actions on their own, they typically require human-defined constraints and human supervision and approval, especially in regulated or high-stakes workflows.  
  4. Persistent. The agent endures over time, rather than existing only for a single request and action. This persistence underpins the ability to monitor changes and continue functioning without repeated intervention. 
  5. Reliable. The agent should perform dependably to the user’s needs, thus engendering trust in its performance.  
  6. Active. Beyond passive query handling, an agentic system should be able to monitor its environment and proactively initiate actions/alerts or recommend next actions, even when no human prompt is received. 
  7. Collaborative. Agents are designed to work collectively — either with other agents or with humans — coordinating activities such as data sharing, conflict resolution, process handoffs and consensus decision-making. Agents should continually incorporate knowledge from their environment. 
  8. Adaptive and anticipatory. Agents can refine their strategies or models, using new data or sources to improve accuracy, efficiency or other user goals, including ones that may not have been originally stated.  
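
Several of these pillars can be made concrete in a few lines. The sketch below is a hypothetical illustration, not a production design: the `InventoryAgent` class, its threshold values and its method names are all invented for this example. It shows an agent that is task-driven (a stated objective), persistent (state survives across calls), active (alerts without being prompted) and adaptive (revises its own threshold from observed demand).

```python
from dataclasses import dataclass, field

@dataclass
class InventoryAgent:
    """Illustrative agent exhibiting four of the eight pillars:
    task-driven, persistent, active and adaptive."""
    objective: str = "optimize inventory levels"
    reorder_threshold: float = 100.0
    observations: list = field(default_factory=list)  # persistent state

    def observe(self, stock_level: float):
        """Active: record the reading and proactively alert if low."""
        self.observations.append(stock_level)
        if stock_level < self.reorder_threshold:
            return f"ALERT: stock {stock_level} below {self.reorder_threshold}"
        return None

    def adapt(self):
        """Adaptive: re-derive the threshold from recent demand history."""
        if len(self.observations) >= 3:
            recent = self.observations[-3:]
            self.reorder_threshold = 0.5 * (sum(recent) / len(recent))

agent = InventoryAgent()
agent.observe(200.0)          # healthy stock: no alert
alert = agent.observe(80.0)   # below threshold: proactive alert
agent.observe(160.0)
agent.adapt()                 # threshold now tracks recent demand
```

A real agent would of course replace the fixed formula in `adapt` with a learned model, but the shape — persistent state plus proactive monitoring plus self-revision — is the same.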

Enterprise design and deployment 

Adopting agentic AI in the enterprise requires addressing a set of architectural and operational challenges from the outset. While agents promise transformative value, they also introduce complexity and risk that must be mitigated by thoughtful design. 

We highlight both the risk and complexity of agentic systems with recommendations. 

As is common in software abstractions, components bring complexity, be it microservices in the cloud or “too many bots” in RPA. AI agents introduce even more complex layers of interaction and coordination. An enterprise may have a wide variety of user-facing agents, both general-purpose and specialized, and even a next category of developer-facing agents offered as a service. A version of this capability has emerged in agentic protocol frameworks like MCP and A2A. These frameworks are not dissimilar to middleware: they provide standardized communication and integration and reusable components in workflows.

Enterprise software has benefited from evolving layers of software abstraction in distributed computing over the last few decades. However, the frameworks also present challenges of integration and compatibility across legacy enterprise systems, as seen with middleware (i.e., a complexity tradeoff). That complexity, accompanied by risk and security challenges, requires us to be mindful and design guardrails along many dimensions.  

Agent infrastructure 

The surge of interest in deploying agents calls for a rethinking of how agent infrastructure is built. Early in the present AI era, agentic applications were largely DIY, but the demands of enterprise software require layers of infrastructure to build and deploy agentic workflows reliably and at scale. A typical agent infrastructure is a layer cake comprising:  

  • Foundational platforms. Foundation models, multi-agent frameworks and observability layers. 
  • Orchestration layers. Systems for routing, agent coordination, state persistence and task management. 
  • Data services. Specific to model memory, storage, data extraction, ETL, etc. 
  • Tooling. Browser automation, UI integration, service discovery and security modules. 
  • Agentic protocols. Naming, discovery, standards and other aspects of inter-agent communication and interaction have to be well defined.  
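
One way to make the layer cake actionable is to declare it and validate it before deployment. The snippet below is a hypothetical sketch: the layer and component names simply mirror the bullet list above and do not refer to any specific product.

```python
# Declarative view of the agent-infrastructure "layer cake"; names mirror
# the list above and are illustrative, not product references.
AGENT_STACK = {
    "foundational_platforms": ["foundation_models", "multi_agent_framework", "observability"],
    "orchestration": ["routing", "coordination", "state_persistence", "task_management"],
    "data_services": ["model_memory", "storage", "data_extraction", "etl"],
    "tooling": ["browser_automation", "ui_integration", "service_discovery", "security_modules"],
    "agentic_protocols": ["naming", "discovery", "standards"],
}

def missing_layers(stack: dict) -> list:
    """Flag any missing layer of the cake before deploying a workflow."""
    required = {"foundational_platforms", "orchestration", "data_services",
                "tooling", "agentic_protocols"}
    return sorted(required - set(stack))

assert missing_layers(AGENT_STACK) == []          # complete stack
assert "orchestration" in missing_layers({"tooling": []})  # incomplete stack
```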

We will briefly touch upon orchestration and then focus on risk and security guardrails that are needed. 

Agent orchestration 

Orchestration is essential for managing coordination among numerous agents. Common architectures include: 

  • Hierarchical (hub-and-spoke). Centralized control by a “master” or supervisory agent. 
  • Decentralized (peer-to-peer). Agents discover and coordinate through shared protocols. 
  • Hybrid models. Most enterprises require a blend of both approaches.  
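
The hub-and-spoke pattern can be sketched minimally as a supervisory agent that routes tasks to registered workers by capability, with a human-escalation fallback when no worker matches. This is an invented illustration, assuming simple callable workers; a decentralized topology would replace the central router with peer discovery over shared protocols.

```python
class SupervisorAgent:
    """Minimal hub-and-spoke sketch: the 'hub' routes tasks to worker
    agents by capability and escalates to a human when none matches."""
    def __init__(self):
        self.workers = {}

    def register(self, capability, handler):
        """Attach a worker agent (here, any callable) for a capability."""
        self.workers[capability] = handler

    def dispatch(self, capability, task):
        handler = self.workers.get(capability)
        if handler is None:
            return ("escalate_to_human", task)  # no agent can handle it
        return ("handled", handler(task))

hub = SupervisorAgent()
hub.register("invoice_reconciliation", lambda t: f"reconciled: {t}")
status, _ = hub.dispatch("invoice_reconciliation", "May statement")
fallback, _ = hub.dispatch("demand_forecast", "Q3 outlook")
```

In a hybrid model, a supervisor like this would sit at the top while clusters of peers coordinate beneath it.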

Risk and security 

Any AI-augmented automation is subject to risks of data misinterpretation, bias, flawed logic and overconfident speculation (collectively referred to as hallucination), along with temporal and context errors and malicious attempts such as model or prompt poisoning and traditional cyberattacks. 

Further, in highly regulated environments (e.g., HIPAA in healthcare, SOX in finance), companies demand traceability with audit trails. AI agents that autonomously gather and act on data must leave a digital footprint for each decision: 

  • Traceable decision logs. If an agent rejects an invoice or flags a suspicious claim, logs should clarify why it took that action (transparency in its reasoning logic). 
  • Controlled delegation. Maintain human control over high-stakes decisions while delegating low-risk activities to agents. 
  • Circuit breakers. Prevent agents from making critical decisions without verification. 
  • Continuous monitoring and robust error handling. Make compensating adjustments when errors are detected, whether automatically or through human oversight. 
  • Data governance. Access control, anonymization strategies and safe harbor approaches help ensure compliance even when agents are cross-referencing multiple data sources. 
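
Two of these guardrails — traceable decision logs and circuit breakers — combine naturally in code. The sketch below is hypothetical: the invoice scenario, the $10,000 autonomy limit and all names are invented example values, not a recommended policy.

```python
import time

class GuardedAgent:
    """Sketch of two guardrails: an audit log recording why each action
    was taken, and a circuit breaker that holds high-stakes actions for
    human verification. The autonomy limit is an invented example."""
    HIGH_STAKES_LIMIT = 10_000.0

    def __init__(self):
        self.audit_log = []

    def decide_invoice(self, invoice_id, amount, po_match):
        if not po_match:
            decision, reason = "reject", "no matching purchase order"
        elif amount > self.HIGH_STAKES_LIMIT:
            # Circuit breaker: critical decisions need human verification.
            decision, reason = "hold_for_human", "amount exceeds autonomy limit"
        else:
            decision, reason = "approve", "PO matched and within limit"
        # Traceable decision log: every action leaves a digital footprint.
        self.audit_log.append({"ts": time.time(), "invoice": invoice_id,
                               "decision": decision, "reason": reason})
        return decision

agent = GuardedAgent()
agent.decide_invoice("INV-001", 2_500.0, po_match=True)    # approve
agent.decide_invoice("INV-002", 50_000.0, po_match=True)   # hold_for_human
agent.decide_invoice("INV-003", 1_000.0, po_match=False)   # reject
```

The point is not the thresholds themselves but that every branch records a machine-readable reason, which is what auditors under HIPAA- or SOX-style regimes will ask for.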

Agent supervision — Human in the loop 

While greater autonomy can reduce manual work, it also raises questions of safety, security, reliability and ethical oversight. If humans are asked to intervene only in exceptional cases, they may lack context when an alert finally arrives; in critical infrastructure, last-minute human intervention after a sudden context switch is especially fraught with risk. 

Strategies to address this issue include: 

  • Periodic “checkpoints” and inserted friction. Agents pause at specific milestones for human review, especially in risk-sensitive domains like finance or healthcare with vulnerable populations. Additional verification steps can be inserted in those high-stakes use cases.  
  • Maintaining operator engagement. A state of flow is essential to human engagement. Dashboards or notifications that keep operators aware of ongoing tasks, even when no intervention is requested, may not be sufficient on their own. They need to be augmented with training and gaming scenarios that maintain skills, interest and focus in human-machine hybrid workflows. 
  • Training and cultural awareness. Protocols are needed for regular training of human operators to keep their AI awareness and skills up to date, to incorporate their feedback into error handling and to build clear awareness of the system's limitations so that operators don't over-surrender their agency. 
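
The checkpoint strategy can be sketched as a workflow in which low-risk items flow straight through while items above a risk limit queue at a checkpoint until an operator signs off. This is an invented illustration: the `CheckpointedWorkflow` class, its field names and the risk scores are assumptions for the example, not a real system.

```python
class CheckpointedWorkflow:
    """Sketch of 'periodic checkpoints and inserted friction': risky
    items pause for human review; approvals are attributed by operator."""
    def __init__(self, risk_limit):
        self.risk_limit = risk_limit
        self.pending = []    # items held at the checkpoint
        self.completed = []

    def submit(self, item):
        if item["risk"] > self.risk_limit:
            self.pending.append(item)       # inserted friction
            return "held_for_review"
        self.completed.append(item)
        return "auto_processed"

    def approve_pending(self, operator):
        """Human review clears the checkpoint, with attribution."""
        for item in self.pending:
            item["approved_by"] = operator
            self.completed.append(item)
        cleared, self.pending = len(self.pending), []
        return cleared

wf = CheckpointedWorkflow(risk_limit=0.5)
wf.submit({"id": "claim-1", "risk": 0.2})   # flows straight through
wf.submit({"id": "claim-2", "risk": 0.9})   # pauses at the checkpoint
wf.approve_pending(operator="analyst-7")
```

Attributing each approval to a named operator is what keeps the human accountable, and engaged, rather than a rubber stamp.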

Use cases and early lessons learned 

These examples describe some cases where agentic AI can automate workflows and augment human effort.  

  1. Power grid load balancing 
  • Scenario: Agents monitor real-time electricity demand, renewable energy inputs and grid stability across multiple states. 
  • Outcome: Improved grid efficiency, fewer brownouts and savings through automated load distribution and predictive maintenance. 
  2. Document reconciliation and processing 
  • Scenario: The agent ingests data from multiple ERP systems, proactively identifies mismatches, completes forms and corrects errors. Humans intervene only in cases of errors not fixable by retrieval-augmented generation (RAG) review. 
  • Outcome: Organizations typically see faster closure rates and fewer manual errors, though the agent must integrate with multiple data platforms. 
  3. Customer support and ticket resolution 
  • Scenario: Agents triage inbound queries, parse them against continually improving knowledge bases and route complex cases to specialized human reps. Over time, they learn from resolved tickets to improve their handoff accuracy. 
  • Outcome: Faster response times, better resolution rates and improved customer satisfaction metrics — provided a robust fallback exists for uncertain queries. 
  4. Operational monitoring in the supply chain 
  • Scenario: Agents monitor shipment data, predict potential disruptions (weather events, supplier delays) and notify managers proactively. 
  • Outcome: Reduced downtime and more agile rescheduling, with humans in the loop for final decisions on re-routing or supplier switches. 

Key observations 

In all the above examples, the need for guardrails, supervision and human judgment is clear. Orchestration gaps can produce conflicting or erroneous results. Even a small error rate in a model can compound rapidly over multiple steps with multiple agents in a complex orchestrated process, as Demis Hassabis of Google DeepMind recently reiterated. Humans in the loop are essential, but without understanding their cognitive load, we put humans under conditions that make the human-AI hybrid error-prone. Finally, cultural acceptance is key to any automation, and agentic AI is no different. Without employee buy-in and without addressing the fear of job loss, the risk of organizational rejection can be significant. 

Challenges and future directions 

While small proofs of concept look promising, truly enterprise-wide deployments demand robust infrastructure, standardized toolkits and extensive user training. It is important to distinguish the non-deterministic nature of agents, which can take different paths, from traditional rule-based software. Correcting and improving agentic behavior requires many iterations with improved data. In addition, agent infrastructure also needs to incorporate the software practices of lifecycle management, versioning, built-in learning and clearly defined governance and compliance rules (especially in applications for regulated industries). 

Current large language (and reasoning) models excel at pattern matching but can struggle with logic or domain constraints. A neuro-symbolic hybrid — where a symbolic reasoning module enforces rules or knowledge graphs — could improve agent reliability while still leveraging the adaptive strengths of neural models. LLM/LRM-based agentic systems will perform better with the evolution of true reasoning that is currently lacking. 
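
The neuro-symbolic idea can be illustrated in miniature: a neural model proposes an action, and a symbolic layer vetoes proposals that violate hard domain constraints. In the toy sketch below, plain Python predicates stand in for a rule engine or knowledge graph, and the medication rules are invented examples, not clinical guidance.

```python
def symbolic_check(proposal, rules):
    """Apply symbolic domain rules to a neurally generated proposal;
    reject it if any rule is violated, listing the violations."""
    violations = [name for name, rule in rules if not rule(proposal)]
    return {"proposal": proposal, "valid": not violations,
            "violations": violations}

# Invented example rules standing in for a real knowledge graph.
DOMAIN_RULES = [
    ("dose_nonnegative", lambda p: p["dose_mg"] >= 0),
    ("dose_below_max",   lambda p: p["dose_mg"] <= 400),
]

# An LLM might pattern-match its way to an unsafe dose; the symbolic
# layer enforces the hard constraint regardless of model confidence.
result = symbolic_check({"drug": "ibuprofen", "dose_mg": 900}, DOMAIN_RULES)
```

The division of labor is the point: the neural side supplies adaptivity, while the symbolic side supplies guarantees that pattern matching alone cannot.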

Already, we see calls for oversight committees and new guidelines around “trustworthy AI.” In industries like healthcare or finance, organizations must expect more stringent external audits, requiring: 

  • Model transparency, traceability and explainability. Why did the agent produce this recommendation? Can it document and explain the decision process, and can it be subject to control testing in regulated use cases?
  • Compliance-driven fail-safes. Immediate human review if anomalies exceed a pre-defined threshold. 
  • Conducting audits. Defining controls and continuous testing.  
  • Liability frameworks for agents. Formal assignment of responsibility and liability, including potential insurance against agent failures.  
  • Code of ethics and compliance for agent frameworks. 
  • Quality assurance standards and testing specific to agent frameworks, including scenario-based stress testing akin to what banks have to conduct in regulatory exams. 
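
A compliance-driven fail-safe of the kind listed above is, at its core, a threshold check that halts the pipeline. The sketch below is an invented illustration; the anomaly scores and the 0.9 threshold are example values, and a real system would feed the halt signal into an incident-review queue.

```python
def compliance_failsafe(anomaly_scores, threshold=0.9):
    """If any anomaly score exceeds a pre-defined threshold, halt agent
    output and route it to immediate human review; otherwise proceed.
    The 0.9 threshold is an invented example value."""
    flagged = [i for i, s in enumerate(anomaly_scores) if s > threshold]
    return {"halt": bool(flagged), "flagged": flagged}

result = compliance_failsafe([0.10, 0.95, 0.42])
```

Simple as it is, making the threshold explicit and versioned is what lets an external auditor test the control rather than take it on faith.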

Cultivating a culture of readiness

Agentic AI represents a transformative leap in enterprise automation, offering capabilities that extend beyond traditional rule-based systems like RPA. Between simple rule-driven use cases and truly autonomous intelligent applications lie myriad business applications. However, successful deployment hinges on more than sophisticated AI models and agent frameworks. Organizations must cultivate a culture of readiness, characterized by openness to change, robust processes and a balanced approach to risk management. By integrating agentic AI thoughtfully, businesses can unlock new efficiencies and innovation while maintaining ethical and responsible governance. Recommendations include paying attention to and incorporating: 

  • Robust orchestration. Avoid “agent sprawl” by establishing a comprehensive orchestration layer to manage interactions. Implement “supervisor” agents to ensure seamless coordination and maintain human oversight to mitigate risks associated with autonomous decision-making.
  • Adaptive security and data governance. As agents gain more autonomy over sensitive data, stringent measures such as robust audit trails, security access controls and ongoing compliance checks are essential to ensure regulatory adherence.
  • Human oversight and engagement. Humans must remain an integral part of the process — not just for ethical/regulatory reasons, but to prevent skill atrophy and ensure accountability.
  • Iterative scaling. Start with constrained pilots, carefully measure outcomes and expand. This approach mitigates risk and fosters stakeholder buy-in.
  • Ethics and compliance frameworks with clear guidelines.
  • Industry standards. Not just for data governance and security, but also in agentic capabilities and protocols for integration. 

Ultimately, success with agentic AI depends on balancing innovation and agility with organizational readiness — culture, processes and risk tolerance. By honoring these foundational requirements, businesses can tap into the transformational potential of a new era of automation that is both powerful and responsibly governed.

This article is published as part of the Foundry Expert Contributor Network.
May 19, 2025

Post navigation

PreviousPrevious post:Así es el orquestador de agentes de Adobe: personalización a escala para impulsar la creatividad NextNext post:The road to S/4HANA: How CIOs are managing SAP ECC’s end of support

Related posts

Barriers to running AI in the cloud – and what to do about them
May 20, 2025
IoT security: Challenges and best practices for a hyperconnected world
May 20, 2025
SAP goes all-in on agentic AI at SAP Sapphire
May 20, 2025
SAP revamps its cloud ERP application packages
May 20, 2025
5 questions defining the CIO agenda today
May 20, 2025
What is SCOR? A model to improve supply chain management
May 20, 2025
Recent Posts
  • Barriers to running AI in the cloud – and what to do about them
  • IoT security: Challenges and best practices for a hyperconnected world
  • SAP goes all-in on agentic AI at SAP Sapphire
  • SAP revamps its cloud ERP application packages
  • 5 questions defining the CIO agenda today
Recent Comments
    Archives
    • May 2025
    • April 2025
    • March 2025
    • February 2025
    • January 2025
    • December 2024
    • November 2024
    • October 2024
    • September 2024
    • August 2024
    • July 2024
    • June 2024
    • May 2024
    • April 2024
    • March 2024
    • February 2024
    • January 2024
    • December 2023
    • November 2023
    • October 2023
    • September 2023
    • August 2023
    • July 2023
    • June 2023
    • May 2023
    • April 2023
    • March 2023
    • February 2023
    • January 2023
    • December 2022
    • November 2022
    • October 2022
    • September 2022
    • August 2022
    • July 2022
    • June 2022
    • May 2022
    • April 2022
    • March 2022
    • February 2022
    • January 2022
    • December 2021
    • November 2021
    • October 2021
    • September 2021
    • August 2021
    • July 2021
    • June 2021
    • May 2021
    • April 2021
    • March 2021
    • February 2021
    • January 2021
    • December 2020
    • November 2020
    • October 2020
    • September 2020
    • August 2020
    • July 2020
    • June 2020
    • May 2020
    • April 2020
    • January 2020
    • December 2019
    • November 2019
    • October 2019
    • September 2019
    • August 2019
    • July 2019
    • June 2019
    • May 2019
    • April 2019
    • March 2019
    • February 2019
    • January 2019
    • December 2018
    • November 2018
    • October 2018
    • September 2018
    • August 2018
    • July 2018
    • June 2018
    • May 2018
    • April 2018
    • March 2018
    • February 2018
    • January 2018
    • December 2017
    • November 2017
    • October 2017
    • September 2017
    • August 2017
    • July 2017
    • June 2017
    • May 2017
    • April 2017
    • March 2017
    • February 2017
    • January 2017
    Categories
    • News
    Meta
    • Log in
    • Entries feed
    • Comments feed
    • WordPress.org
    Tiatra LLC.

    Tiatra, LLC, based in the Washington, DC metropolitan area, proudly serves federal government agencies, organizations that work with the government and other commercial businesses and organizations. Tiatra specializes in a broad range of information technology (IT) development and management services incorporating solid engineering, attention to client needs, and meeting or exceeding any security parameters required. Our small yet innovative company is structured with a full complement of the necessary technical experts, working with hands-on management, to provide a high level of service and competitive pricing for your systems and engineering requirements.

    Find us on:

    FacebookTwitterLinkedin

    Submitclear

    Tiatra, LLC
    Copyright 2016. All rights reserved.