As AI continues to evolve and mature, organizations are beginning to deploy AI agents, which behave very differently from other forms of AI. Unlike generative or traditional AI, which acts in response to a human prompt or request, AI agents independently perform complex tasks that require multi-step strategies. To accomplish their goals, agents must collect data from myriad sources and interact with internal and external systems.
Machine identities already far outnumber human identities in enterprise networks, and managing them becomes very complex, very quickly. Unfortunately, many of the permissions given to AI agents are far too broad. If agents are compromised, attackers can use them to move laterally across the network, escalate their privileges to steal data, deploy malware and hijack critical internal systems.
When employees find they can’t do their jobs because they don’t have broad enough permissions, they complain, and it gets fixed. Machines, on the other hand, don’t complain. They just break, which creates issues that IT must investigate. Every IT department is overtaxed, so administrators are likely to err on the side of giving the AI agent overly broad privileges. This may make managing AI agents easier in the short term, but it increases the long-term security risk.
Let’s say IT has deployed an AI agent that acts as a chatbot to help sales representatives find information quickly about prospects and customers. This agent will need access to CRM data, but an admin might mistakenly give it broad read-write access to many enterprise databases.
“With these privileges, if bad actors compromise the agent, they could delete records, drop entire databases, take over applications and execute a serious data breach,” says Phil Calvin, chief product officer at Delinea.
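To make the contrast concrete, here is a minimal sketch in Python of what the two permission policies might look like. The identity name, resource labels and policy fields are invented for illustration and do not correspond to any particular product's API.

```python
# Hypothetical illustration (names and scopes are invented): the same chatbot
# agent with an over-broad grant versus a task-scoped, read-only one.

over_broad_policy = {
    "identity": "sales-chatbot-agent",
    "grants": [
        # Read-write on every enterprise database: if the agent is compromised,
        # an attacker can modify or delete records anywhere.
        {"resource": "db:*", "actions": ["read", "write", "delete"]},
    ],
}

scoped_policy = {
    "identity": "sales-chatbot-agent",
    "grants": [
        # Only what the task needs: read-only on the CRM tables the chatbot queries.
        {"resource": "db:crm.accounts", "actions": ["read"]},
        {"resource": "db:crm.opportunities", "actions": ["read"]},
    ],
}

def blast_radius(policy: dict) -> list[str]:
    """List the resources an attacker could alter if this identity were stolen."""
    return [
        grant["resource"]
        for grant in policy["grants"]
        if {"write", "delete"} & set(grant["actions"])
    ]

print(blast_radius(over_broad_policy))  # ['db:*'] -- every database is at risk
print(blast_radius(scoped_policy))      # []       -- read-only, nothing to alter
```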
The ease of spinning up AI agents creates other issues, primarily shadow AI and agent sprawl. It has become possible, even simple, for non-technical employees to download an agent from an open-source site, spin it up and connect it to data sources, all without any input or awareness from IT.
To manage AI agent identities properly, IT first needs to discover all agents in the environment, a process that should be automated and continuous so that new agents come to light as soon as they appear. Next, IT needs a unified view of all machine identities and their permissions so they can be managed efficiently.
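A rough sketch of what such an automated discovery pass could look like is below. The three inventory feeds and their record fields are hypothetical stand-ins for whatever cloud, database and agent-registry APIs a given environment actually exposes.

```python
# A minimal discovery sketch, assuming three invented inventory feeds.
# Real deployments would pull these records from the relevant admin APIs.

from datetime import datetime, timezone

def fetch_cloud_service_accounts() -> list[dict]:
    return [{"id": "svc-reporting", "source": "cloud-iam"}]

def fetch_database_users() -> list[dict]:
    return [{"id": "sales-chatbot-agent", "source": "postgres"}]

def fetch_registered_agents() -> list[dict]:
    return [{"id": "sales-chatbot-agent", "source": "agent-registry"},
            {"id": "hr-summarizer-agent", "source": "agent-registry"}]

known_identities: dict[str, dict] = {}  # the unified inventory IT reviews

def discovery_pass() -> list[str]:
    """Merge all feeds into one view and return identities seen for the first time."""
    newly_seen = []
    for record in (fetch_cloud_service_accounts()
                   + fetch_database_users()
                   + fetch_registered_agents()):
        if record["id"] not in known_identities:
            known_identities[record["id"]] = {
                "sources": set(),
                "first_seen": datetime.now(timezone.utc),
            }
            newly_seen.append(record["id"])
        known_identities[record["id"]]["sources"].add(record["source"])
    return newly_seen

# Run on a schedule so new or shadow agents surface quickly.
print(discovery_pass())  # first pass: everything is new
print(discovery_pass())  # later passes: only identities IT has not seen before
```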
Agent permissions should default to read-only. Agents that need the ability to create, update or delete data should be handled individually and with great care. Next, adhere to the principle of least privilege: if an agent is deployed to give employees easier access to information in internal knowledge bases, there is no reason it should have read access to customer records in the CRM. Restrict access to only the data sources the agent needs to accomplish its tasks.
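The same idea can be expressed as a simple audit check, sketched below with invented field names: every agent is expected to be read-only by default, and any grant that carries write actions or reaches beyond the agent's declared task data sources is flagged for review.

```python
# A least-privilege audit sketch; field names and resources are hypothetical.

WRITE_ACTIONS = {"create", "update", "delete"}

def audit_agent(agent: dict) -> list[str]:
    """Return findings for grants that violate read-only defaults or least privilege."""
    findings = []
    for grant in agent["grants"]:
        if WRITE_ACTIONS & set(grant["actions"]):
            findings.append(f"write access to {grant['resource']} (default should be read-only)")
        if grant["resource"] not in agent["needed_resources"]:
            findings.append(f"access to {grant['resource']} not required for the agent's task")
    return findings

knowledge_base_agent = {
    "identity": "kb-assistant-agent",
    # The agent's job only requires the internal knowledge base.
    "needed_resources": {"kb:articles"},
    "grants": [
        {"resource": "kb:articles", "actions": ["read"]},
        {"resource": "crm:customers", "actions": ["read", "update"]},  # should be flagged
    ],
}

for finding in audit_agent(knowledge_base_agent):
    print(finding)
```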
Delinea has built a cloud-native identity security platform that runs on a global scale to continuously discover, provision and govern all machine and human identities, including AI agents. IT gains a coherent, comprehensive view of all identities, even those not under IT's direct control, via a single pane of glass.
“As an industry, we tend to overcomplicate identity management for our customers,” Calvin said. “At its most basic, an AI agent is just an account, and you need to understand the account sprawl and permissions. We give the customer an easy-to-comprehend view into all of that, which exponentially simplifies management.”