The average person makes about 35,000 decisions every day.
Those aren’t just simple, isolated choices; they form chains of decisions that branch in different directions along the way. And circumstances always change, so a person’s decision at a particular branch point might vary depending on any number of factors. You always walk to work, for instance, but a morning thunderstorm might change that routine.
The people building agentic AI want it to work through the same kind of autonomous decision-making.
Much of the AI work before agentic systems focused on large language models, where the goal was to prompt the model and extract knowledge from unstructured data. At its core, that’s a question-and-answer process.
Agentic AI goes beyond that. You can give it a task that involves a complex set of steps, and those steps can change each time.
For example, in the digital identity field, a scientist could be handed a batch of data and tasked with producing verification results. It sounds easy, but the steps below the surface are complex and vary with each data set, whether the deciding factor is the person’s age, location, or something else.
Could agentic AI accomplish that task? Could it work through complex, dynamic branch points, make autonomous decisions and act on them? That requires stringing logic together across thousands of decisions. I’ve spent more than 25 years working with machine learning and automation technology, and agentic AI is clearly a difficult problem to solve.
A potential game-changer for and against fraud
The more complicated a system is, the more vulnerable it is to attack. Agentic AI worries me on that front because fraudsters can use the technology to exploit weaknesses in security.
Document verification, for instance, might seem straightforward, but it involves multiple steps, including image capture and data collection, behind the scenes. That creates a large surface area for fraudsters to probe with agentic AI, and they can do it far faster with that technology.
It gets kind of scary.
But there are defenses. One of the best is a penetration test that checks for ways someone could access a network. Organizations could use agentic AI against themselves in much the same way, like a red team exercise. The technology could also serve as a monitoring tool that watches multiple parameters for anything abnormal. It’s even possible to train agentic AI to recognize its own kind, flagging verification responses that are likely coming from a machine rather than a person.
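To make that monitoring idea concrete, here is a minimal sketch in Python that flags verification parameters drifting far from a historical baseline using a simple z-score check. The parameter names, baseline values, and threshold are all hypothetical, invented for illustration rather than drawn from any real verification product.

```python
from statistics import mean, stdev

# Hypothetical verification parameters a monitor might watch.
# Baselines would come from historical, known-good sessions.
BASELINES = {
    "capture_time_ms":   [850, 900, 910, 870, 880, 905, 895, 860],
    "response_delay_ms": [1200, 1100, 1250, 1180, 1220, 1150, 1190, 1210],
}

Z_THRESHOLD = 3.0  # flag anything more than 3 standard deviations from baseline

def flag_anomalies(session: dict[str, float]) -> list[str]:
    """Return the names of parameters that deviate sharply from baseline."""
    flagged = []
    for name, observed in session.items():
        history = BASELINES.get(name)
        if not history or len(history) < 2:
            continue  # no baseline to compare against
        mu, sigma = mean(history), stdev(history)
        if sigma == 0:
            continue
        if abs(observed - mu) / sigma > Z_THRESHOLD:
            flagged.append(name)
    return flagged

# A session with a suspiciously fast, machine-like response time.
print(flag_anomalies({"capture_time_ms": 880, "response_delay_ms": 5}))
# -> ['response_delay_ms']
```

A real deployment would watch many more signals and adapt its baselines over time, but even this simple pattern shows how abnormal, bot-like behavior can surface from ordinary telemetry.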
The convergence of use case, compliance, and fear of the unknown
If we told agentic AI to onboard a customer or a business, can it do it in a way that meets compliance requirements?
Business verification might sound like an ideal use case for the technology. Business sizes vary widely, and it’s difficult to verify across that spectrum. Beyond that, each business has ultimate beneficial owners who require identity document verification.
Agentic AI could manage those separate steps and logic chains. It could take specific actions depending on the size of a business.
Digital verification, though, operates under a strict set of rules. An agentic AI could onboard a business, but it might be hard-pressed to do so in a compliant way because the process isn’t the same every time. It would be difficult to explain what the system did or what it’s going to do.
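Here is a hedged sketch of what that size-dependent branching might look like, with an audit trail recording why each step ran, since explaining the decision chain is exactly where the compliance difficulty lies. The tiers, thresholds, and step names are invented for illustration, not drawn from any real compliance regime.

```python
from dataclasses import dataclass, field

@dataclass
class OnboardingRun:
    business_name: str
    employee_count: int
    audit_log: list[str] = field(default_factory=list)  # why each step ran

    def record(self, step: str, reason: str) -> None:
        self.audit_log.append(f"{step}: {reason}")

def onboard(run: OnboardingRun) -> list[str]:
    """Choose verification steps based on business size; log every choice."""
    steps = ["registry_lookup"]
    run.record("registry_lookup", "baseline check for all businesses")

    # Hypothetical size tiers -- real thresholds would come from policy.
    if run.employee_count < 10:
        steps.append("owner_id_document")
        run.record("owner_id_document", "small business: verify the owner directly")
    else:
        steps.append("ubo_discovery")
        run.record("ubo_discovery", "larger business: identify beneficial owners")
        steps.append("ubo_id_documents")
        run.record("ubo_id_documents", "verify each beneficial owner's identity")
    return steps

run = OnboardingRun("Example Ltd", employee_count=42)
print(onboard(run))
for line in run.audit_log:
    print(line)
```

The audit log is the point: if an agent can’t produce a record like this for every branch it takes, regulators and customers have little basis for trusting the outcome.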
That might scare some people above and beyond the possibility of onboarding the wrong person or business. There’s the problem of explaining what the system did in a given circumstance and getting people comfortable with the technology. It echoes the science fiction stories about losing control of our own creations. It’s the fear of the unknown.
Regulators are going to have to decide if they will allow agentic AI in digital verification. In our industry, that might be a greater constraint than having the technology to do the task.
A practical approach to new technology
Agentic AI will need guardrails and human oversight in the beginning. At least in its early days, the technology will behave like a programmed system. It can run in parallel with a person executing the same task to see whether they arrive at the same conclusion, even if they follow different decision branches to get there.
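One way to picture that parallel run is a shadow-mode comparison: the agent processes each case alongside the human reviewer, and you measure how often the two agree. The sketch below assumes both outcomes reduce to a simple approve/reject label, which is an oversimplification, and the case data is purely illustrative.

```python
# Minimal shadow-mode comparison: the agent runs alongside a human reviewer
# and we track how often the two reach the same conclusion.
cases = [
    # (case_id, human_decision, agent_decision) -- illustrative data only
    ("c-001", "approve", "approve"),
    ("c-002", "reject",  "reject"),
    ("c-003", "approve", "reject"),   # disagreement worth a human look
    ("c-004", "reject",  "reject"),
]

agreements = sum(1 for _, human, agent in cases if human == agent)
disagreements = [cid for cid, human, agent in cases if human != agent]

print(f"agreement rate: {agreements / len(cases):.0%}")   # 75%
print(f"escalate for review: {disagreements}")            # ['c-003']
```

Every disagreement becomes a training and trust-building opportunity: a human adjudicates the case, and the result feeds back into the guardrails before the agent is given more autonomy.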
Oversight and testing can diminish concerns around agentic AI, but this isn’t the first time technology has created a fear of the unknown. The internet did the same thing when it introduced distributed stores of knowledge accessible to anyone. In the early days of ecommerce, people didn’t trust buying things online and wouldn’t put their credit card information on the internet. It’s a different world now.
Agentic AI does have the potential to “think” on its own, and it’s prudent to proceed with caution. But, if we make the right decisions along the way, we can turn autonomous technology into a tremendous tool.