Consider the Turing test. Its challenge? Ask some average humans to tell whether they’re interacting with a machine or another human.
The fact of the matter is, generative AI passed the Turing test a few years ago.
I suggested as much to acquaintances who are knowledgeable in the ways of artificial intelligence. Many gave me the old eyeball roll in response. In pitying tones, they let me know I’m just not sophisticated enough to recognize that generative AI didn’t pass Turing’s challenge at all. Why not? I asked. Because the way generative AI works isn’t the same as how human intelligence works, they explained.
Now, I could argue with my more AI-sophisticated colleagues, but where would the fun be in that? Instead, I'm willing to concede the point and set aside what "Imitation Game," Turing's own name for his test, actually means: imitation, not replication of how human intelligence works. If generative AI doesn't pass the test, what we need isn't better AI.
It’s a better test.
What makes AI agentic
Which brings us to the New, Improved, AI Imitation Challenge (NIAIIC).
The NIAIIC still challenges human evaluators to determine whether they’re dealing with a machine or a human. But NIAIIC’s challenge is no longer about conversations.
It’s about something more useful. Namely, dusting. I will personally pay a buck and a half to the first AI team able to deploy a dusting robot — one that can determine which surfaces in an average tester’s home are dusty, and can remove the dust on all of them without breaking or damaging anything along the way.
Clearly, the task to be mastered is one a human could handle without needing detailed instructions (aka “programming”). Patience? Yes, dusting needs quite a bit of that. But instructions? No.
It’s a task with the sorts of benefits claimed for AI by its most enthusiastic proponents: It takes over annoying, boring, and repetitive work from humans, freeing them up for more satisfying responsibilities.
(Yes, I freely admit that I’m projecting my own predilections. If you, unlike me, love to dust and can’t get enough of it … come on over! I’ll even make espresso for you!)
How does NIAIIC fit into the popular AI classification frameworks? It belongs to the class of technologies called “agentic AI” — who comes up with these names? Agentic AI is AI that figures out how to accomplish defined goals on its own. It’s what self-driving vehicles do when they do what they’re supposed to do — pass the “touring test” (sorry).
It’s also what makes agentic AI interesting when compared to earlier forms of AI — those that depended on human experts encoding their skills into a collection of if/then rules, alternatively known as “expert systems” and “AI that reliably works.”
What’s worrisome is how little distance separates agentic AI from the Worst AI Idea Yet, namely, volitional AI.
With agentic AI, humans define the goals, while the AI figures out how to achieve them. With volitional AI, the AI decides which goals it should try to achieve, then becomes agentic to achieve them.
Once upon a time I didn’t worry much about volitional AI turning into Skynet, on the grounds that, “Except for electricity and semiconductors, it’s doubtful we and a volitional AI would find ourselves competing for resources intensely enough for the killer robot scenario to become a problem for us.”
It’s time to rethink this conclusion. Do some Googling and you’ll discover that some AI chips aren’t even being brought online because there isn’t enough juice to power them.
It takes little imagination to envision a dystopian scenario in which volitional AIs compete with us humans to grab all the electrical generation they can get their virtual paws on. Their needs and ours will overlap, potentially more quickly than we’re able to even define the threat, let alone respond to it.
The tipping point
Speaking more broadly, anyone expending even a tiny amount of carbon-based brainpower on the risks of volitional AI will inevitably reach the same conclusion Microsoft Copilot does. I asked Copilot what the biggest risks of volitional AI are. It concluded that:
The biggest risks of volitional AI — AI systems that act with self-directed goals or autonomy — include existential threats, misuse in weaponization, erosion of human control, and amplification of bias and misinformation. These dangers stem from giving AI systems agency beyond narrow task execution, which could destabilize social, economic, and security structures if not carefully governed.
But it’s okay so long as we stay on the right side of the line that separates agentic from volitional AI, isn’t it?
In a word, “no.”
When an agentic AI figures out how to achieve a goal, it must break the goal assigned to it into smaller goal chunks, and then break those chunks into yet smaller chunks.
An agentic AI, that is, ends up setting goals for itself because that’s how planning works. But once it starts to set goals for itself, it becomes volitional by definition.
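To make the point concrete, here's a minimal sketch in Python of what goal decomposition looks like. Every name in it (Goal, decompose, plan_subgoals, the dusting example) is my invention for illustration, not any vendor's actual planner:

```python
# A toy planner illustrating recursive goal decomposition.
# Everything here is hypothetical -- a sketch of the pattern,
# not a real AI system.

from dataclasses import dataclass, field

@dataclass
class Goal:
    description: str
    subgoals: list["Goal"] = field(default_factory=list)

def plan_subgoals(goal: Goal) -> list[str]:
    # Stand-in for whatever subgoals the model would actually generate.
    return [f"step 1 of '{goal.description}'", f"step 2 of '{goal.description}'"]

def decompose(goal: Goal, depth: int = 0) -> None:
    """Break a goal into smaller subgoals, recursively.

    Note who authors the subgoals: not the human who supplied the
    top-level goal, but the planner itself. Planning, in other
    words, *is* self-directed goal-setting.
    """
    if depth >= 2:  # real planners stop at primitive actions; we stop here
        return
    for part in plan_subgoals(goal):  # the AI, not the human, chooses these
        sub = Goal(part)
        goal.subgoals.append(sub)
        decompose(sub, depth + 1)

# The human supplies exactly one goal...
mission = Goal("dust every surface in the house without breaking anything")
decompose(mission)

# ...and the planner has now set a whole tree of goals for itself.
for sub in mission.subgoals:
    print(sub.description, "->", [g.description for g in sub.subgoals])
```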
Which gets us to AI’s IT risk management conundrum.
Traditional risk management identifies bad things that might happen and crafts contingency plans that spell out what the organization should do if a bad thing actually does happen.
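At its core, that framework is little more than a lookup table: a bad thing happens, we look up the plan. The register entries and field names in this sketch are invented examples, not any formal standard:

```python
# A toy risk register in the traditional mold. The risks, triggers,
# and contingency plans are invented examples, not a real framework.

risk_register = {
    "data center outage": {
        "trigger": "primary site loses power",
        "contingency": "fail over to the secondary site",
    },
    "key vendor goes under": {
        "trigger": "vendor files for bankruptcy",
        "contingency": "activate escrowed source code and migrate",
    },
}

def respond(bad_thing: str) -> str:
    """The whole model: an unplanned bad thing happens, we look up the plan."""
    plan = risk_register.get(bad_thing)
    return plan["contingency"] if plan else "uh oh -- nobody planned for this one"

# The framework assumes every risk is an *unplanned* event. That's
# the assumption agentic AI breaks.
print(respond("data center outage"))
```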
We can only wish that this framework would be sufficient when we poke and prod an AI implementation.
Agentic AI, and even more so volitional AI, stands this on its head, because the biggest risk isn't that an unplanned bad thing has happened. It's that the AI does exactly what it's supposed to do.
Volitional AI, in other words, is dangerous. Agentic AI might not be as inherently risky, but it's more than risky enough.
Sad to say, we humans are probably too shortsighted to bother mitigating agentic and volitional AI’s clear and present risks, even risks that could herald the end of human-dominated society.
The likely scenario? We’ll all collectively ignore the risks. Me too. I want my dusting robot and I want it now, the risks to human society be damned.