When it comes to what AI has in store for the everyday working CIO, “disconcerting” might not be a strong enough word. IT leaders must be prepared for every unsettling scenario they can imagine.
For example, at the risk of distracting you from my own pearls of wisdom and brilliant insights, I suggest you stop reading further and first soak in “Who Pays When A.I. Is Wrong?” from the New York Times. And don’t just skim it. Read it, because as CIO you’re going to find yourself dealing with issues like these — and likely very soon.
The very short version: Wolf River Electric’s customers were cancelling contracts. Why? Because Google searches revealed that the Minnesota-based solar contractor had settled a lawsuit with the state attorney general over deceptive sales practices. One problem: The lawsuit never happened.
So, Wolf River Electric sued Google on the grounds that Google’s search results were defamatory and seriously damaging.
Which doesn’t seem unreasonable, not that I claim expertise in the ins and outs of libel law. Google, after all, developed the search tools, deployed them, and managed every element of the search ecosystem that caused the damage.
As Google doesn’t provide a warranty on the accuracy and reliability of its search results, it would seem to be insulated from liability. But lawsuit or no lawsuit, Google would probably prefer not to shine a spotlight on its unwillingness to defend its search accuracy. The situation is, in any event, the sort of thing lawyers will love to debate.
CIOs, less so.
Why this matters to IT leaders
The courts will eventually sort this out, so we’ll leave the judiciary in peace, trusting it to do its job of deciding who should owe whom what. As CIO this might strike you as a spectator sport, rather than something that will hit you where you live.
But you aren’t off the hook just yet.
Imagine your IT team is tasked with deploying some sort of AI evaluation system for your company’s customers, using one or another of the currently popular AI ecosystems as its platform.
Then something goes amiss, and your customers, having trusted your system, experience some sort of damage, and the lawyers get involved.
Who’s at fault for the damage? The AI tools vendor? Whoever provided the LLM? Your software quality assurance team?
Or maybe it’s nobody, because the risk of imperfect results is intrinsic to software and always has been.
It’s hard to predict how the legal issues would play out. It’s easier to predict how blame would be allocated within a typical business organization.
As usual, if technology is involved, IT would be left holding the bag.
The curious case of Mervyn Voldemort
But never mind all that. The train to AI Weirdsville is only just now leaving the station. So, try this on for size:
Someone dies — someone wealthy, influential, and prominent. They leave behind enough content of various forms — speeches, essays, blog posts, video clips, and so on — to be repurposed as a large language model.
The technology exists, right now or within a year or so, to build a generative AI that could ingest the decedent’s content and output new, original material whose substance and style are indistinguishable from the decedent’s writing, or from their voice and likeness in deepfake audio and video.
Now, put these capabilities in the hands of a volitional AI — an AI that doesn’t just figure out how to achieve a goal, but that sets the goals it achieves. What do we have? We have enough elements for an AI to pass a “reverse Turing test” — to be indistinguishable from a living human being.
And given a volitional AI’s ability, or likely ability, to:

- Set its own goals
- Mimic someone who was, once upon a time, a real human being
- Find ways to dodge typical information security countermeasures

it’s hard to avoid the conclusion that we’re on the verge of an AI claiming a recent decedent’s identity as its own, and with it laying claim to all its carbon-based predecessor’s assets, relationships, rights, and privileges.
Nobody would have to program this “avatar in real life.” That’s one of the aspects of volitional AI that’s so concerning: The volitional AI we’re imagining would decide it wants to assume a recently deceased person’s identity, and whose identity would be most advantageous to assume; scour the decedent’s opus for vulnerabilities; build its own LLM out of that individual’s life’s work; and then … welcome to the world of AI-based immortality.
Because let us say that Mervyn Voldemort has just died. His heirs will find themselves challenged to explain why Voldemort’s estate thinks it owns Voldemort’s property when a living, breathing — well, not actually breathing — entity passes every test of identity as well as, or better than, Voldemort’s executor can.
Sound farfetched? I’d thought so too: This column started life as a satire.
But even as satire, it should draw CIOs’ attention, because even without the punchline it should, I hope, convince you that as your company pokes around the AI perimeter, you and your team need to be alert to worrisome but plausible unintended consequences, including those of the “unknown unknowns” variety.
In particular, CIOs should put in place mechanisms for spotting business requests for plausibly achievable AI-related projects that are, at the same time, seriously bad ideas.
AI is furthering the war on reality — one that reality looks to be losing, with no obvious reason for optimism in sight. AI-driven strangenesses like this fable merely highlight the need for all of us to be alert to the potential risks.
There’s no methodology you can fall back on; no “best practices” to keep you out of trouble.
Something you might at least try on for size: Your company’s strategic planning framework is probably built on some version of TOWS — threats, opportunities, weaknesses, and strengths. Make sure your strategic planners don’t just get excited about AI’s evolving capabilities.
In parallel, make even more sure they gain a healthy dose of fear.