Tiatra, LLC
Information Technology Solutions for Washington, DC Government Agencies
A CIO primer on addressing perceived AI risks

Ask your average schmo what the biggest risks of artificial intelligence are, and their answers will likely include: (1) AI will make us humans obsolete; (2) Skynet will become real, making us humans extinct; and maybe (3) deepfake authoring tools will be used by bad people to do bad things.

Ask your average CEO what the biggest risks of artificial intelligence are and they’ll more likely talk about missed opportunities — of AI-based business capabilities competitors are able to deploy sooner than they can.

As CIO you need to anticipate not only actual AI risks but perceived ones as well. Here’s how to go about it.

Risks perceived by an average schmo

1. Will AI make humans obsolete? Answer: This isn’t a risk; it’s a choice. Personal computers, then the internet, and then smartphones all led to opportunities for computer-augmented humanity. AI can do the same. Business leaders can focus on building a stronger, more competitive business by using AI capabilities to augment and empower their employees.

They can, and some will. Others will use AI to automate tasks currently performed by the humans they employ.

Or, more likely, they’ll do both. Neither will be better in an absolute sense. But they will be different. As CIO you’ll have to help communicate the company’s intentions, whether AI is used for employee augmentation or replacement.

2. Skynet. This, the most chilling of the possible AI futures, is, as it happens, the least likely. It’s the least likely, not because killer robots aren’t possible, but because a volitional AI would have no reason to produce and deploy them.

In nature, organisms that hunt and kill other organisms are either predators that want food or competitors for the same resources. Other than those of our fellow humans who hunt for sport, it’s rare for species to harm members of other species just for the heck of it.

Except for electricity and semiconductors, it’s doubtful we and a volitional AI would find ourselves competing for resources intensely enough for the killer robot scenario to become a problem for us.

That’s especially true because an AI competing with us for electricity and semiconductors would be unlikely to squander those same resources building killer robots.

3. Deepfakes. Yes, deepfakes are a problem, and as the pointy end of the war-on-reality spear they’re a problem that will only get worse. Especially worrisome is the false sense of security that purveyors of here’s-how-to-spot-deepfakes guidance provide. Such guides are worrisome because, to the extent their techniques work, they double as an instruction manual for producing harder-to-detect deepfakes. And they contribute to a Lewis Carroll-esque “red queen” scenario — red queen because deepfake-creation AIs and deepfake-detection AIs will have to improve faster and faster just to stay in the same place with respect to each other.

And so, just as malware countermeasures evolved from standalone antivirus measures to cybersecurity as a whole industry, we can expect a similar trajectory for deepfake countermeasures as the war on reality heats up.

AI risks as perceived by the CEO

CEOs who don’t want to quickly become former CEOs expend quite a lot of their time and attention on some form of “TOWS” analysis (threats, opportunities, weaknesses, and strengths).

As CIO, one of your most important responsibilities has, for quite some time, been to help drive business strategy by connecting the dots, from IT-based capabilities to business opportunities (if your business exploits them first) or threats (if a competitor exploits them first).

That was the case before the current wave of AI enthusiasm washed over the IT industry. It’s what “digital” was all about and is even more the case now.

Add AI to the mix and CIOs have another layer of responsibility: integrating its new capabilities into the business as a whole.

The silent AI-based threat: Artificial human frailties

There’s one more class of risk to worry about, one that receives little attention. Call it “artificial human frailties.”

Start with Daniel Kahneman’s Thinking, Fast and Slow. In it, Kahneman identifies two ways we go about thinking. When we think fast, we use the cerebral circuitry that lets us identify each other at a glance, with no delay and little effort. Fast thinking is also what we do when we “trust our guts.”

When we think slow, we use the circuitry that lets us multiply 17 by 53 — a process that takes considerable concentration, time, and mental effort.

In AI terms, thinking slow is what expert systems, and for that matter, old-fashioned computer programming, do. Thinking fast is where all the excitement is in AI. It’s what neural networks do.

In its current state of development, AI’s form of thinking fast is also what’s prone to the same cognitive errors as trusting our guts. For example:

Inferring causation from correlation: We all know we aren’t supposed to do this. And yet, it’s awfully hard to stop ourselves from inferring causality when all we have as evidence is juxtaposition.

As it happens, a whole lot of what’s called AI these days consists of machine learning on the part of neural networks, whose learning consists of inferring causation from correlation.
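The trap is easy to reproduce. As a minimal, hypothetical sketch (the walk lengths and seed are my own choices, not the author's): the Python below computes the Pearson correlation of two independent random walks. Neither series influences the other, yet their correlation is frequently far from zero — juxtaposition without causation.

```python
import random

def pearson(xs, ys):
    """Sample Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def random_walk(steps, rng):
    """Cumulative sum of independent +/-1 steps: causally linked to nothing."""
    pos, path = 0, []
    for _ in range(steps):
        pos += rng.choice((-1, 1))
        path.append(pos)
    return path

rng = random.Random(0)
a = random_walk(500, rng)
b = random_walk(500, rng)
r = pearson(a, b)  # often large in magnitude despite full independence
print(f"correlation of two independent random walks: {r:+.2f}")
```

A learner that treats that correlation as a causal signal is making exactly the inference error described above, just at scale.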

Regression to the mean: You watch The Great British Baking Show. You notice that whoever wins the Star Baker award in one episode tends to bake more poorly in the next episode. It’s the Curse of the Star Baker.

Only it isn’t a curse. It’s just randomness in action. Each baker’s performance falls on a bell curve. Winning Star Baker means performing at one tail of that curve. The next time they bake they’re most likely to perform near the mean, not at the winning tail again, as they are every time they bake.

And yet, we infer causation — the Curse!

There’s no reason to expect a machine-learning AI to be immune from this fallacy. Quite the opposite. Faced with data points from a random process, we should expect an AI to predict improvement following each poor outcome.

And then to conclude a causal relationship is at work.
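The Curse is straightforward to simulate. In this hypothetical sketch (the baker count, scoring scale, and seed are assumptions of mine), every baker's weekly performance is an independent draw from the same bell curve, and we count how often the week's winner does worse the following week:

```python
import random

rng = random.Random(42)

def week_scores(n_bakers=8):
    """One week's performances: independent draws from the same bell curve."""
    return [rng.gauss(0.0, 1.0) for _ in range(n_bakers)]

trials = 2000
winner_regressed = 0
for _ in range(trials):
    this_week = week_scores()
    star = this_week.index(max(this_week))  # this week's Star Baker
    next_week = week_scores()
    if next_week[star] < this_week[star]:   # did the "Curse" strike?
        winner_regressed += 1

frac = winner_regressed / trials
print(f"Star Baker scored worse the next week in {frac:.0%} of trials")
```

With eight bakers, the winner's score is the maximum of eight draws, so an independent ninth draw beats it only about one time in nine. Pure randomness manufactures the "Curse" nearly 90% of the time, with no causation anywhere in sight.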

Failure to ‘show your work’: Well, not your work; the AI’s work. There’s active research into developing what’s called “explainable AI.” And it’s needed.

Imagine you assign a human staff member to assess a possible business opportunity and recommend a course of action to you. They do, and you ask, “Why do you think so?” Any competent employee expects the question and is ready to answer.

Until “Explainable AI” is a feature and not a wish-list item, AIs are, in this respect, less competent than the employees many businesses want them to replace — they can’t explain their thinking.
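The contrast shows up clearly in code. As a hypothetical sketch (the criteria, thresholds, and field names are invented for illustration), a rule-based, "thinking slow" assessor can return its reasons alongside its recommendation — exactly the "Why do you think so?" answer an opaque trained model can't produce without extra explainability machinery:

```python
def assess_opportunity(opportunity):
    """Score a business opportunity and record the reason for every point.

    Returns (score, reasons). The reasons list is the 'show your work'
    step that a bare neural network cannot provide out of the box.
    """
    score, reasons = 0, []
    if opportunity["margin"] > 0.20:
        score += 1
        reasons.append("projected margin above 20%")
    if opportunity["market_growth"] > 0.05:
        score += 1
        reasons.append("market growing faster than 5% per year")
    if opportunity["competitors"] > 10:
        score -= 1
        reasons.append("crowded field: more than 10 competitors")
    return score, reasons

score, reasons = assess_opportunity(
    {"margin": 0.25, "market_growth": 0.08, "competitors": 14}
)
print(f"recommendation score: {score}")
for why in reasons:
    print(" -", why)
```

Explainable-AI research aims to bolt a reasons list like this onto models whose internals are millions of learned weights rather than three legible rules.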

The phrase to ignore

You’ve undoubtedly heard someone claim, in the context of AI, that “Computers will never x,” where x is something the most proficient humans are good at.

They’re wrong. It’s been a popular assertion since I first started in this business, and it’s been clear ever since that no matter which x you choose, computers will be able to do whatever it is, and do it better than we can.

The only question is how long we’ll all have to wait for the future to get here.

March 19, 2024
