The world plunged headfirst into the AI revolution. Now many are admitting they weren’t quite ready.
The 2024 Board of Directors Survey from Gartner, for example, found that 80% of non-executive directors believe their current board practices and structures are inadequate to effectively oversee AI.
The 2024 Enterprise AI Readiness Radar report from Infosys, a digital services and consulting firm, found that only 2% of companies were fully prepared to implement AI at scale and that, despite the hype, AI is three to five years away from becoming a reality for most firms.
And the Global AI Assessment (AIA) 2024 report from Kearney found that only 4% of the 1,000-plus executives it surveyed would qualify as leaders in AI and analytics.
To counter such statistics, CIOs say they and their C-suite colleagues are devising more thoughtful strategies. As part of that, they’re asking tough questions about their plans.
Here are 10 questions CIOs, researchers, and advisers say are worth asking and answering about your organization’s AI strategies.
1. What are we trying to accomplish, and is AI truly a fit?
ChatGPT set off a burst of excitement when it came onto the scene in fall 2022, and with that excitement came a rush to implement not only generative AI but AI of all kinds.
That rush of activity fed on itself, and FOMO took hold, says IT exec Ron Guerrier. Business and IT leaders thought they’d be left behind if they weren’t adopting AI as fast as the earliest users.
But CIOs need to get everyone to first articulate what they really want to accomplish and then talk about whether AI (or another technology) is what will get them to that goal.
“Too quickly people are running to AI as a solution instead of asking if it’s really what they want, or whether it’s automation or another tool that’s needed instead,” says Guerrier, currently serving as CTO at the charity Save the Children.
2. How does our AI strategy support our business objectives, and how do we measure its value?
Mike Mason, chief AI officer at global technology consultancy Thoughtworks, says this is a key question for him, as it ensures the company’s AI strategy will drive the outcomes executives have determined will bring organizational success.
Otherwise, organizations can chase AI initiatives that might technically work but won’t generate value for the enterprise. “There’s been a lot of that over the past year and a half,” Mason observes.
Meanwhile, he says establishing how the organization will measure the value of its AI strategy ensures that it is poised to deliver impactful outcomes because, to create such measures, teams must name desired outcomes and the value they hope to get.
“The time for experimentation and seeing what it can do was in 2023 and early 2024. I don’t think anyone has any excuses going into 2025 not knowing broadly what these tools can do for them,” Mason adds. “So the organization as a whole has to have a clear way of measuring ROI, creating KPIs and OKRs or whatever framework they’re using. And the tech side of the house should push to make sure there’s clarity on this.”
Many CIOs have work to do here: According to a September 2024 IDC survey, 30% of CIOs acknowledged that they don’t know what percentage of their AI proofs of concept met target KPI metrics or were considered successful — a gap likely to doom many AI projects, or relegate them to being ‘just for show.’
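To make that measurement question concrete, here is a minimal sketch, in Python, of tracking whether each AI proof of concept met its target KPIs. The class and KPI names are hypothetical, not from the IDC survey:

```python
from dataclasses import dataclass


@dataclass
class PocResult:
    """A single AI proof of concept and its KPI outcomes."""
    name: str
    kpi_targets: dict  # KPI name -> target value (higher is better)
    kpi_actuals: dict  # KPI name -> measured value

    def met_targets(self) -> bool:
        # A POC counts as successful only if every tracked KPI hit its target.
        return all(
            self.kpi_actuals.get(kpi, float("-inf")) >= target
            for kpi, target in self.kpi_targets.items()
        )


def success_rate(pocs: list) -> float:
    """Fraction of POCs whose measured KPIs met every target."""
    if not pocs:
        return 0.0
    return sum(p.met_targets() for p in pocs) / len(pocs)
```

The point of even a toy register like this is that it forces teams to write down targets before the pilot runs, which is exactly the discipline the survey suggests is missing.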
3. What ROI will AI deliver?
As organizations seize on the potential of AI and gen AI in particular, Jennifer Manry, Vanguard’s head of corporate systems and technology, believes it’s important to calculate the anticipated ROI.
“Generative AI is a major investment and requires a substantial commitment in infrastructure and talent,” Manry says. “As a key strategic partner to the business, CIOs must consider the return that investment will create in terms of business value.”
Manry is mindful that some AI deployments will deliver modest ROIs and others will deliver significant returns. Both types of projects deserve attention, even as many CIOs still struggle to find ROI.
“CIOs must enable and rally around big strategic bets while democratizing generative AI across their teams. Employees will find ways to drive incremental value, efficiency, and automation. No small group can envision all the ways generative AI can transform daily work for every individual team/function, but they could provide input on the big strategic bets that you want to dedicate time and resources toward. That’s why it takes the collective power of everyone,” she adds.
4. Is our AI strategy enterprise-wide?
“It is critical that AI strategy is implemented across an organization and not just in one or two workstreams,” says Anant Adya, executive vice president and head of Americas delivery for Infosys. “When organizations try to install AI strategies in piecemeal ways, some workstreams may get left behind, and this could lead to an imbalance of AI progression and understanding in your organization.”
Adya advises CIOs who find that their organization’s AI strategy is not being equally implemented to “rethink their AI rollout plan.”
“If you have only been meeting with leaders in one area of your company about AI implementation, it is time to create a plan for an enterprise-wide AI program,” he says. “While you may have short-term success with AI implementation in one area of your company, if you don’t want your company to be left behind in the AI race, you need to form a comprehensive implementation plan — and see it through.”
To do that, assess the current AI strategy and take note of where AI is not being integrated into the organization’s practices. Formulate a plan to bring those workstreams up to speed. And ensure there are regular meetings with each of the company leaders designated to help implement the AI strategy.
“Having full visibility into the AI implementation happening across the organization is critical for each workstream’s success,” Adya adds.
On a related note, Adya says IT leaders should also ask whether their AI strategies “account for the fact that not all employees have a strong understanding of AI and its capabilities.”
“While CIOs and other leaders might have a strong understanding of how to use AI and the language that comes with it, it would be detrimental to your organization’s success to assume that employees of all levels have the same grasp on AI,” he says. “If your AI strategy and implementation plans do not account for the fact that not all employees have a strong understanding of AI and its capabilities, you must rethink your AI training program.”
5. Do we have the data, talent, and governance in place to succeed beyond the sandbox?
It’s typical for organizations to test out an AI use case, launching a proof of concept and pilot to determine whether they’re placing a good bet. These, of course, tend to be in a sandbox environment with curated data and a crackerjack team.
But as CIOs devise their AI strategies, they must ask whether they’re prepared to move a successful AI test into production, Mason says.
“They need to have the data, talent, and governance in place to scale AI across the organization,” he says. “They’re foundational pieces that an organization has to get right.”
Research confirms the need for CIOs and their executive colleagues to give this more thought: Only a fraction of AI POCs make it into production, and only a portion of those that do are considered successful.
6. How confident are we in our data?
As Guerrier, his colleagues, and his team advance their organization’s use of AI, Guerrier puts a critical question to the crew: “How confident are we in our data?”
“Do we know the data we own and the data we ingest? Do we really understand that data ecosystem and how do we rate that [understanding] on a scale of 0 to 10? That’s always my opening inquiry on anything AI,” he says.
Guerrier acknowledges that data doesn’t have to be perfect for all use cases. But organizations still need to rate the data environment so they understand whether it’s strong enough for the specific AI projects they’re pursuing.
“Using AI to revamp a paragraph in a grant request, that’s low fidelity. But if you’re using AI to determine how to respond to a humanitarian crisis in a hurricane zone, when you’re talking about the lives and livelihood of people, that’s different,” he says.
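That sliding scale can be made explicit. Below is a minimal sketch of gating AI use cases on Guerrier’s 0-to-10 data-confidence rating; the use-case names and cutoffs are purely illustrative, not Save the Children’s actual thresholds:

```python
# Illustrative policy: low-stakes drafting tolerates weaker data than
# high-stakes decision support. Names and cutoffs are assumptions.
REQUIRED_CONFIDENCE = {
    "draft_grant_paragraph": 4,     # low fidelity: errors are cheap to catch
    "summarize_internal_docs": 6,
    "crisis_response_planning": 9,  # lives at stake: demand near-perfect data
}


def data_ready(use_case: str, data_confidence: int) -> bool:
    """Return True if a 0-10 data-confidence rating clears the bar set
    for this use case; unknown use cases default to the strictest bar."""
    if not 0 <= data_confidence <= 10:
        raise ValueError("confidence must be on a 0-10 scale")
    return data_confidence >= REQUIRED_CONFIDENCE.get(use_case, 10)
```

Defaulting unknown use cases to the strictest bar reflects the spirit of Guerrier’s question: if you haven’t rated the data ecosystem for a use case, you shouldn’t assume it’s good enough.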
7. Can our AI use be confidently defended if the organization is audited or deposed?
This is another question Guerrier wants people to consider as they advance their AI plans, adding that such phrasing pushes people to really consider how confident they are in the AI projects they want to pursue.
“For me it’s about being able to defend all the components,” he explains.
He says even if no one can be 100% comfortable with the quality and quantity of the data fueling AI systems, they should feel confident that the quality and quantity are high enough for the use case, that the data is adequately secured, and that its use conforms to regulatory requirements and best practices such as those around privacy.
Similarly, Guerrier says enterprise leaders need to be confident enough in their algorithms — that they include safeguards against unintended bias, that outcomes are verified and explainable, and that the algorithms are applied ethically — so that they could defend them in an audit or deposition.
“Corporations have a responsibility to do more of that,” he says.
8. Are we prepared to handle the ethical, legal, and compliance implications of AI deployment?
On a similar note, Andy Sack, co-founder and co-CEO of Forum3, which provides AI and digital transformation solutions to companies, says CIOs must pose this question to themselves and other C-suite execs.
It’s a particularly relevant question now, as governments consider more AI regulations, the courts deal with AI-related cases, and society grapples with the real-world, and sometimes tragic, consequences of the technology.
Sack says companies need to consider what ethical, legal, and compliance implications could arise from their AI strategies and use cases and address those earlier rather than later.
“Ethical, legal, and compliance preparedness helps companies anticipate potential legal issues and ethical dilemmas, safeguarding the company against risks and reputational damage,” he says. “If ethical, legal, and compliance issues are unaddressed, CIOs should develop comprehensive policies and guidelines. Additionally, they should consult with legal experts to navigate regulations and establish oversight committees.”
9. What’s our risk tolerance, and what safeguards are necessary to ensure safe, secure, ethical use of AI?
Manry says such questions are top of mind at her company.
“At Vanguard, we are focused on ethical and responsible AI adoption through experimentation, training, and ideation,” she says. “Resulting from senior leader and crew [employee] perspectives, our primary generative AI experimentation thus far has focused on code creation, content creation, and searching and summarizing information.”
She advises others to take a similar approach.
“CIOs must assess risk tolerance and implement safeguards for generative AI to address safety, security, and ethical concerns. By establishing healthy safeguards like data protection protocols and ethical guardrails, CIOs ensure responsible AI use and minimize risks,” she says. “Establish an AI governance framework that defines the organization’s risk tolerance and patterns of acceptable use based on data sensitivity, allowing low-risk generative AI use cases to be fast-tracked while applying more rigorous evaluation to higher-risk applications.
“This approach enables teams to innovate safely and efficiently, while ensuring more rigorous safeguards for use cases involving sensitive data. By implementing robust security measures, bias mitigation techniques, and an ethical review process, CIOs can minimize risks and ensure responsible use of AI.”
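A tiered framework like the one Manry describes can be sketched as a simple routing rule. The sensitivity levels and review-track names below are illustrative assumptions, not Vanguard’s actual policy:

```python
from enum import Enum


class Sensitivity(Enum):
    PUBLIC = 1
    INTERNAL = 2
    CONFIDENTIAL = 3


def review_path(sensitivity: Sensitivity, customer_facing: bool) -> str:
    """Route a generative AI use case to a review track.

    Illustrative policy only: low-risk cases are fast-tracked, while
    sensitive data or customer-facing output triggers full review
    (security, bias mitigation, and ethics checks).
    """
    if sensitivity is Sensitivity.CONFIDENTIAL or customer_facing:
        return "full-review"       # rigorous evaluation for higher-risk use
    if sensitivity is Sensitivity.INTERNAL:
        return "standard-review"
    return "fast-track"            # low-risk experimentation proceeds quickly
```

Encoding the policy as an explicit function, rather than leaving it to case-by-case judgment, is what lets low-risk teams move quickly while guaranteeing the higher-risk paths always hit the rigorous checks.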
Not all organizations are there yet, though: Data governance research from Lumenalta, which delivers custom digital solutions, found that only 33% of organizations have implemented proactive risk management strategies for AI governance.
10. Am I engaging with the business to answer questions?
CIOs shouldn’t be going it alone, says Sesh Iyer, managing director, senior partner and North America co-chair of BCG X, the tech build and design division of Boston Consulting Group.
“CIOs must ask themselves whether they are engaging with the business to deliver value with generative AI, whether there is a clear focus on gen AI with a defined pathway to achieving a meaningful return on investments within 12 months, whether they are leveraging the power of the digital ecosystem to support their gen AI agendas, [and] whether they have a clear plan to extract and use data at scale to achieve these goals,” Iyer says.
“These questions are crucial for CIOs to ensure they are delivering value, targeting spend effectively to achieve returns, and considering velocity-to-value — leveraging intellectual property and products from a broader ecosystem to reach value faster. Also, they must determine whether they have the ‘digital fuel’ (i.e., data and infrastructure) needed to achieve these AI-driven outcomes.”
He advises CIOs to “sit down with the business to devise or refine an integrated ambition agenda” and “develop clear business cases that demonstrate returns within 12 months, establish a robust ecosystem strategy, and actively engage with partners to maximize value.”