Artificial intelligence has moved from the research laboratory to the forefront of user interactions over the past two years. Whether summarizing notes or helping with coding, people across disparate organizations use gen AI to reduce the burden of repetitive tasks and increase the time available for value-adding activities.
Some experts suggest the result is a digital revolution. AI enables the democratization of innovation by allowing people across all business functions to apply technology in new ways and find creative solutions to intractable challenges.
“Generally, I’d say we should be really excited about gen AI,” says Cynthia Stoddard, CIO at Adobe. “It’s going to help us change the way people work and bring those activities to a different level, where you can work more productively than you might have in the past.”
But it’s not all good news. Stoddard recognizes that executives must be cautious because gen AI can also be used unproductively. From fostering an over-reliance on the hallucinations of knowledge-poor bots to enabling new cybersecurity threats, AI can create significant problems if it isn’t implemented carefully and effectively.
These issues mean many gen AI projects remain stuck at the prototyping stage. Consulting giant Deloitte says 70% of business leaders have moved 30% or fewer of their experiments into production. Meanwhile, Gartner predicts at least 30% of gen AI projects will be abandoned after the proof-of-concept stage by 2025.
To make the most of emerging technology, businesses must ensure the democratization of innovation doesn’t lead to chaos, and that responsibility should fall on the CIO. After all, with their rich experience in leading technology implementations securely and effectively, CIOs are best placed to help their businesses embrace the benefits of digital innovation.
So the onus is now on CIOs to become the pacesetters for change and give their business peers the strategic assistance they require. As Miguel Morgado, senior product owner for the Performance Hub at satellite firm Eutelsat Group, says, the right strategy is crucial to effectively seize opportunities to innovate.
“It’s like AI now – will you invest heavily and think this is another Industrial Revolution, or will you think it’s just hype and do nothing?” he says. “In three or four years, we’ll see the results. Selecting the right strategy now will dictate if you’re successful in four years.”
Shaping the strategy for innovation
Unfortunately, establishing a strategy for democratizing innovation through gen AI is far from straightforward. Many factors, including governance, security, ethics, and funding, are important, and it’s hard to establish ground rules.
“The truth of generative AI is that you do your best because it’s a relatively unknown technology,” says Ollie Wildeman, VP, customer, at travel specialist Big Bus Tours. So where should companies start this complicated process?
He says the answer is to focus on the people who want to access innovation and then bring in experts who can set ground rules. “Gen AI must be driven by people who want to implement the technology,” he says. “Then there must be a sense of checking from as many different parts of the business as possible.”
Wildeman is a non-IT specialist who has pushed for the implementation of AI. His customer service department uses Freshworks Customer Service Suite, which includes AI-powered chatbots to manage user requests. He’s also adding other emerging technologies, including using Freshworks’ generative tool, Freddy AI, to summarize service requests.
Getting approval for these innovative initiatives involves an iterative process, and a tech steering group at Big Bus Tours leads decisions on AI. The group includes the CTO, the VP of technology, and business leaders from other functions, including finance and HR.
“Everybody listens to what the product is, and they ask questions,” says Wildeman. “To get my AI project over the line, I went to the committee four or five times with amended presentations. In the case of Freddy, they could see we’re working with an existing supplier we trust, and we’ve been working with them for a long time.”
The key to realizing the potential of emerging technology is all about proving a use case. That message resonates strongly with Niall Robinson, head of product innovation at the Met Office. In his role at the UK’s national weather and climate service, Robinson and his team explore how the organization can create value through product innovation and strategic partnerships.
He emphasizes the importance of PoC studies in gaining stakeholder buy-in, and the role of data science, ML, and AI to enhance weather forecasting. For example, the Met Office is using Snowflake’s Cortex AI model to create natural language descriptions of weather forecasts.
Robinson says AI is a big deal in the scientific and weather-forecasting community. However, emerging technology must be used carefully. In his organization, that process means exploring use cases, comparing technology options, and working with trusted advisors.
“Currently, we don’t have gen AI-driven products and services,” he says. “We use machine learning all the time. We’ve got 500-plus PhD scientists in the Met Office who use cluster analysis and neural networks, and have done so for a decade or two. We’re also working with the UK government to develop policies for using AI responsibly and effectively.”
Refining the CIO role
What’s clear is tech-led innovation is no longer the sole preserve of the IT department. Fifteen years ago, IT was often a solution searching for a problem. CIOs bought technology systems, and the rest of the business was expected to put them to good use.
Today, CIOs and their teams speak with their peers about their key challenges and suggest potential solutions. But gen AI, like cloud computing before it, has also made it much easier for users to source digital solutions independently of the IT team.
That high level of democratization doesn’t come without risks, and that’s where CIOs, as the guardians of enterprise technology, play a crucial role. IT leaders understand the pain points around governance, implementation, and security. That awareness means responsibility for AI and other emerging technologies has become part of a digital leader’s ever-widening role, says Rahul Todkar, head of data and AI at travel specialist Tripadvisor.
“I think CDOs and CIOs are willingly or unwillingly in the hot seat to at least have a point of view and shape the narrative in that particular area,” he says. “That’s because they’re the ones who bridge the gap between AI and technology, and business applications. So, digital leaders play a bridging role.”
James Fleming, CIO at the Francis Crick Institute in London, says it’s a similar situation in his world-leading research organization where he’s the executive who’s become the moral focal point for AI and innovation. “It’s certainly defaulted to being the CIO at the Crick,” he says. “I suppose that’s as good a solution as any because, with any brand-new technology like this, someone’s got to take the lead on first understanding it and then communicating that understanding to the rest of the organization. It was obvious that responsibility sat with me.”
However, the democratization of innovation means good ideas can come from any direction, and CIOs can’t afford to work in isolation. Fleming says effective digital leaders are interpreters of change and wise counsel to the executive team.
“Big decisions on anything must be made across the organization,” he adds. “CIOs should provide oversight. Innovation has been democratized, yes, but that process doesn’t mean change is more successful. As a CIO, you’ve got to stay ahead of the curve to be effective, rather than just be reactive to change, because you don’t want to lead the department that says no to everything.”
Finding a balance between risk and reward
Saying yes requires strong policies, suggests Dave Moyes, partner of information and digital systems at SimpsonHaugh Architects. CIOs who want to ensure the democratization of innovation doesn’t backfire must put rules and regulations in place, especially for gen AI.
“One of the first things we did was draft an AI policy that told staff what is and isn’t acceptable,” he says. “The policy says, ‘If you’re unsure, ask.’ Just come and say, ‘We’ve found this tool. We want to use it for X, Y, or Z. Is that going to be okay, or will we have issues?’”
Moyes is another CIO who takes on the role of moral arbiter for emerging technologies. However, he doesn’t work in a silo. He drafted the AI policy, presented it to the board, and other partners reviewed the approach. He says this collaborative approach with other senior stakeholders weighs the benefits against the risks.
“We want to encourage staff to make the most out of the tools because, ultimately, they’ll give them benefits, save them time, and let them do the things they love doing rather than things they need to do as part of their jobs,” he says.
Fleming says the Francis Crick Institute has also convened a multi-disciplinary working group to assess publicly available gen AI tools. The group, which includes representatives from science, operations, legal, and HR, considers key questions, such as use cases, potential technologies, and possible restrictions.
While they haven’t found a killer use case for a big investment, the group did identify point use cases for gen AI, including helping researchers write grant applications when English isn’t their first language. “With that use case, great, go for it,” he says. “That problem doesn’t merit a multi-million-pound investment in gen AI, but we keep an eye on all emerging technologies.”
That approach resonates with Tripadvisor’s Todkar, who says the potential productivity boost from emerging technology is significant as long as businesses proceed with caution. The key to success is ensuring a human expert stays in the loop to validate processes.
“The democratization of innovation must be qualified and calibrated by humans,” he says. “You can build a new tool and have certain aspects of automation, task completion, or even decision-making and inferencing based on large language models, but you must have a calibration, validation, and refinement of those results, with a professional checking what those models do. If you have a feedback loop and ensure AI doesn’t cross ethical thresholds and boundaries, you’ll always have that check in place.”