Amid the turbulence of AI, technologies are emerging rapidly, startups are clamoring for attention, and hyperscalers are scrambling to corral market share. It’s an environment that taxes the decision-making skills of even the most savvy CIOs. But ready or not, choices with far-reaching repercussions must be made. And standing still is not an option.
“AI moves fast, so we don’t want to delay. We need to be ready to respond to our CEO to solve problems with AI,” says Srini Gudipati, CIO of Covanta, a company that specializes in sustainable materials management, including large-scale recycling.
Brian Hopkins, vice president for emerging technology at Forrester Research, agrees. “There is no way we can sift through all this fast enough, but you can’t sit back and wait. The opportunity is too big. There are a lot of risks and a lot of land mines to navigate,” says the analyst.
One thing is certain: Large sums of money will be wagered on AI technologies in the next several years. According to IDC, core IT spending for AI will grow from $235.6 billion in 2024 to $521.0 billion in 2027.
Coming to grips with risk
The first step in making any bet — or investment — is to understand your ability to withstand risk. “We maintain a prudent and calculated approach to the adoption of AI technologies, aligning with our moderate risk appetite as a traditional financial institution,” says Vikram Nafde, CIO of Webster Bank, a commercial bank with over $75 billion in assets offering digital and traditional services through commercial banking, consumer banking, and HSA Bank, a provider of healthcare-focused financial services.
When it comes to AI, Nafde sees risks in the vendors selected, the business-worthiness of the use case, and the cost of the initiative. The CIO has strategies in place to address all three.
For vendors, Nafde is starting with established hyperscalers. “We want an ecosystem of large, established players,” he says. Given existing relationships with Webster Bank, that includes AWS and Microsoft. Despite his preference for working with large players, Nafde is also looking at smaller companies that have built generative AI tools that run on the hyperscalers. One such company has built a tool that predicts customer intent and behavior based on previous interactions and other market data.
To find promising use cases, Webster Bank canvassed several dozen proposals and decided to start with three that could deliver tangible benefits. The bank is now working on these proof-of-concept (POC) initiatives: intelligent search for internal productivity, automation with gen AI capabilities to assist in syndicated commercial loan workflows, and customer attrition prediction.
To cope with the third risk area, cost, Nafde is spearheading efforts to empower in-house staff with the necessary skills, abilities, and tools to undertake AI initiatives internally. “By investing in the development of our full-time equivalents [FTEs] and equipping our technologists with the requisite expertise, we aim to minimize reliance on external consultants and maximize our ability to drive innovation from within,” says Nafde.
At Covanta, Gudipati is implementing AI on a case-by-case basis that focuses on solving one problem at a time with implementations that are well within the capabilities of proven technologies. “We are proceeding cautiously because the rise of LLMs [large language models] presents a new level of data security risk,” he says. “We have been developing our own internal AI capability over the last few years using open-source models. This ensures that none of our sensitive data and intellectual property are availed to an outside provider.”
One POC at Covanta seeks to mitigate the risk of hazardous materials, such as propane tanks, arriving at Covanta facilities mixed in with harmless recyclables. “The technology we are exploring uses AI and X-ray technology to analyze inbound trucks to spot the telltale profile of a propane tank before it can explode upon impact with Covanta’s huge shredders,” says Gudipati. It’s a significant danger with significant costs. Eliminating the danger of fires, which cost about $10,000 per hour, should enable the implementation to more than pay for itself, according to Gudipati.
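Covanta hasn’t published the details of that screening system, but the shape of such a check is straightforward: run each inbound X-ray scan through an image classifier trained to recognize the profile of a pressurized tank, and route high-confidence hits to manual inspection. The sketch below illustrates the idea in Python; the model, file names, and alert threshold are hypothetical stand-ins, not Covanta’s implementation.

```python
# Minimal sketch of flagging hazardous items (e.g., propane tanks) in X-ray scans
# of inbound trucks. The model, file names, and threshold are hypothetical
# stand-ins; this is not Covanta's published implementation.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

# Stand-in binary classifier: class 0 = harmless recyclables, class 1 = propane tank.
# In practice this would be a model fine-tuned on labeled X-ray scans.
model = models.resnet18(weights=None, num_classes=2)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

ALERT_THRESHOLD = 0.8  # hypothetical confidence level for stopping a truck

def flag_inbound_scan(image_path: str) -> bool:
    """Return True if the scan should be routed for manual inspection."""
    scan = Image.open(image_path).convert("RGB")
    batch = preprocess(scan).unsqueeze(0)          # shape: (1, 3, 224, 224)
    with torch.no_grad():
        probs = F.softmax(model(batch), dim=1)
    return probs[0, 1].item() >= ALERT_THRESHOLD   # probability of "propane tank"

if __name__ == "__main__":
    # Synthetic image purely to exercise the code path.
    Image.new("RGB", (640, 480)).save("example_scan.png")
    print(flag_inbound_scan("example_scan.png"))
```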
In another implementation, Covanta is using AI to examine the websites of potential customers to gauge a company’s carbon footprint and whether it might be eligible for federal energy credits. “Our data team uses gen AI on Amazon cloud to explore sustainability metrics. So there is a revenue-generating aspect for this,” he says.
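Gudipati doesn’t describe the pipeline itself, but a minimal version of that workflow — fetch a prospect’s site, hand the text to a generative model hosted on AWS, and ask for a sustainability summary — might look like the sketch below. The Bedrock model choice, prompt, and crude HTML stripping are assumptions for illustration only.

```python
# Hedged sketch of the kind of workflow described above: pull text from a
# prospect's website and ask a generative model on AWS to summarize
# sustainability signals. Model ID, prompt, and parsing are assumptions.
import re
import boto3
import requests

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def sustainability_snapshot(url: str) -> str:
    html = requests.get(url, timeout=10).text
    text = re.sub(r"<[^>]+>", " ", html)[:8000]   # crude tag stripping + truncation
    prompt = (
        "From the following website text, summarize any statements about carbon "
        "footprint, emissions targets, or eligibility for energy credits:\n\n" + text
    )
    response = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # hypothetical model choice
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    return response["output"]["message"]["content"][0]["text"]

if __name__ == "__main__":
    print(sustainability_snapshot("https://example.com"))
```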
In still another implementation, Covanta is using Salesforce’s CRM case management tool to create invoices and enable customers to talk directly to a Salesforce bot to get answers to invoice questions. Covanta is also using AI to perform legal review on contracts, detecting and highlighting points of possible risk. “These are good examples of using tools silently and quietly,” says Gudipati.
Caution in the public sector
Traditionally among the most risk-averse of organizations, governmental bodies must operate under scrutiny from a public that has little tolerance for projects that either don’t pay off or put citizens’ data at risk.
“At Harris County, we keep data confidentiality, integrity, and availability in the forefront of our design and architecture of solutions,” says Sindhu Menon, CIO of Harris County and executive director of Harris County Universal Services. The largest county in Texas and the third largest in the nation by population, Harris County serves 4.7 million residents in the Houston area.
Like Gudipati and Nafde, Menon and her team are planning to use hyperscalers as a relatively low-risk option. Though it operates a multicloud environment, the agency has most of its cloud implementations hosted on Microsoft Azure, with some on AWS and some on ServiceNow’s 311 citizen information platform. Harris County has about a half dozen AI-based POCs in the planning stage, including one that modernizes permit processing and another that modernizes justice processes, according to Menon.
Laying the foundation
To develop POC implementations, Menon and her team are establishing a lab that is expected to debut in March 2024 for testing AI tools before rollout. The lab, housed in a county office building, will pull members from multiple departments, including the county’s data team and architecture team.
“There is a great deal of interest in testing and participation across the county. Our goal is to bring the teams together and provide a secure environment to learn and test solutions,” she explains. For a typical project, which will likely involve a Snowflake data lake currently hosted on Azure, Menon stresses that data quality is critical. “AI tools rely on the data in use in these solutions. Good data management practices will be needed to get the desired results and AI solutions,” she says.
Similarly, Nafde put together an AI governance team of some two dozen people led by Webster Bank’s chief enterprise architect and chief data officer that includes technologists, risk and compliance staff, and lawyers. A key focus of the bank’s AI team is likewise data quality. To that end, the group has implemented data quality and governance tools for the bank’s Snowflake environment.
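Neither CIO details the specific checks, but data quality gates of this kind often reduce to a handful of SQL assertions run on a schedule against the warehouse. The sketch below shows what that might look like against a Snowflake environment; the connection parameters, table, and column names are hypothetical placeholders, not Webster Bank’s or Harris County’s actual setup.

```python
# Minimal sketch of scheduled data quality checks against a Snowflake warehouse.
# Connection parameters, table, and column names are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",      # placeholder credentials
    user="dq_service_user",
    password="********",
    warehouse="ANALYTICS_WH",
    database="CUSTOMER_DB",
    schema="PUBLIC",
)

CHECKS = {
    "row_count": "SELECT COUNT(*) FROM customer_interactions",
    "null_customer_ids": (
        "SELECT COUNT(*) FROM customer_interactions WHERE customer_id IS NULL"
    ),
    "duplicate_keys": """
        SELECT COUNT(*) FROM (
            SELECT customer_id, interaction_ts
            FROM customer_interactions
            GROUP BY customer_id, interaction_ts
            HAVING COUNT(*) > 1
        )
    """,
}

cur = conn.cursor()
for name, sql in CHECKS.items():
    value = cur.execute(sql).fetchone()[0]   # each check returns a single count
    print(f"{name}: {value}")
cur.close()
conn.close()
```

In practice these counts would feed an alerting or governance dashboard rather than stdout, but the pattern — explicit, versioned assertions about the data the models consume — is the point both teams emphasize.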
For Gudipati of Covanta, the first step was making the company “AI-ready” by building a robust and comprehensive data foundation on which AI technologies and services could be implemented.
“AI is nourished by high-quality data, so we created a comprehensive data management fabric using Talend, leveraging Snowflake for our operational data store and warehouse,” Gudipati explains. “We then implemented a comprehensive suite of AI tools on AWS that natively work well together to give us true AIOps. We were using Amazon extensively for our infrastructure and data storage so it made sense to go with them,” continues Gudipati, who adds, “We finished the foundation and infrastructure upon which AI could truly be built out to its full potential.”
Risk of lock-in
Because running AI algorithms is not cheap, looming over every project is the risk of higher-than-expected costs.
“The AI engines are expensive to run because they consume many more processors than conventional AI, so we have to keep an eye on costs,” says Gudipati.
Nafde agrees. “People don’t realize the AI models have to churn so many compute resources. They don’t grasp how much that can cost,” says Nafde. “We have cost triggers for the compute services. We believe we can manage the run cost because we will continually assess the costs.”
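Nafde doesn’t specify the tooling behind those cost triggers, but on AWS one common pattern is an AWS Budgets alert that fires when spend crosses a percentage of a monthly limit. The sketch below sets one up with boto3; the budget name, dollar limit, and notification address are assumptions for illustration, not Webster Bank’s configuration.

```python
# Hedged sketch of a "cost trigger": an AWS Budgets alert that notifies a team
# when actual monthly spend passes 80% of a set limit. Name, limit, and email
# address are illustrative assumptions. In practice the budget could be scoped
# to AI workloads with cost filters or cost-allocation tags.
import boto3

account_id = boto3.client("sts").get_caller_identity()["Account"]
budgets = boto3.client("budgets")

budgets.create_budget(
    AccountId=account_id,
    Budget={
        "BudgetName": "ai-compute-monthly",              # hypothetical name
        "BudgetLimit": {"Amount": "50000", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[
        {
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,                       # alert at 80% of the budget
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "EMAIL", "Address": "finops-team@example.com"}
            ],
        }
    ],
)
```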
Committing to a cloud service provider, including a hyperscaler, is not without the risk of lock-in. Although it is possible to move from one cloud provider to another to get a better deal, the labor and expense of making the move are daunting and might offset any potential savings. Snowflake, for example, runs on both Microsoft Azure and AWS, so it would be possible to move from one to the other. “I don’t think it’s impossible, but you would need to do some groundwork. It’s good to think about it ahead of time,” says Gudipati.
Don’t just stand there, do something
For CIOs, few previous technologies have carried the imperative to act that generative AI does. Risk-mitigation strategies are up against pressure from top-level executives who don’t want their companies to be left behind.
“This might be the first time in history that executives who are not technical can see something and get excited about it because they can engage with it. That has been a tipping point for board-level interest,” says Hopkins of Forrester.
In financial services, Nafde sees startups such as Stripe, a payments company, and MX, a mobile app, that could use AI to take over customer relationships. “User behavior could change so much that people don’t think of banks, but the payment app they are using,” says Nafde. “Fintechs and startups are going to leverage AI to either leapfrog established players or burn out.”
Unlike startups, however, established companies cannot risk the losses that might come from betting all on AI. Their challenge is to steer a middle course that yields bottom-line results. Says Gudipati, “We don’t tell the whole world we are an AI-based company, but we use it as a day-to-day problem-solving tool.”