Last year, as many CIOs ramped up for their first round of Scope 3 reporting, gen AI found its way into virtually every office. Sometimes it came in through the front door, but in most cases, it seeped in quietly, as knowledge workers experimented with it to write documents and email without necessarily admitting they were doing so.
In many organizations, the use cases have stopped there, but some IT departments are now sanctioning — and even encouraging — the use of gen AI for things like coding. Still other organizations look to their software providers for upgrades that include gen AI components. At the far end of the spectrum are companies like Swedish fintech Klarna, which has integrated gen AI not only into a range of internal projects but also into the products it sells, and which has developed AI governance that includes guidelines on how AI should be used on projects.
Klarna has been leaning heavily into AI since ChatGPT was launched in November 2022, and the general feeling within the company is that gen AI can help nearly everybody in the organization become more effective, regardless of their skill level or role. “We’re currently looking at around a hundred initiatives both in production and in development across the company where we might use gen AI,” says Martin Elwin, senior engineering director at Klarna. “And it’s not only engineers doing this, but everyone from finance and legal, to marketing and everywhere else.”
Several weeks ago, Klarna announced an AI assistant that answers user questions with little or no human support. Its software helps consumers find the things they want to buy from the most relevant merchant, and it helps with payments and post-sales support. According to Daniel Greaves, communications lead at Klarna, the new gen AI was immediately successful. “Within about four weeks after launching, the AI assistant has taken over two-thirds of our customer service chat requests, and is doing the job of the equivalent of about 700 people,” he says.
But these and other uses of AI, as beneficial as they might sound, are raising eyebrows. “On the surface and as it exists today, AI and sustainability take you in opposite directions,” says Srini Koushik, president of AI, technology and sustainability at Rackspace Technology. “AI consumes a lot of power, whether it’s training large language models or running inference. And this is only the beginning. The power consumption is growing exponentially.”
Nevertheless, Koushik and many other technologists argue that AI’s benefits far outweigh its ever-growing carbon footprint, which may not be the case for other energy-hungry applications, such as cryptocurrencies. AI holds promise in aiding researchers to discover more efficient energy sources such as nuclear fusion, optimize the utilization of current energy sources through enhanced power distribution, and measure the ramifications of CO2 emissions by analyzing climate patterns. “AI will benefit humanity in many ways,” says Koushik. “And from the point of view of my own enterprise, if one of the benefits of AI is it saves me from sending somebody on a flight from New York to London, I’ve offset the consumption.”
Whether or not AI delivers on its promises over the long term, CIOs required to account for their full carbon impact now need to include the impact of AI in their Scope 3 reporting — and that gets complicated very fast. For example, if you run inference with a model that was trained by somebody else, you should report on your share of the CO2 impact. The provider might be able to tell you the overall cost of training, but nobody knows how to divvy up that cost among all the users over the lifetime of the model.
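One way to picture the allocation problem is to amortize the provider's one-time training footprint across the model's expected lifetime query volume and add it to the marginal cost of each inference call. The sketch below is illustrative only; every figure in it is an assumption, since, as the article notes, providers do not currently disclose these numbers.

```python
# Hedged sketch: amortizing a shared model's training footprint across users.
# All inputs are illustrative assumptions, not disclosed provider data.

def amortized_query_footprint_kg(
    training_co2_kg: float,            # total CO2e emitted during training
    expected_lifetime_queries: float,  # estimated queries over the model's life
    inference_co2_kg_per_query: float, # marginal CO2e of running one query
) -> float:
    """Per-query footprint = marginal inference cost + share of training cost."""
    return inference_co2_kg_per_query + training_co2_kg / expected_lifetime_queries

# Made-up example: 500 t CO2e of training, 10 billion lifetime queries,
# 2 g CO2e per query of inference.
per_query = amortized_query_footprint_kg(500_000, 10_000_000_000, 0.002)
print(f"{per_query * 1000:.3f} g CO2e per query")
```

The hard part in practice is the denominator: nobody knows a model's lifetime query count in advance, which is exactly why the allocation question remains open.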
“None of this is clear yet, because Scope 3 reporting is new and so is gen AI,” says Niklas Sundberg, chief digital officer and SVP at Swiss global transport and logistics company Kuehne+Nagel. Sundberg knows about as much as anybody on Scope 3 reporting and covers the subject in his book Sustainable IT Playbook for Technology Leaders.
Despite the ambiguities, IT leaders are charging ahead with AI. Along the way, some have discovered three things they can do to mitigate the impact on their own sustainability initiatives. They share them here.
1. Use a big provider to optimize utilization
“We are already advanced users of AI, and one of the things we recommend is to use AI, especially inference, through providers that have shared on-demand AI inference environments,” says Elwin. This makes sense because the more people using a public cloud service, the higher the utilization rates. Better utilization of the resources running power-hungry AI applications can make a measurable difference in your organization’s overall carbon footprint.
CIOs can take it a step further by asking providers a list of questions, starting with how they train their models and how inference is run. “If you’re only buying inference services, ask them how they can account for all the upstream impact,” says Tate Cantrell, CTO of Verne, a UK-headquartered company that provides data center solutions for enterprises and hyperscalers. “Inference output takes a split second. But the only reason those weights inside that neural network are the way they are is because of massive amounts of training — potentially one or two months of training at something like 100 to 400 megawatts — to get that infrastructure the way it is. So how much of that should you be charged for?”
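Cantrell's figures give a sense of scale that is easy to check with simple arithmetic: weeks of training at grid-scale power works out to hundreds of thousands of megawatt-hours. The calculation below just multiplies his quoted ranges through; it makes no claim about any specific model.

```python
# Back-of-envelope arithmetic on the quoted training figures:
# "one or two months of training at something like 100 to 400 megawatts".

HOURS_PER_MONTH = 30 * 24  # ~720, assuming a 30-day month

def training_energy_mwh(megawatts: float, months: float) -> float:
    """Total training energy in MWh for a given power draw and duration."""
    return megawatts * months * HOURS_PER_MONTH

for months, megawatts in [(1, 100), (2, 400)]:
    print(f"{months} month(s) at {megawatts} MW ≈ "
          f"{training_energy_mwh(megawatts, months):,.0f} MWh")
```

Even the low end of the range, 72,000 MWh, is on the order of the annual electricity use of tens of thousands of households, which is why Cantrell's question of how that cost is divided among customers matters.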
Cantrell urges CIOs to ask providers about their own reporting. “Are they doing open reporting about the full upstream impact that their services have from a sustainability perspective? How long is the training process, how long is it valid for, and how many customers did that weight impact?”
According to Sundberg, an ideal solution would be to have the AI model tell you about its carbon footprint. “You should be able to ask Copilot or ChatGPT what the carbon footprint of your last query is,” he says. “As far as I know, none of the tools will give you a response to that question at the moment.”
2. Use the most appropriate model to solve each piece of the problem
When Klarna built its AI assistant, it didn’t use a single AI model to do everything. Instead, the team evaluated every step of the service to see what was really needed for each part. “We strove to be resource efficient,” says Elwin. “We made sure we were using as small a model as possible that delivered the capability needed to complete a given step.”
Klarna has generalized this idea by issuing guidelines to make sure teams think this way when building other solutions. One step might require a comprehensive model, like GPT-4, while another part of the service works fine with a lighter model like GPT-3.5 Turbo.
Smaller models require less electricity not only during the training phase, but also for inference. Ultimately enterprises will have to measure energy consumption — and that might be on a per-query basis, where the smaller models do much better. “You don’t need GPT-4 to do claims adjudication in an insurance setting,” says Koushik. “You need something that’s smaller and trained on more domain-specific data, which is more accurate at answering questions in your domain than using GPT-4.”
Even though large companies have been working with machine learning for quite some time now, their models aren’t as sophisticated as the larger open-source models, says Sundberg. “But they do a better job of solving very specific corporate problems like pricing and predicting customer churn.”
3. Prioritize your use cases
A CIO can take a balanced view of the use cases and prioritize them. “Most people don’t need Copilot,” says Koushik. “The benefits of writing better emails don’t justify the subscription cost and the CO2 emissions. On the other hand, our legal department does benefit from Copilot in ways that offset the cost, so we’ve rolled it out to them.”
Prioritizing use cases means IT leaders will have to tell some users that AI is not an appropriate solution to their problem. The best way to avoid ruffling feathers is to establish clear guidelines early. Start out by finding ways of measuring the carbon footprint of AI tools, and then for each use case, compare that to the potential benefits. “It’s important for CIOs to have metrics on the CO2 emission for a given application,” says Sundberg. “This allows them to weigh the costs and benefits. If you can’t find out what the carbon footprint is by yourself, try asking your software vendor.”
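The screen Sundberg describes amounts to ranking candidate use cases by estimated benefit per unit of CO2e and drawing a cut-off line. The sketch below shows that shape with entirely made-up numbers; the use-case names and figures are illustrations, not data from any of the companies in this article.

```python
# Hedged sketch of a CO2-aware prioritization screen for gen AI use cases.
# Every entry is an invented illustration, not a real measurement.

use_cases = [
    # (name, est. annual benefit in currency units, est. annual kg CO2e)
    ("email drafting (all staff)", 20_000, 50_000),
    ("legal contract review",      400_000, 8_000),
    ("customer service assistant", 900_000, 30_000),
]

# Rank by benefit delivered per kg of CO2e emitted, highest first.
ranked = sorted(use_cases, key=lambda u: u[1] / u[2], reverse=True)
for name, benefit, co2 in ranked:
    print(f"{name}: {benefit / co2:.1f} benefit units per kg CO2e")
```

On these invented numbers, broad email drafting lands at the bottom of the ranking, which mirrors Koushik's point that writing better emails rarely justifies the emissions, while targeted uses like legal review rise to the top.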
But what makes this more challenging is that vendors aren’t always saying what they know. “While gen AI can unlock a lot of sustainability opportunities, there’s also a dark side not being discussed — certainly not by the vendors,” says Sundberg. “They’re too focused on the race to achieve top provider status in their space.”