In recent months, artificial intelligence has been everyone’s favorite buzzword. Silicon Valley startups and Fortune 500 companies alike are watching AI reshape their industries as the technology picks up pace. But excitement, progress, and red flags such as AI washing are growing in equal measure. Some businesses, desperate to cash in on the hype, overstate their AI capabilities when, in reality, the AI they employ is minimal or nonexistent.
This questionable marketing strategy can help them secure larger seed, Series A, and Series B funding rounds than non-AI startups. Last year alone, AI startups raised more than $50 billion in venture capital funding, according to GlobalData, and the numbers are expected to grow this year given the frenzy surrounding ChatGPT and similar tools.
Given the capital poured into these startups, the AI washing phenomenon will only grow in intensity. The US Federal Trade Commission is fully aware of the danger and warns vendors to be transparent and honest when advertising their AI capabilities.
“Some products with AI claims might not even work as advertised in the first place,” Michael Atleson, an attorney in the FTC’s division of advertising practices, wrote in a blog post. “In some cases, this lack of efficacy may exist regardless of what other harm the products might cause. Marketers should know that — for FTC enforcement purposes — false or unsubstantiated claims about a product’s efficacy are our bread and butter.”
In this complex landscape, it can be difficult to distinguish between legitimate AI solutions and marketing gimmicks.
“Companies need to apply a healthy dose of skepticism when faced with vendor claims about their AI products,” says Beena Ammanath, executive director of the Deloitte Global AI Institute. “As with anything, if it sounds too good to be true, it very likely is.”
If CIOs and their companies don’t find the correct answers, they can face consequences that include failed or late projects, financial losses, legal cases, reputational risk, and, ultimately, getting fired, says Donald Welch, CIO at New York University. “I’ve seen executives fired, and I can’t say it was the wrong decision.”
Fortunately, there are several strategies they can use to avoid mistakes.
AI-powered businesses need skilled employees
Vetting businesses that claim to use AI can be a time-consuming process. However, simple steps, such as a LinkedIn search, can uncover valuable insights into an organization’s profile.
“Examine the level of AI experience and education that the vendors’ employees have,” says Ammanath. “Companies that are developing AI solutions should have the talent to do so, meaning they have data scientists and data engineers with deep experience in AI, machine learning, algorithm development, and more.”
In addition to examining employees, CIOs could also look for evidence of collaboration with external AI experts and research institutions. This category includes partnerships with universities, participation in industry conferences and events, and contributions to open-source AI initiatives.
It’s also a good sign if a vendor has experience with similar projects or applications, since that shows it can deliver quality results.
“Carefully check the history of the supplier,” says Vira Tkachenko, chief technology and innovation officer at Ukrainian-American startup MacPaw. “If a company is an AI expert, it most likely has a history of research papers in this field or other AI products.”
Look for a well-crafted data strategy
Companies that truly integrate AI into their products also need a well-thought-out data strategy, because AI algorithms depend on data. They need to work with high-quality data, and the more plentiful and relevant that data is, the better the results will be.
“AI systems are fueled by very large amounts of data, so these companies should also have a well-constructed data strategy and be able to explain how much data is being collected and from which sources,” Ammanath says.
Another thing to look at is whether these companies put enough effort into complying with regulatory requirements and maintaining high data privacy and security standards. With the rise of data privacy regulations such as the EU’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA), organizations have to be transparent about their data practices and give individuals control over their personal data. If this doesn’t happen, it should be a red flag.
Request evidence to back the claims
While buzzwords can be seductive, it helps to gently ask for evidence. “Asking the right questions and demanding proof of product claims is critically important to peel away the marketing and sales-speak to determine if a product is truly powered by AI,” Ammanath says.
CIOs who evaluate a specific product or service that appears to be AI-powered can ask how the model was trained, what algorithms were used, and how the AI system will adapt to new data.
“You should ask the vendor what libraries or AI models they use,” says Tkachenko. “They may have everything built on just a simple OpenAI API call.”
Matthias Roeser, partner and global leader of technology at management and technology consulting firm BearingPoint, agrees. He adds that the components and frameworks should be thoroughly understood, and that the assessment should include “ethics, biases, feasibility, intellectual property, and sustainability.”
This inquiry could help CIOs learn more about the true capabilities and the limitations of that product, thereby helping them decide whether to purchase it or not.
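To make Tkachenko’s point concrete, here is a minimal, hypothetical sketch of what a product marketed as a proprietary AI engine can look like when it is only a thin wrapper around a hosted model. The function name, prompt, and model choice are invented for illustration; the call itself uses the OpenAI Python SDK.

```python
# Hypothetical example: a "proprietary AI engine" that is really just a thin
# wrapper around a single hosted-model API call (all names are invented).
from openai import OpenAI  # official OpenAI Python SDK, v1+

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def proprietary_insight_engine(customer_question: str) -> str:
    """What a vendor might market as in-house AI: one prompt, one API call."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model; the vendor trains none of it
        messages=[
            {"role": "system", "content": "You are an industry analyst."},
            {"role": "user", "content": customer_question},
        ],
    )
    # No proprietary training data, pipeline, or evaluation is involved.
    return response.choices[0].message.content


if __name__ == "__main__":
    print(proprietary_insight_engine("Summarize the risks of AI washing."))
```

There is nothing wrong with building on such APIs, but a vendor whose entire offering reduces to a call like this should not be funded, or priced, on the strength of proprietary AI research.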
Pay attention to startups
Startups position themselves at the forefront of innovation. However, while many of them push the boundaries of what’s possible in the field of AI, some may simply exaggerate their capabilities to gain attention and money.
“As a CTO of a machine learning company myself, I often encounter cases of AI washing, especially in the startup community,” says Vlad Pranskevičius, co-founder and CTO of Ukrainian-American startup Claid.ai by Let’s Enhance. He has noticed, though, that the situation has recently become more acute, adding that the phenomenon is especially dangerous during hype cycles like the current one, as AI is perceived as a new gold rush.
Pranskevičius believes, though, that AI washing will be kept in check in the near future as regulations around AI become more stringent.
Build a tech professional reputation
It’s not uncommon for a company to acquire dubious AI solutions, and in such situations, the CIO may not necessarily be at fault. It could be “a symptom of poor company leadership,” says Welch. “The business falls for marketing hype and overrules the IT team, which is left to pick up the pieces.”
To prevent moments like these, organizations need to foster a collaborative culture in which the opinions of tech professionals are valued and their arguments are listened to in full.
At the same time, CIOs and tech teams should build their reputation within the company so their opinion is more easily incorporated into decision-making processes. To achieve that, they should demonstrate expertise, professionalism, and soft skills.
“I don’t feel there’s a problem with detecting AI washing for the CIO,” says Max Kovtun, chief innovation officer at Sigma Software Group. “The bigger problem might be the push from business stakeholders or entrepreneurs to use AI in any form because they want to look innovative and cutting edge. So the right question would be how not to become an AI washer under the pressure of entrepreneurship.”
Go beyond the buzzwords
When comparing products and services, it’s essential to evaluate them with an open mind, looking at their attributes thoroughly.
“If the only advantage a product or service has for you is AI, you should think carefully before subscribing,” Tkachenko says. “It’s better to study its value proposition and features and only start cooperation when you understand the program’s benefits beyond AI.”
Welch agrees: “Am I going to buy a system because they wrote it in C, C++, or Java?” he asks. “I might want to understand that as part of my due diligence on whether they’re going to be able to maintain the code, company viability, and things like that.”
Doing a thorough evaluation may help organizations determine whether the product or service they plan on purchasing aligns with their objectives and has the potential to provide the expected results.
“The more complex the technology, the harder it is for non-specialists to understand it to the extent it enables you to verify that the application of that technology is correct and makes sense,” Kovtun says. “If you’ve decided to utilize AI tech for your company, you better onboard knowledgeable specialists with experience in the AI domain. Otherwise, your efforts might not result in the benefits you expect to receive.”
Follow AI-related news
Being up to date on AI-related products and the issues surrounding them can also help CIOs make informed decisions. This way, they can spot potential pitfalls and, at the same time, leverage new ideas and technologies.
“I don’t think there’s enough education yet,” says Art Thompson, CIO at the City of Detroit.
He recommends CIOs do enough research to avoid falling into a trap with new or experimental technology that promises more than it can deliver. If that happens, “the amount of time to rebid and sort out replacing a product can really harm staff from being able to get behind any change,” he says. “Not to mention the difficulty in people investing time to learn new technologies.”
In addition, staying informed on the latest AI-related matters can help CIOs anticipate regulatory changes and emerging industry standards, allowing them to remain compliant and maintain a competitive edge.
And it’s more than just the CIO who needs to stay up to date. “Educate your team or hire experts to add the relevant capabilities to your portfolio,” says BearingPoint’s Roeser.
Additional regulatory action around AI
New regulations on the way could simplify the task of CIOs seeking to determine whether a product or service employs real AI technology. The White House recently issued a Blueprint for an AI Bill of Rights, with guidelines for designing AI systems responsibly. And more regulations might be issued in the coming years.
“The premise behind these actions is to protect consumer rights and humans from potential harm from technology,” Ammanath says. “We need to anticipate the potential negative impacts of technology in order to mitigate risks.”
Ethics shouldn’t be an afterthought
Corporations tend to influence the discourse on new technology, highlighting the potential benefits while often downplaying the potential negative consequences.
“When a technology becomes a buzzword, we tend to lose focus on the potentially harmful impacts it can have in society,” says Philip Di Salvo, a post-doctoral researcher at the University of St. Gallen in Switzerland. “Research shows that corporations are driving the discourse around AI, and that techno-deterministic arguments are still dominant.”
This belief that tech is the main driving force behind social and cultural change can obscure discussions around ethical and political implications in favor of more marketing-oriented arguments. As Di Salvo puts it, this creates “a form of argumentative fog that makes these technologies and their producers even more obscure and non-accountable.”
To address this, he says, a crucial challenge is communicating to the public what AI actually isn’t and what it can’t do.
“Most AI applications we see today — including ChatGPT — are basically constructed around the application of statistics and data analysis at scale,” says Di Salvo. “This may sound like a boring definition, but it helps to avoid any misrepresentation of what ‘intelligent’ refers to in the ‘artificial intelligence’ definition. We need to focus on real problems such as biases, social sorting, and other issues, not hypothetical, speculative long-termist scenarios.”