In mid-November, OpenAI’s board fired the company’s CEO, Sam Altman, the man who put ChatGPT on the map and ushered in a new era of corporate AI deployments. Within three days, nearly all of the company’s employees said they’d walk out the door, and the fate of OpenAI looked extremely uncertain.
Entire businesses have been built on top of OpenAI and its APIs.
According to an O’Reilly survey released late last month, 23% of companies are using one of OpenAI’s models. Its closest commercial competitor, Google’s Bard, is far behind, with just 1% of the market. Other respondents said they aren’t using any generative AI models, are building their own, or are using an open-source alternative.
Putting aside the fact this is an astronomically high adoption rate for a brand new technology, it’s also an indicator of how risky this space is. An enterprise that bet its future on ChatGPT would be in serious trouble if the tool disappeared and all of OpenAI’s APIs suddenly stopped working. So if OpenAI came within a hair’s breadth of collapsing overnight, what does this say about the survival odds of the innumerable start-ups in this space?
According to G2’s latest state of software report, AI is the fastest-growing software category in G2 history. The company now tracks a total of 1,078 AI vendors, and AI categories gained 643 new products over the previous year.
Synthetic media, which includes AI-generated text, images, audio, and video, grew by 222% compared to the previous year. And the AI writing assistant category grew by 177%. So enterprises looking for generative AI vendors have a lot of options to choose from.
“We’ve been conducting extensive research with partners like Gartner, McKinsey, and others to understand the market landscape and how other companies are using this technology,” says Yexi Liu, CIO of food products multinational Rich Products. The $3.8 billion company has 11,000 employees and a presence in more than 100 countries, and has already picked its primary AI providers: Microsoft, SAP, and Salesforce.
Beyond that, most vendors are still falling short.
“Many generative AI vendors claim they offer an end-to-end AI solution,” Liu says. “But the reality is many of these companies are still in the early stages. There’s no clear leader in the market yet.”
When assessing vendors, Rich Products looks at their technology, architecture, business value, and pragmatic perspective. The goal, he says, is to understand how AI will benefit Rich’s business overall. “We look at the vendor’s maturity and if they have proven success in the right focus areas for our business,” he says.
He’s not the only one. According to an Ernst & Young survey of 1,200 global CEOs released in late October, 99% are either planning or are already making “significant” investments in generative AI. But it’s not exactly a safe bet. The risk of going out of business is just one of many disaster scenarios that early adopters have to grapple with. There’s also the ever-present threat of copyright lawsuits related to AI-generated text and images, accuracy of AI-generated content, and the risk of having sensitive information become training data for the next generation of the AI model — and getting exposed to the world. There’s bias in both the training data sets and in the results, and there are ethical concerns, runaway costs, integration challenges, model drift, lack of transparency, data security risks, plagiarism risks, and regulatory risks.
And it’s not just start-ups that can expose an enterprise to AI-related third-party risk. Established vendors are racing to add generative AI to their products and services as well.
Taking a wait-and-see attitude toward generative AI carries significant risks as well, including losing staff and customers to more nimble competitors, and falling behind when it comes to understanding how to use the new technology.
So the top questions that go beyond the usual due diligence that companies must ask when evaluating generative AI vendors have to do with training data, copyright, added value, and model independence.
Data privacy, security, and compliance
For Rich Products, data protection, responsible AI, and trustworthy AI are critical.
“It’s imperative we protect our IP and ensure our AI solutions will be designed to be fair, unbiased, safe, and explainable,” he says. “This is non-negotiable and something we’ll clearly define with the vendor up front. We aren’t going to enter into a partnership on blind trust.”
In addition, for particularly sensitive business information and data, he expects to see even more security. “The vendor must offer the capability for us to build the AI solution in our own tenant,” he says.
Many enterprises already had cybersecurity and data privacy at or near the top of their checklists when selecting vendors, whether AI or not. And in regulated industries, vendors must also comply with specific regulations, such as HIPAA or PCI.
The same approach can be extended to include generative AI vendors, products, and services, but there are some new twists. For example, companies should already ask what kind of security audits and standards vendors have in their cloud environments, says Gartner analyst Arun Chandrasekaran.
Now, with generative AI, they should also ask about the measures vendors take to ensure that data remains private and isn’t used to train and enrich their models, he says.
“How is the prompt data stored in their environment?” he asks. “Can I run it in my own virtual cloud?”
Megan Amdahl, SVP of partner alliances and operations at Insight, an Arizona-based solution integrator, says her company evaluates generative AI vendors both for internal use and on behalf of its clients.
Insight has a partner contract management team that looks closely at vendor agreements.
“If they have any terms we consider risky or questionable, we require executive review,” she says. “And we don’t just have our contracts team in place for the original signing, but also to review all the addendums they’re requesting, to make sure we’re protecting against any types of risk that can be inserted.”
This isn’t just a theoretical concern. Earlier this year, video conferencing vendor Zoom added generative AI capabilities, including automated meeting summaries. In March, it gave itself the right to use customer data to train its models. Enterprises were up in arms when the fine print came to light this summer, and Zoom quickly reversed course.
Model training
Vendors training their models on customer data isn’t the only training-related risk of generative AI. Several AI vendors, including OpenAI, are currently being sued by artists, authors, and other copyright holders. Depending on how these lawsuits go, the vendors may have to change their business models or change their pricing structure in order to pay copyright owners — or possibly close up shop entirely.
In addition to lawsuits, there’s also a potential of regulatory action that might make certain kinds of training data off-limits. These risks could, potentially, extend to the enterprises using these products and services.
Companies should also ask vendors about their model training process, says Chandrasekaran. “How transparent are they in their model training process?”
In particular, how do they make sure they’re not infringing on private data, he asks, and are there any legal actions against the company?
There’s another question enterprises can ask, he adds: “What kind of legal protection and legal indemnification do they provide to me as a customer?”
Several major vendors have already announced they’ll indemnify enterprise customers against the potential copyright risks associated with using their products. Microsoft, for instance, announced its legal indemnification policy for Copilot in September. If you’re challenged on copyright grounds, the company said, we’ll assume responsibility for the potential risks involved.
Google announced a similar policy in October, using almost identical wording, and Adobe, which offers the Firefly image generation model, announced its own legal indemnification in June. Firefly is the model that powers the new generative fill feature in Photoshop and other Adobe products, and is also available as a standalone service. Getty, OpenAI, and Amazon quickly followed as well.
Do they have a moat?
When ChatGPT was first launched, it didn’t have the ability to read PDF documents, but the ability to analyze the content of a PDF is a major enterprise use case for generative AI. As a result, several start-ups sprang up to fill this gap in functionality.
In October, ChatGPT added PDF upload functionality, making most of these start-ups irrelevant overnight. Enterprises that had built PDF workloads on those start-ups’ technology now faced the risk that the vendors would go out of business before the enterprises could rebuild their systems.
This isn’t a new kind of problem, says Andy Thurai, VP and principal analyst at Constellation Research. A startup can easily become obsolete in any area of technology. “The difference is that the speed at which the AI models are releasing features is mind-boggling,” says Thurai. “With other software iterations it wasn’t that fast. It would take six months to a year.” That would give the smaller vendors time to innovate further, or give customers time to migrate.
He recommends enterprise customers approach their AI vendors with a “kill switch” philosophy, and not just because of the risk of them becoming obsolete.
There could be a management or organizational problem, like what happened at OpenAI, he says.
“And there’s a possibility some of these vendors can go bankrupt in no time,” he adds. “They might quickly burn through their cash and go belly up. Or one of their systems gets hacked and you don’t want to have your calls go through there anymore.”
To prepare themselves for that eventuality, enterprises should have a backup plan that allows them to continue to operate without that particular vendor.
“You have to have a kill switch option,” he says.
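In technical terms, a kill switch can be as simple as a configuration-driven fallback chain, so that disabling a vendor is a config change rather than a rewrite. The sketch below is a minimal illustration of that idea; the provider names and the lambda "SDK calls" are stand-ins, not real vendor APIs.

```python
class NoProviderAvailable(Exception):
    """Raised when every provider is disabled or failing."""

def complete_with_fallback(prompt, providers, disabled=frozenset()):
    """Try each provider in order, skipping any that have been killed off."""
    last_error = None
    for name, call in providers:
        if name in disabled:          # the "kill switch": a config entry, not a redeploy
            continue
        try:
            return call(prompt)
        except Exception as err:      # vendor outage, auth failure, etc.
            last_error = err
    raise NoProviderAvailable(f"no usable provider; last error: {last_error}")

# Usage: primary SaaS vendor first, a self-hosted model as the backup.
providers = [
    ("primary_saas", lambda p: f"[primary] {p}"),   # stand-in for a vendor SDK call
    ("local_model",  lambda p: f"[local] {p}"),     # stand-in for a self-hosted model
]
print(complete_with_fallback("Summarize Q3 results", providers,
                             disabled={"primary_saas"}))
```

With the primary vendor in the `disabled` set, traffic flows to the local model with no code change, which is the property Thurai is arguing for.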
And a kill switch is more than just the technical ability to switch vendors without rebuilding an entire solution, says Nick Kramer, VP for applied solutions at SSA & Company. “It also includes the contractual ability to terminate the relationship.”
Enterprises also need to pay attention to how defensible a vendor’s product offerings are, says Sandeep Agrawal, legal technology and alliances leader at PricewaterhouseCoopers.
“A lot of companies put a thin wrapper around GPT-4 or Claude 2 and call it generative AI,” he says. “But what’s really there beneath that? And do they have the right skill sets in terms of engineering and governance?”
If a vendor isn’t adding significant value, it will have a hard time staying in business, especially if its key feature ends up implemented by the AI platform itself, as happened with PDFs.
“Our legal team and procurement team have to understand and analyze PDF documents and contracts, some of which were signed 20 years ago,” he says.
So PricewaterhouseCoopers would benefit from a vendor offering the ability to read PDFs, but that’s now a standard feature, and a separate vendor isn’t needed unless it offers something special. “For example, say they uploaded millions of contracts and understand the specific language of the contracts, and spent time and effort to train and fine-tune the model to get better responses to specific questions,” he says.
A generic foundation model would give generic answers about PDFs, he adds. That might work for a general business user, but not for someone in a very specific and technical domain. And doing this fine-tuning in-house would take too long, he says, because speed to market is very important.
PricewaterhouseCoopers employs 4,000 lawyers, he says, and has a lot of proprietary data related to legal documents.
“If you have proprietary data, you can use it to create specialized domain models for contracts, legal research, litigation, and claims,” he says. “But if you try to build all of that by yourself, you won’t be successful in terms of speed to market. And that’s a big reason why we choose companies that have already done that.”
Vendors that specialize in, say, legal PDFs, financial PDFs, or those related to the pharmaceutical industry would still be able to provide value.
“Vendors need to understand the environment of their specific sector,” he says. “Can you create additional attributes, better user interfaces, and more friendly workflow?”
Model independence
In addition to looking for vendors that provide significant added value on top of the base foundation model they’re using, PricewaterhouseCoopers also chooses vendors that are flexible on the model they use.
“Twelve months ago, every vendor was focused on what ChatGPT was doing and building,” says Agrawal. “Now more of the established vendors are multi-model on the back end. They’re trying different foundation models for different things.”
Something could happen to a foundation model, or a better one might come along for a particular use case.
“If you’re not flexible and agile enough, your clients will move away,” he says.
There are now more than 200 foundation models, says Lian Jye Su, chief analyst for applied intelligence at tech consultancy Omdia.
“The vendor must have a deep understanding of the capabilities and technologies of the suitable foundation model,” he says. “And foundation models are prone to hallucination, so they must be grounded and linked with external vector databases.”
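The grounding pattern Su describes can be sketched without any particular product: embed the documents, retrieve the ones closest to a question, and prepend them to the prompt so the model answers from known text rather than inventing one. The toy word-count “embedding” below is purely illustrative; a production system would use a real embedding model and one of the hosted vector databases.

```python
import re
from collections import Counter
from math import sqrt

def embed(text):
    """Toy bag-of-words 'embedding' (illustrative only)."""
    return Counter(re.findall(r"\w+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(count * b[term] for term, count in a.items())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Stand-in for documents stored in a vector database.
documents = [
    "Firefly powers the generative fill feature in Photoshop.",
    "Some vendors indemnify customers against copyright claims.",
]

def retrieve(question, k=1):
    """Return the k documents most similar to the question."""
    q = embed(question)
    return sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def grounded_prompt(question):
    # Prepending retrieved passages "grounds" the model in external text,
    # shrinking the room it has to hallucinate an answer.
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(grounded_prompt("Who indemnifies customers for copyright claims?"))
```

The hosted vector databases mentioned above do the same retrieval step at scale, over millions of embedded documents.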
There are now more than 20 different hosted vector databases to choose from, he says, each with its own strengths. And it’s not just vendors who need to be flexible on what foundation model they use. Enterprises fine-tuning or training their own generative AI systems should also do everything they can to be model agnostic, says Gartner’s Chandrasekaran.
“The model they’re using today won’t be the model they’ll use 12 months down the line,” he says. “They need to have the ability to swap out those models.”
Enterprises that consume foundation models directly can build their systems so the API layer is isolated from the rest of the application. Then they can make API calls to the best model for each task, or swap out models completely when better or cheaper ones come along.
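One way to sketch that isolation is to hide every vendor behind a single interface and resolve the concrete model from configuration. The class and model names below are hypothetical; real adapters would wrap each vendor’s SDK.

```python
from abc import ABC, abstractmethod

class ChatModel(ABC):
    """The one interface the rest of the application is allowed to see."""
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class VendorAModel(ChatModel):          # stand-in for a hosted API client
    def complete(self, prompt):
        return f"vendor-a answer to: {prompt}"

class VendorBModel(ChatModel):          # stand-in for an open-source model
    def complete(self, prompt):
        return f"vendor-b answer to: {prompt}"

REGISTRY = {"vendor-a": VendorAModel, "vendor-b": VendorBModel}

def get_model(name: str) -> ChatModel:
    """Application code calls this; swapping models is a config change."""
    return REGISTRY[name]()

# Everything downstream depends only on ChatModel.complete(),
# so replacing vendor-a with vendor-b touches no application code.
model = get_model("vendor-b")
print(model.complete("Draft a summary"))
```

The design choice is the same one Chandrasekaran describes: the model in use today is a parameter, not a dependency baked into the application.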
Another approach that some enterprises are looking at is to create AI orchestration layers that can span multiple systems and can hook into different cloud providers, different data sources, different foundation models, and even different enterprise software platforms.
“When you look at business flow, you need to look at it end-to-end,” says Ram Palaniappan, CTO at TEKsystems, a systems integrator. “It may start with Salesforce and end up in Oracle, but it needs to start with the user experience, and the end-to-end use case will drive how you tie those things together.”
There are multiple vendors offering these AI super-apps, he says, and the hyperscalers are also rolling out their own options.
LangChain is the best-known open-source option in this space. Nvidia has its own offering, and LlamaIndex is also gaining traction with enterprises, says Palaniappan.
“Some platform vendors, like Google, are building their own application layer,” he says. “They allow multiple foundation models, and they also integrate with LangChain as well.” Microsoft and AWS also have their own app builders, he adds.
A platform vendor’s app layer is a good option for enterprises committed to a single cloud platform. “If you want to integrate on the app layer, a third-party super app will be a good choice,” he says. “Something like LangChain, which is portable across all three cloud platforms. But if the majority of your needs can be fulfilled by one hyperscaler, then you don’t need that.”
Read More from This Article: Weighing risk and reward with gen AI vendor selection