No one would dispute that artificial intelligence (AI) is reimagining how businesses and entire industries operate.
Yet, as the hype around AI and machine learning intensifies, so does the number of AI buzzwords designed to lure and distract. When talking with AI vendors, it’s easy to be enticed by terms such as “sentient AI”, “large language models”, “virtual copilot”, and others. But these terms are often used to camouflage a product’s limitations.
With FutureIT Chicago coming up on June 18, we spoke with event speaker and CEO of Graft, Adam Oliner, about how to identify misleading AI buzzwords and arm yourself with the right questions when assessing AI vendor claims.
Read on for Adam’s thoughts on cutting through the AI hype.
Adam, can you discuss the most common AI buzzwords that companies should watch out for?
Adam Oliner: “Sure, here are examples of the various types of AI buzzwords – although there will be new ones next week!
One class of AI buzzwords is simply terms that are undefined. Many are anthropomorphizing, such as sentient, conscious, understands, thinks, and hallucinates. Sometimes vendors use these terms to intentionally mislead; these words carry numerous connotations that may or may not apply to AI.
Other times they’re used because a more precise word is unavailable. How should you describe an AI model generating something that disagrees with reality? Hallucination is the word people use, but the model is always generating a statistically plausible output, whether it gets it right or wrong, so calling it a hallucination is misleading. It’s always hallucinating … it’s just usually right.
But really, the problem with all these words is that they have no technical definition. What does it mean for an AI to be conscious? I’m not even aware of an externally testable definition of consciousness for living creatures, let alone for machines.
Other AI buzzwords are just sloppy analogies. Again, there’s something about the analogy that’s useful and true, and other parts that aren’t, but you’re left to guess which is which. Assistant, agent, and copilot are good examples. A real copilot is a full replacement for a human pilot. If the pilot dies, the copilot can take over completely. That’s absolutely not true for AI copilots.
Another class of AI buzzwords are specific technologies that almost always get subsumed quickly by new technologies. For example, at Graft we use the term ‘foundation model’ rather than ‘large language model [LLM]’ because LLMs were originally used for models that generate text. Foundation models are used for broader applications. So I predict we’ll hear the term LLM less and less. The same goes for RAG [retrieval augmented generation] and other temporary fixes for current AI deficiencies.”
What questions should companies ask AI vendors to accurately assess the capabilities and limitations of their products?
AO: “In general you should ask about practical considerations like scale, compliance, steerability, security, and business value.
But here are specific vendor questions:
Is this capability currently available in the product or is it on the roadmap? This weeds out companies that are marketing aspirationally. Some vendors get excited about quickly achieving a 20% solution and naïvely project that rate of progress out to 100%.
Could we see a live instance of this feature? What happens if you do X? Don’t be fooled by flashy demos that walk you down garden paths.
What do you mean by that term? Vendors play fast and loose with AI buzzwords, so don’t be shy about asking for clarification about a word or phrase. It’s not a sign that you don’t understand the technology. If anything, it’s a sign that you do.
What assumptions need to be true for that statement to hold? For example, a product might claim to increase something by XX%. That number comes from making assumptions. If those assumptions don’t hold true for you, don’t expect that the claim will, either.
Why is that the most important factor to consider? Are there other factors that matter? Vendors want you to use their rubric and metrics, because those are the ones they win with. But maybe your priorities are different. Maybe they left out a really important consideration because it’s a point of weakness for them.”
Adam, can you describe a shoddy AI implementation that was propped up by hype versus a genuine implementation with tangible benefits?
AO: “I’m loath to pick on anyone specific, so I’ll do it anonymously and also discuss my personal experience.
A few years ago, I failed to ask the right questions when purchasing a vehicle that boasted “full self-driving.” Today, that car still does not have that capability, according to my colloquial understanding of those words.
Where did I go wrong? I didn’t ask more questions about the claim. What does ‘full’ mean? For that matter, what does ‘self-driving’ mean when I’m required to be in the car, paying attention, with my hands on the wheel the whole time? I failed to ask whether it was currently available or even what timeline it was on. Not in the first couple of years after purchase, apparently!
In terms of shoddy implementations, chatbots can be a challenge. Lots of companies think you can fine-tune an LLM on your data and magically get a personalized AI chatbot. They’ve spent spectacular amounts of money building bots only to see spectacular failures. Some bots have given customers crazy discounts, sold them a competitor’s product, given financial and legal advice, and so on.
The fact is, real AI solutions require real engineering. Any vendor promising quick, easy answers should be treated as suspect.
A positive experience with AI happens when the solutions are simple and trustworthy. Simple because most people aren’t AI experts and won’t alter their behavior just to use a new tool. Trustworthy because there’s typically already a way to solve the problem without AI, so if people don’t get better results quickly with AI, they will default to the old ways. Tools that improve existing workflows and automatically perform useful tasks a person simply wouldn’t do otherwise tend to be more successful.
AI isn’t all hype. It’s driving real business value. My biggest piece of advice is to evaluate AI like you would any other software solution.”
During his session at FutureIT Chicago 2024 on June 18, Adam Oliner will discuss AI marketing buzzwords and the right questions to ask AI vendors.