Barely a year after the release of ChatGPT and other generative AI tools, 75% of surveyed companies have already put them to work, according to a VentureBeat report. But as the number of new gen AI-powered chatbots grows, so do the risks of their occasional glitches: nonsensical or inaccurate outputs that are not easily screened out of the large language models (LLMs) the tools are trained on.
In AI parlance, they’re called hallucinations. They don’t present big problems if you’re noodling around with gen AI prompts at home, but in enterprise organizations that are deploying new chatbots to huge numbers of customers and employees, just one AI fabrication can land companies in court.
Last spring, a judge sanctioned a law firm for citing judicial opinions with fake quotes and citations in a legal brief that a chatbot had drafted. The firm admitted that it “failed to believe that a piece of technology could be making up cases out of whole cloth.”
Hallucinations occur when the data used to train LLMs is of poor quality or incomplete. The rate of occurrence runs between 3% and 8% for most generative AI platforms. “Chatbots are almost like a living organism in that they are continually iterating as they ingest new data,” says Steven Smith, chief security architect at Freshworks. “You get out what you put in.”
Chatbot missteps
Customer service chatbots that dispense incorrect advice or information can undermine key objectives such as customer satisfaction; in highly complex (and regulated) sectors like healthcare or finance, they can also cause confusion and potential harm.
In IT organizations, gen AI glitches wreak havoc in other ways. Chatbots may assign service tickets incorrectly, describe a problem inaccurately, or disrupt workflows in ways that lead to significant systemic issues, such as data breaches or the misallocation of vital resources, that then require human intervention.
For engineers, AI-generated code used in software development may contain security vulnerabilities or reproduce intellectual property that the model ingested during training. AI systems can also overlook complex bugs or security issues that only a developer would catch and resolve.
“Software copilots are fantastic, but you want to read and understand what they give you,” Smith says. “Blindly putting code into production because you believe it’s from an expert is no safer than copying it from StackExchange (the question-and-answer site once favored by coders in search of a specific snippet) if you have no idea what that code is doing.”
Minimizing risk
Many companies are starting to invest in mitigating risk. Here are some of the most effective strategies, according to experts.
- Deploy content filters. A variety of technical or policy-based guardrails can protect against inappropriate or harmful content. For example, content filters can decline to respond to questions about sensitive topics. In customer-service scenarios, a chatbot should also hand off an inquiry to a human operator quickly if it is confused or unable to track down the precise answer (a minimal handoff sketch follows this list).
- Continually upgrade data quality. When training LLMs, IT teams should validate the data to ensure it is high quality, relevant, and comprehensive. Training data should also be reviewed regularly to protect against “model drift,” the degradation of performance that occurs as the underlying data changes over time (a simple drift check is sketched after this list).
- Set up security guardrails. Limiting a chatbot’s ability to connect to third-party apps and services reduces the opportunity for it to generate misleading, inaccurate, or potentially damaging data. Side benefits of sandboxing the chatbot in this way are better performance (fewer dependencies) and easier compliance in industries where that is essential (an allowlist sketch follows this list).
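
For teams that want to see what the filter-and-handoff pattern looks like in practice, here is a minimal sketch in Python. The classify_topic, answer_with_confidence, and escalate_to_agent helpers are hypothetical placeholders for whatever moderation model, LLM client, and ticketing system a company actually uses; only the control flow is the point: screen sensitive topics first, then route low-confidence answers to a human.

```python
"""Minimal sketch of a content filter plus human handoff for a support chatbot.

The helpers below are placeholders for a real moderation service, LLM client,
and ticketing system; only the control flow is meant to be illustrative.
"""

BLOCKED_TOPICS = {"medical_advice", "legal_advice", "billing_disputes"}  # example policy list
CONFIDENCE_FLOOR = 0.75  # assumed threshold; tune against real transcripts


def classify_topic(message: str) -> str:
    # Placeholder: in practice, call a moderation or intent-classification model.
    return "general"


def answer_with_confidence(message: str) -> tuple[str, float]:
    # Placeholder: in practice, call the LLM and return its answer plus a confidence score.
    return "Here is how to reset your password...", 0.92


def escalate_to_agent(message: str, reason: str) -> str:
    # Placeholder: in practice, open a ticket and route the conversation to a human.
    return f"Let me connect you with a human agent ({reason})."


def handle_inquiry(message: str) -> str:
    topic = classify_topic(message)
    if topic in BLOCKED_TOPICS:  # policy guardrail: decline and hand off
        return escalate_to_agent(message, reason=f"sensitive topic: {topic}")

    answer, confidence = answer_with_confidence(message)
    if confidence < CONFIDENCE_FLOOR:  # hand off instead of guessing
        return escalate_to_agent(message, reason="low-confidence answer")

    return answer


if __name__ == "__main__":
    print(handle_inquiry("How do I reset my password?"))
```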
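
On the data-quality front, a lightweight way to catch drift is to re-run a fixed, human-reviewed set of questions against the chatbot on a schedule and flag any drop in accuracy. The sketch below assumes a hypothetical ask_chatbot() client, and the golden questions and threshold are purely illustrative.

```python
"""Minimal sketch of a scheduled drift check against a human-reviewed "golden" set.

ask_chatbot() is a placeholder for the deployed chatbot or LLM endpoint; the
golden questions and pass-rate floor are illustrative, not recommended values.
"""

GOLDEN_SET = [
    ("What is the refund window?", "30 days"),
    ("Which plans include phone support?", "Pro and Enterprise"),
]
MIN_PASS_RATE = 0.9  # assumed floor; a drop below it triggers a human review of the data


def ask_chatbot(question: str) -> str:
    # Placeholder: in practice, send the golden question to the live chatbot.
    return "Refunds are accepted within 30 days." if "refund" in question else "Pro and Enterprise plans."


def golden_pass_rate() -> float:
    # Count answers that still contain the expected reference phrase.
    passed = sum(expected.lower() in ask_chatbot(q).lower() for q, expected in GOLDEN_SET)
    return passed / len(GOLDEN_SET)


if __name__ == "__main__":
    rate = golden_pass_rate()
    if rate < MIN_PASS_RATE:
        print(f"Possible drift: only {rate:.0%} of golden answers matched; review training data.")
    else:
        print(f"Golden-set pass rate {rate:.0%}; no drift flagged.")
```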
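
And for guardrails around third-party connections, a simple allowlist in front of the bot's integration layer keeps it inside an approved sandbox and leaves an audit trail. The service names below are illustrative assumptions, not a prescribed configuration.

```python
"""Minimal sketch of an integration allowlist for a sandboxed chatbot.

The service names are illustrative; the point is that the bot can only reach
integrations that have been explicitly approved, and every attempt is logged
so compliance teams can audit it.
"""

import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("chatbot.guardrails")

ALLOWED_INTEGRATIONS = {"internal_kb", "ticketing"}  # example approved services


def call_integration(name: str, payload: dict) -> dict:
    if name not in ALLOWED_INTEGRATIONS:
        log.warning("Blocked call to unapproved integration: %s", name)
        raise PermissionError(f"Integration '{name}' is not on the allowlist")
    log.info("Calling approved integration: %s", name)
    # Placeholder: in practice, dispatch to the real connector here.
    return {"integration": name, "status": "ok", "payload": payload}


if __name__ == "__main__":
    call_integration("internal_kb", {"query": "reset password"})
    try:
        call_integration("external_crm", {"query": "customer record"})
    except PermissionError as err:
        print(err)
```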
Hallucinations may be a problem today, but research to solve them is underway. To improve both accuracy and reliability, researchers are exploring everything from building bigger models to having LLMs do the fact-checking themselves.
Ultimately, the best way to mitigate the risks of chatbot errors, Smith says, is to use common sense. “AI can be fantastic, but it needs to operate under your rules of engagement,” he says. “You want to define the things it can do, but also the things it cannot do, and ensure that it operates within those specific parameters.”
For more insights about innovating with AI while minimizing the risks, visit The Works.