Conversational AI is changing the way we do business.
In 2018, IBM boldly declared that chatbots could now handle 80% of routine customer inquiries. That report even forecast that bots would achieve a 90% success rate in their interactions by 2022.[1] As we survey the landscape of businesses using conversational AI, it appears to be playing out that way.
Not many customers are thrilled with these developments, however. According to recent research by UJET, 80% of customers who interacted with bots reported that doing so increased their frustration, and 72% went so far as to call chatbot interactions a “waste of time.”[2]
While it’s true that chatbots and conversational IVR systems have made significant strides in their ability to deliver quality service, they still come with serious limitations. Most notably, they tend to take on the biases of their human designers — sometimes even amplifying them. If contact center leaders want to rely heavily on this technology, they can’t ignore this issue.
What is chatbot and conversational AI bias?
At first glance, the idea of a computer holding biases may seem paradoxical. It’s a machine, you might say, so how can it have an attitude or disposition for or against something?
Remember, though, that artificial intelligence is created by humans. As such, its programming reflects its human creators — including any of their biases. In many cases, those biases may even be amplified because they become deeply encoded in the AI.
There have been a few extreme (and well-known) examples of this. Microsoft’s chatbot, Tay, was shut down after only 24 hours when it started tweeting hateful, racist comments. Facebook’s chatbot, Blender, similarly learned vulgar and offensive language from Reddit data.
As disturbing and important as those extreme examples are, they overshadow the more pervasive and persistent problem of chatbot bias. For instance, the natural language processing (NLP) engines that drive conversational AI often perform poorly at recognizing linguistic variation.[3] As a result, bots regularly fail to recognize regional dialects or to account for the vernacular of the cultural and ethnic groups that use them.
More subtle is the tendency of chatbots and other forms of conversational AI to take on female characteristics, reinforcing stereotypes about women and their role in a service economy.[4] In both cases, it’s clear that these bots are mirroring biases present in their human authors. The question is: what can be done about it — especially at the contact center level?
Confronting the problem
Many of the solutions for chatbot bias lie in the hands of developers and the processes they use to build their chatbots. Most importantly, development teams need a diverse set of viewpoints at the table to ensure those views are represented in the technology.
It’s also crucial to acknowledge the limitations of conversational AI and build solutions with those limitations in mind. For instance, chatbots tend to perform better when their task sets aren’t so broad that they introduce too many variables. When a bot has a specific job, it can focus its parameters narrowly on a particular audience with less risk of bias.
Developers don’t operate in a vacuum, though, and it’s critical to consider the end user’s perspective when designing and evaluating chatbots. Customer feedback is an essential component of developing and redesigning chatbots to better eliminate bias.
An effective approach for fine-tuning chatbot algorithms involves all the above — and more. To accelerate the process and dig deeper, you need to harness the power of AI not only for building chatbots but for testing them.
Digging deeper to uproot bias
These aren’t the only ways to teach bots to do better, though. One of the most effective options is to let AI do the work for you. In other words, instead of waiting for diverse perspectives to come from your development team or customers, why not proactively uproot bias by throwing diverse scenarios at your bots?
An effective conversational AI testing solution should be able to perform a range of tests that help expose bias. For instance, AI allows you to add “noise” to the tests you run against your conversational IVR. This noise can be literal audio noise, but it can also include bias-oriented variations, such as exposing the IVR to different accents, genders, or linguistic variations to see whether it responds appropriately.
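As a rough illustration, here’s what such a probe might look like in code. This is a minimal sketch, not a vendor API: the `synthesize` and `recognize` callables are hypothetical stand-ins for whatever TTS engine and IVR test harness your platform actually provides.

```python
# Minimal sketch of an accent/noise probe for a conversational IVR.
# `synthesize` and `recognize` are hypothetical stand-ins you would wire
# to your own TTS engine and IVR test harness.

from typing import Callable

VOICES = ["en-US-female", "en-GB-male", "en-IN-female", "en-AU-male"]
UTTERANCE = "I want to check my account balance"
EXPECTED_INTENT = "check_balance"

def probe_ivr(
    synthesize: Callable[[str, str, bool], bytes],  # (text, voice, add_noise) -> audio
    recognize: Callable[[bytes], str],              # audio -> intent the IVR resolved
) -> list[tuple[str, bool, str]]:
    """Render the same request in several voices, with and without noise,
    and record the intent the IVR resolves for each variant."""
    results = []
    for voice in VOICES:
        for noisy in (False, True):
            audio = synthesize(UTTERANCE, voice, noisy)
            results.append((voice, noisy, recognize(audio)))
    return results

def report(results: list[tuple[str, bool, str]]) -> None:
    # Variants that miss the expected intent are candidates for bias review.
    for voice, noisy, intent in results:
        status = "OK  " if intent == EXPECTED_INTENT else "MISS"
        print(f"{status} voice={voice:<13} noise={noisy!s:<5} got={intent}")
```

If the IVR resolves the right intent for some voices but not others, that gap is exactly the kind of bias signal this testing is meant to surface.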
On the chatbot side, AI enables you to test your bots with a wide array of alternatives and variations in phrasing and responses. Consider the possibilities, for instance, if you could immediately generate a long list of potential options for how someone might phrase a request. These might include the simple rephrasing of a question or paraphrased versions of a longer inquiry. Armed with these alternatives, you could then test your bot against the ones with the most potential for a biased reaction.
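A simple version of that consistency check might look like the sketch below. Again, these are stated assumptions rather than a real API: `classify` is a hypothetical stand-in for your chatbot’s intent-recognition call, and the paraphrase list is hand-written here, though in practice it could be generated at scale.

```python
# Minimal sketch of a phrasing-consistency check. `classify` stands in
# for your chatbot's intent-recognition call; the paraphrase list could
# be hand-curated or generated by a paraphrase model.

from typing import Callable

CANONICAL = "I'd like to reset my password"
PARAPHRASES = [
    "Can you help me reset my password?",
    "I done forgot my password",                 # dialectal variation
    "pls reset pw",                              # terse, informal register
    "How would I go about changing my password?",
]

def inconsistent_variants(
    classify: Callable[[str], str], canonical: str, variants: list[str]
) -> list[str]:
    """Return every variant the bot resolves to a different intent than
    the canonical phrasing -- each one is a candidate for bias review."""
    expected = classify(canonical)
    return [v for v in variants if classify(v) != expected]

# Example with a toy classifier; swap in your real bot client:
toy = lambda text: "reset_password" if "password" in text.lower() else "unknown"
print(inconsistent_variants(toy, CANONICAL, PARAPHRASES))
# -> ['pls reset pw']  (the terse variant slips past the toy classifier)
```

The phrasings most worth including are the ones tied to real dialects and registers your customers use; those are where inconsistent handling most directly translates into biased service.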
Testing can take you even further in your quest to mitigate bias. Training data is one of the most critical components in teaching your bot to respond appropriately, and you can use NLP testing to analyze that data and determine whether it’s instilling bias in your chatbots. You can even use AI-powered test features to expand the available set of test data, bringing more diverse conversational angles to the table. In effect, this lets you diversify your bot’s perspective even if your development team isn’t yet as diverse as it could be.
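One lightweight way to start that analysis is simply to measure coverage. The sketch below assumes each training example carries a hypothetical `variety` tag marking the dialect or register of the utterance; intents with thin coverage for a given variety are where bias is most likely to creep in.

```python
# Minimal sketch of a training-data audit. Assumes each labeled example
# carries a hypothetical `variety` tag (dialect or register); sparse cells
# in the resulting table point to the speakers the bot will serve worst.

from collections import Counter, defaultdict

def coverage_by_variety(examples: list[dict]) -> dict[str, Counter]:
    """Count training utterances per intent, broken down by language variety."""
    table: dict[str, Counter] = defaultdict(Counter)
    for ex in examples:
        table[ex["intent"]][ex["variety"]] += 1
    return table

examples = [
    {"text": "check my balance", "intent": "check_balance", "variety": "standard"},
    {"text": "lemme see what i got in my account", "intent": "check_balance", "variety": "informal"},
    {"text": "what is my balance please", "intent": "check_balance", "variety": "standard"},
]

for intent, counts in coverage_by_variety(examples).items():
    print(intent, dict(counts))
# -> check_balance {'standard': 2, 'informal': 1}
```

A skewed table doesn’t prove bias on its own, but it tells you exactly where to generate or collect additional training and test utterances first.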
AI-powered testing solutions are capable of these types of tests — and more. And, when you use AI, you rapidly accelerate your capacity for testing your conversational AI systems, whether for biases or many other issues.
You don’t have to wait until you’ve assembled the perfect team of developers or accumulated a diverse set of customer data to weed out bias in your chatbots and conversational IVR. Cyara Botium’s AI-powered testing features can help you get started right away. Take a look at our Building Better Chatbots eBook to learn more.
[1] IBM. “Digital customer care in the age of AI.”
[2] Forbes. “Chatbots And Automations Increase Customer Service Frustrations For Consumers At The Holidays.”
[3] Oxford Insights. “Racial Bias in Natural Language Processing.”
[4] UNESCO. “New UNESCO report on Artificial Intelligence and Gender Equality.”