Skip to content
Tiatra, LLCTiatra, LLC
Tiatra, LLC
Information Technology Solutions for Washington, DC Government Agencies
  • Home
  • About Us
  • Services
    • IT Engineering and Support
    • Software Development
    • Information Assurance and Testing
    • Project and Program Management
  • Clients & Partners
  • Careers
  • News
  • Contact
 
  • Home
  • About Us
  • Services
    • IT Engineering and Support
    • Software Development
    • Information Assurance and Testing
    • Project and Program Management
  • Clients & Partners
  • Careers
  • News
  • Contact

How Can Contact Centers Use AI-Powered Chatbots Responsibly?

Chatbots have been maturing steadily for years. In 2022, however, they showed that they’re ready to take a giant leap forward.

When ChatGPT was unveiled a few short weeks ago, the tech world was abuzz about it. The New York Times tech columnist Kevin Roose called it “quite simply, the best artificial intelligence chatbot ever released to the general public,” and social media was flooded with examples of its ability to crank out convincingly human-like prose.[1] Some venture capitalists even went so far as to say that its launch may be as earth shattering as the introduction of the iPhone in 2007.[2]

ChatGPT does indeed look like it represents a major step forward for artificial intelligence (AI) technology. But, as many users were quick to discover, it’s still marked by many flaws — some of them serious. Its advent signals not just a watershed moment for AI development, but an urgent call to reckon with a future that’s arriving more quickly than many expected.

Fundamentally, ChatGPT brings a new sense of urgency to the question: How can we develop and use this technology responsibly? Contact centers can’t answer this question on their own, but they do have a specific part to play.

ChatGPT: what’s all the hype about?

Answering that question first requires an understanding of just what ChatGPT is and what it represents. The technology is the brainchild of OpenAI, the San Francisco-based AI company that also released innovative image generator DALL-E 2 earlier this year. It was released to the public on Nov. 30, 2022, and quickly gained steam, reaching 1 million users within five days.

The bot’s capabilities stunned even Elon Musk, who originally co-founded OpenAI with Sam Altman. He echoed the sentiment of many people when he called ChatGPT’s language processing “scary good.”[3]

So, why all the hype? Is ChatGPT really that much better than any chatbot that’s come before? In many ways, it seems the answer is yes.

The bot’s knowledge base and language processing capabilities far outpace other technology on the market. It can churn out quick, essay-length answers to seemingly innumerable queries, covering a vast range of subjects and even answering in varied styles of prose based on user inputs. You can ask it to write a resignation letter in a formal style or craft a quick poem about your pet. It churns out academic essays with ease, and its prose is convincing and, in many cases, accurate. In the weeks after its launch, Twitter was flooded with examples of ChatGPT answering every type of question users could conceive of.

The technology is, as Roose points out, “Smarter. Weirder. More flexible.” It may truly usher in a sea of change in conversational AI.[1]

A wolf in sheep’s clothing: the dangers of veiled misinformation 

For all its impressive features, though, ChatGPT still showcases many of the same flaws that have become familiar in AI technology. In such a powerful package, however, these flaws seem more ominous.

Early users reported a host of concerning issues with the technology. For instance, like other chatbots, it quickly learned the biases of its users. Before long, ChatGPT was spouting offensive comments that women in lab coats were probably just janitors, or that only Asian or white men make good scientists. Despite the system’s reported guardrails, users were able to train it to make these types of biased responses fairly quickly.[4]

More concerning about ChatGPT, however, are its human-like qualities, which make its answers all the more convincing. Samantha Delouya, a journalist for Business Insider, asked it to write a story she’d already written — and was shocked by the results.

On the one hand, the resulting piece of “journalism” was remarkably on point and accurate, albeit somewhat predictable. In less than 10 seconds, it produced a 200-word article fairly similar to something Delouya may have written, so much so that she called it “alarmingly convincing.” The catch, however, was that the article contained fake quotes made up by ChatGPT. Delouya spotted them easily, but an unsuspecting reader may not have.[3]

Therein lies the rub with this type of technology. Its mission is to produce content and conversation that’s convincingly human, not necessarily to tell the truth. And that opens up frightening new possibilities for misinformation and — in the hands of nefarious users — more effective disinformation campaigns.

What are the implications, political and otherwise, of a chatbot this powerful? It’s hard to say — and that’s what’s scary. In recent years, we’ve already seen how easily misinformation can spread, not to mention the damage it can do. What happens if a chatbot can mislead more efficiently and convincingly?

AI can’t be left to its own devices: the testing solution

Like many reading the headlines about ChatGPT, contact center executives may be wide-eyed about the possibilities of deploying this advanced level of AI for their chatbot solutions. But they first must grapple with these questions and craft a plan for using this technology responsibly.

Careful use of ChatGPT — or whatever technology comes after it — is not a one-dimensional problem. No single actor can solve it alone, and it ultimately comes down to an array of questions involving not only developers and users but also public policy and governance. Still, all players should seek to do their part, and for contact centers, that means focusing on testing.

The surest pathway to chaos is to simply leave chatbots alone to work out every user question on their own without any human guidance. As we’ve already seen with even the most advanced form of this technology, that doesn’t always end well.

Instead, contact centers deploying increasingly advanced chatbot solutions must commit to regular, automated testing to expose any flaws and issues as they arise and before they snowball into bigger problems. Whether they’re simple customer experience (CX) defects or more dramatic information errors, you need to discover them early in order to correct the problem and retrain your bot.

Cyara Botium is designed to help contact centers keep chatbots in check. As a comprehensive chatbot testing solution, Botium can perform automated tests for natural language processing (NLP) scores, conversation flows, security issues, and overall performance. It’s not the only component in a complete plan for responsible chatbot use, but it’s a critical one that no contact center can afford to ignore.

Learn more about how Botium’s powerful chatbot testing solutions can help you keep your chatbots in check and reach out today to set up a demo.

[1] Kevin Roose, The Brilliance and Weirdness of ChatGPT, The New York Times, 12/5/2022.

[2] CNBC. “Why tech insiders are so excited about ChatGPT, a chatbot that answers questions and writes essays.”

[3] Business Insider. “I asked ChatGPT to do my work and write an Insider article for me. It quickly generated an alarmingly convincing article filled with misinformation.”

[4] Bloomberg. “OpenAI Chatbot Spits Out Biased Musings, Despite Guardrails.”

Artificial Intelligence, Machine Learning


Read More from This Article: How Can Contact Centers Use AI-Powered Chatbots Responsibly?
Source: News

Category: NewsJanuary 10, 2023
Tags: art

Post navigation

PreviousPrevious post:5 hot IT hiring trends — and 5 going coldNextNext post:CIO Leadership Live with Veneeth Purushotaman, Group Chief Information Officer, Aster DM Healthcare

Related posts

피그마 CPO “개발자 업계의 변화, 디자이너도 겪게 될 것···AI는 곧 필수 도구”
May 28, 2025
LG CNS, 생성형 AI 기반 개발 체계 전면 도입…AX 프로젝트 생산성 혁신
May 28, 2025
퓨어스토리지-SK하이닉스, 고성능·에너지 효율 데이터센터 솔루션 개발
May 28, 2025
New cybersecurity threats test the mettle of financial services CIOs
May 27, 2025
Salesforce to buy Informatica in $8 billion deal
May 27, 2025
Digital leadership in a divided world: 2025 CIO and CTO priorities by region
May 27, 2025
Recent Posts
  • 피그마 CPO “개발자 업계의 변화, 디자이너도 겪게 될 것···AI는 곧 필수 도구”
  • LG CNS, 생성형 AI 기반 개발 체계 전면 도입…AX 프로젝트 생산성 혁신
  • 퓨어스토리지-SK하이닉스, 고성능·에너지 효율 데이터센터 솔루션 개발
  • New cybersecurity threats test the mettle of financial services CIOs
  • Salesforce to buy Informatica in $8 billion deal
Recent Comments
    Archives
    • May 2025
    • April 2025
    • March 2025
    • February 2025
    • January 2025
    • December 2024
    • November 2024
    • October 2024
    • September 2024
    • August 2024
    • July 2024
    • June 2024
    • May 2024
    • April 2024
    • March 2024
    • February 2024
    • January 2024
    • December 2023
    • November 2023
    • October 2023
    • September 2023
    • August 2023
    • July 2023
    • June 2023
    • May 2023
    • April 2023
    • March 2023
    • February 2023
    • January 2023
    • December 2022
    • November 2022
    • October 2022
    • September 2022
    • August 2022
    • July 2022
    • June 2022
    • May 2022
    • April 2022
    • March 2022
    • February 2022
    • January 2022
    • December 2021
    • November 2021
    • October 2021
    • September 2021
    • August 2021
    • July 2021
    • June 2021
    • May 2021
    • April 2021
    • March 2021
    • February 2021
    • January 2021
    • December 2020
    • November 2020
    • October 2020
    • September 2020
    • August 2020
    • July 2020
    • June 2020
    • May 2020
    • April 2020
    • January 2020
    • December 2019
    • November 2019
    • October 2019
    • September 2019
    • August 2019
    • July 2019
    • June 2019
    • May 2019
    • April 2019
    • March 2019
    • February 2019
    • January 2019
    • December 2018
    • November 2018
    • October 2018
    • September 2018
    • August 2018
    • July 2018
    • June 2018
    • May 2018
    • April 2018
    • March 2018
    • February 2018
    • January 2018
    • December 2017
    • November 2017
    • October 2017
    • September 2017
    • August 2017
    • July 2017
    • June 2017
    • May 2017
    • April 2017
    • March 2017
    • February 2017
    • January 2017
    Categories
    • News
    Meta
    • Log in
    • Entries feed
    • Comments feed
    • WordPress.org
    Tiatra LLC.

    Tiatra, LLC, based in the Washington, DC metropolitan area, proudly serves federal government agencies, organizations that work with the government and other commercial businesses and organizations. Tiatra specializes in a broad range of information technology (IT) development and management services incorporating solid engineering, attention to client needs, and meeting or exceeding any security parameters required. Our small yet innovative company is structured with a full complement of the necessary technical experts, working with hands-on management, to provide a high level of service and competitive pricing for your systems and engineering requirements.

    Find us on:

    FacebookTwitterLinkedin

    Submitclear

    Tiatra, LLC
    Copyright 2016. All rights reserved.