When voice deepfakes come calling

There was a time when a call center agent could be reasonably confident about who was at the other end of the line. And when they weren’t, multi-factor authentication (MFA), answers to security questions, and verbal passwords would resolve the doubt.

Those days are behind us, as deepfake audio and video are no longer just for spoofing celebrities. Voice deepfakes – in which a real person’s voice is cloned from recorded snippets – are one of the biggest risks facing modern businesses and their call centers.

Deepfake fraud attacks surged 3,000% last year, and unlike email phishing, audio and video deepfakes don’t come with red flags like spelling errors or strange links. A recent survey found that 86% of call centers are concerned about the risk of deepfakes, and 66% lack confidence that their organization could identify them.

How fraudsters use audio deepfakes

1. Navigating IVR

According to an analysis of call center deepfake attacks, a method favored by fraudsters is using voice deepfakes to move through IVR-based authentication.

In those attacks, fraudsters also had the answers to security questions and, in one instance, knew the account holder’s one-time password. Bots are often involved in this process: once a bot has achieved IVR authentication, it can obtain basic information, like the bank balance, to determine which accounts to mark for further targeting.
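
To make the weakness concrete, here is a minimal sketch in Python of the kind of knowledge-based check an IVR flow performs; the account data, field names, and functions are hypothetical, not any vendor’s actual API. Every factor it tests is something a bot armed with a cloned voice and stolen personal data can supply:

```python
# Minimal sketch of knowledge-based IVR authentication.
# Account data, field names, and functions are hypothetical.

ACCOUNTS = {
    "1234567890": {
        "pin": "4821",
        "security_answer": "maple street",
        "balance": 5_240.17,
    }
}

def authenticate_ivr(account_no: str, pin: str, security_answer: str) -> bool:
    """Pass if the caller knows the account number, PIN, and security answer.

    Nothing here verifies *who* is speaking -- a bot armed with stolen
    data and a cloned voice clears every check.
    """
    acct = ACCOUNTS.get(account_no)
    return (
        acct is not None
        and acct["pin"] == pin
        and acct["security_answer"] == security_answer.strip().lower()
    )

if __name__ == "__main__":
    # A scripted bot supplies stolen credentials, is authenticated, and
    # can then read out the balance to triage the account for targeting.
    if authenticate_ivr("1234567890", "4821", "Maple Street"):
        print("Authenticated. Balance:", ACCOUNTS["1234567890"]["balance"])
```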

2. Changing account or profile details

By cloning customers’ voices, scammers are able to dupe call center agents into changing the emails, home addresses, and phone numbers associated with their accounts, which then enables them to do everything from accessing customers’ one-time passwords to ordering new checks or cards.

This method of account takeover (ATO) is becoming more common as attackers attempt to bypass existing security measures. In a recent TransUnion survey of call center organizations, nearly two-thirds of financial industry respondents said that the majority of ATOs originate in the call center.

3. Social engineering attacks

Deepfakes significantly enhance the effectiveness of social engineering by making it much harder to distinguish bad actors from legitimate customers. A recent report found that fraudsters are not always trying to bypass authentication. Instead, they use a “basic synthetic voice to figure out IVR navigation and gather basic account information.” Once this is achieved, the threat actor calls using their own voice to social engineer the agent.

Contact centers that use video verification calls might assume they’re safe, but fraudsters can now stream live deepfake video feeds that are indistinguishable from the real thing. These gangs operate globally; case in point, Thailand’s Central Investigation Bureau (CIB) issued a warning about call center gangs using AI to create deepfake videos, cloning faces and voices with alarming accuracy.

Why are contact centers vulnerable?

Today’s deepfakes are so good that they’re virtually indistinguishable from reality. Generative AI advancements have made it shockingly simple to quickly and realistically emulate the tone and likeness of someone’s voice, often for free.

A quick Google search finds multiple sites offering “free AI voice cloning” in 60 seconds or less. All you need is a short recording of the person’s voice. In the case of OpenAI’s voice cloning tool, Voice Engine, just 15 seconds of audio is sufficient. And with so many homemade videos on social media, it’s not difficult to find a few seconds of someone’s voice online.

Contact center agents tend to believe they’re not a target, yet research indicates that call center attacks are rising. According to the TransUnion survey, 90% of financial industry respondents reported an increase in call center fraud attacks, with one in five reporting that attacks are up more than 80%.

Yet most contact centers lack effective tools to differentiate between fraudsters and real customers. Just as important, agents are often unaware of how realistic deepfakes can be.

Double jeopardy: fraudsters impersonating agents

Car dealership software provider CDK Global recently suffered two cyberattacks that shut down its systems and disrupted car dealerships, which rely on CDK’s software for everything from inventory to financing. In the wake of the breach, threat actors called CDK customers posing as CDK support agents to try to gain system access.

This sort of attack is a novel evolution of traditional vishing, like the classic “Microsoft support scam,” in which threat actors claiming to be from Microsoft support call customers and offer to “fix” nonexistent issues with their devices, often gaining access to the customer’s computer and personal data in the process.

How to protect against deepfakes

1. Education

There’s nothing like a human voice on the other end of the line; not only can an agent empathize with and calm anxious callers, but they are also better than bots at telling the difference between live, authentic human voices and deepfakes.

But to be effective, agents need to learn how to spot the signs of social engineering, such as attempts to create a false sense of urgency, and how to identify synthetic voices.

2. Process

When people hear about a deepfake attack, they sometimes call it a failure of process. Yet at the end of the day, processes are only as effective as the tools they use. Contact centers must implement strong caller verification processes built on tools that mitigate the risks of deepfake attacks and social engineering.

3. Going beyond status-quo approaches

Contact center agents aren’t cybersecurity experts, and they shouldn’t have to be. Education is important, but agents shouldn’t have to rely on their own ears to detect voice deepfakes. Contact centers need to equip agents with the best tools to do their job.

While AI-powered identity verification technologies can detect AI-generated voices, images, and videos in real time, companies cannot rely solely on AI to detect AI. That’s because deepfakes are now so good that many identity verification (IDV) tools are falling victim to them.

Over-reliance on MFA is also a mistake, as sending a passcode doesn’t tell you who’s on the other end of the phone. Calls can be intercepted, or the fraudster could be talking to the actual customer and the call center agent simultaneously, tricking the victim into handing over the one-time passcode.
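
A short, purely illustrative simulation (all class and function names are hypothetical) shows why a correct passcode proves that someone, somewhere has the customer’s phone, not that the caller on the agent’s line is the customer:

```python
# Illustrative simulation of a one-time-passcode (OTP) relay attack.
# All names are hypothetical; no real telecom or banking APIs involved.
import secrets

class OtpService:
    def send_code(self, phone: str) -> str:
        # In reality this would be an SMS; here we just return the code.
        self.code = f"{secrets.randbelow(1_000_000):06d}"
        print(f"SMS to {phone}: your code is {self.code}")
        return self.code

    def verify(self, code: str) -> bool:
        return code == self.code

otp = OtpService()

# 1. The fraudster (with a deepfaked voice) calls the agent; the agent
#    sends an OTP to the number on file -- the *real* customer's phone.
code = otp.send_code("+1-202-555-0100")

# 2. On a parallel call, the fraudster poses as the bank's fraud team
#    and tricks the real customer into reading the code back.
relayed_code = code  # the victim hands it over

# 3. The fraudster repeats it to the agent. Verification succeeds, yet
#    it says nothing about who is actually on the agent's line.
print("Agent sees:", "verified" if otp.verify(relayed_code) else "rejected")
```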

Similarly, placing too much trust in voice biometrics (VB) can leave you vulnerable. While VB providers are working hard to add liveness checks and deepfake detection into their products, the fight against deepfakes is an “AI arms race” that, in many cases, the attackers are winning.
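
For intuition, here is a naive liveness check sketched in Python, with a hypothetical transcribe_audio() standing in for a speech-to-text call. A random challenge phrase defeats pre-recorded clips, and a response deadline defeats slow, stitched-together audio, but real-time voice conversion can pass both, which is the arms race in miniature:

```python
# Naive liveness check: a random challenge phrase plus a response deadline.
# transcribe_audio() is a hypothetical stand-in for a speech-to-text call.
import secrets
import time

WORDS = ["amber", "river", "falcon", "copper", "meadow", "signal"]

def transcribe_audio() -> str:
    """Hypothetical: returns the caller's spoken response as text."""
    return input("Caller says: ")

def liveness_challenge(timeout_s: float = 5.0) -> bool:
    phrase = " ".join(secrets.choice(WORDS) for _ in range(3))
    print(f"Please repeat: {phrase}")
    start = time.monotonic()
    response = transcribe_audio()
    # Pre-recorded clips fail the random phrase; stitched-together audio
    # misses the deadline. Real-time voice conversion can pass both,
    # which is why this check alone is insufficient.
    return (
        response.strip().lower() == phrase
        and time.monotonic() - start < timeout_s
    )
```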

Instead, organizations should look for an approach to IDV that stops deepfakes before they can even be used. TransUnion’s report emphasized the importance of stopping bad actors before they reach the call center or IVR system, with 70% of all survey respondents and nearly 67% of financial industry respondents agreeing that caller authentication should start prior to any contact with the call center agent.

What’s needed is advanced cybersecurity technology that incorporates mobile cryptography, machine learning, and advanced biometric recognition alongside AI. This combination of tools can serve as a “surround sound” approach to call center security, strengthening agents’ guard against deepfakes by preventing impersonators from authenticating in the first place.
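
As one hedged illustration of the mobile cryptography piece, the sketch below (Python with the cryptography package; the enrollment and routing flow is an assumption, not any vendor’s product) has the customer’s banking app sign a server challenge with a device-bound private key before the call is ever routed to an agent, so a cloned voice alone is useless without the enrolled device:

```python
# Sketch of device-bound, pre-call authentication via a signed challenge.
# Requires the 'cryptography' package; the flow and names are hypothetical.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Enrollment: the banking app generates a keypair; the private key stays
# in the phone's secure hardware, the public key is registered with the bank.
device_key = Ed25519PrivateKey.generate()
registered_public_key = device_key.public_key()

# Pre-call: before routing to an agent, the server issues a random nonce...
challenge = os.urandom(32)

# ...and the app on the enrolled device signs it.
signature = device_key.sign(challenge)

# The server verifies against the key registered at enrollment.
try:
    registered_public_key.verify(signature, challenge)
    print("Device authenticated; route the call to an agent.")
except InvalidSignature:
    print("Unverified caller; divert to step-up verification.")
```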

Given the reliance on call centers for so much of today’s customer service, it is imperative that companies prioritize the adoption of advanced cybersecurity tools and technologies sooner rather than later to protect consumers, their business, and their reputation.

