This isn’t just a few individual bad actors; it’s a sophisticated, industrial-scale, state-sponsored threat that’s been simmering for the last two years and has now reached a full boil. Like the proverbial frog in a slowly heating pot, many organizations failed to notice the rising temperature and are only now waking up to the potentially catastrophic risks of inaction. A 2024 Securonix survey found that concern about “malicious insiders” rose from 60% in 2019 to 74% in 2024, and 90% of companies believe insider attacks are “equally or more challenging to detect than external attacks.” The reason for this increase is clear when we look at a timeline of North Korean IT worker threat activity, which has been rapidly accelerating:
- May 2022: The FBI warns of attempts by North Korean IT workers to gain employment by posing as non-North Korean nationals, with the goal of funding North Korea’s weapons development.
- October 2023: Additional FBI guidance cites red flags for deepfake job candidates, such as an unwillingness to appear on camera and social media profiles that don’t match the person’s resume.
- May 2024: The Department of Justice announces arrests of US and foreign facilitators aiding North Korea in a scheme to breach Fortune 500 companies using stolen American identities, including a “top-5 national television network” and a “premier Silicon Valley technology company.”
- June 2024: The Wall Street Journal interviews CEOs about bad actors using deepfakes to get hired into cybersecurity positions. One executive reports having stopped “over 50 candidates that were North Korean spies.”
- August 2024: Security firm KnowBe4 reveals that it unknowingly hired a North Korean spy. The threat actor used a deepfake profile photo and stolen identity data to impersonate a US citizen, and was discovered only after attempting to load malware onto their company-issued laptop.
The AI interview whisperer
According to Capterra’s 2024 Job Seeker AI Survey of 3,000 job seekers in 12 countries, 58% say they’re using AI in their current job search and 83% admit to using AI to “exaggerate or lie about their skills.” And more than one in four have used AI to generate interview answers. If legitimate job candidates are using these tools, it’s not hard to see their appeal for criminals.
For fraudsters, generative AI (genAI) is a free superpower. Tools such as Interview Copilot and Sensei AI not only help candidates prepare through mock interviews, but can also generate tailored answers to interview questions in real time during a live call. Many of these tools offer free versions or trials, with personalized answers, real-time speech recognition, and instant translation of the interviewer’s language. Some even claim to be undetectable by interviewers and to offer a hands-free experience.
Once they ace an interview using genAI, threat actors then use widely available deepfake tools to create fake ID documents and profile photos, often built on the personal information of real US citizens. In KnowBe4’s case, the attacker created a deepfake profile photo based on a stock image. By combining this deepfake with the stolen personal information of a real US citizen, the threat actor got past the company’s hiring and background-check procedures, including four video interviews and visual confirmation of their identity.
Enhance FBI guidance for effective prevention
Clearly, existing hiring security procedures are insufficient to handle this threat. Some companies are experimenting with other methods, but even these fall short. For example, one executive interviewed by the Wall Street Journal said that they ask candidates to hold a photo ID up to the camera and confirm that it matches their face. But deepfake counterfeit IDs, which are now capable of beating Know Your Customer (KYC) software, are more than good enough to pass as real on a grainy video call.
The FBI continues to evolve its guidance on North Korean IT workers. Interestingly, parts of it closely echo guidance issued against deepfake and social-engineering threats in the healthcare industry. Organizations looking to implement guidance from the FBI, the DOJ, and the Department of Health and Human Services can go a step further by using automated identity verification tools to strengthen visual identity checks.
How to avoid hiring North Korean spies
To effectively address this threat, enterprises need to combine awareness training with robust procedures and strong identity verification tools that are themselves resilient against deepfakes. Here are three specific measures that CIOs can take to protect their business.
- First, be alert for signs that a candidate might be using a genAI interview aid. For example, watch for a candidate who glances to the other side of their screen before answering a question, or who takes a little too long to respond before giving suspiciously vague or repetitive answers. If you’re suspicious, try asking a simple personal (but not invasive) question that genAI would struggle with, such as, “What do you do for fun?”
- Second, implement strong identity verification (IDV) at new user account provisioning. No employee should be able to create their first password until you know for certain that they are who they claim to be. Look for an IDV system that uses factors which can’t be phished or fooled by deepfakes. Some solutions automate this process entirely, which not only boosts security but can also save thousands of hours annually for IT and HR teams (a rough sketch of such a provisioning gate follows this list).
- Finally, consider reverifying your existing employees. Use a method that’s scalable, automated, and above all, trustworthy. While many verification factors sound secure, simply sending a passcode to someone’s email or phone is insufficient: threat actors who are already inside your systems have enrolled phone numbers and MFA devices they control.
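To make the second measure concrete, here is a minimal sketch in Python of a provisioning gate that refuses to issue a first credential until identity verification passes. Everything here — the IdvStatus values, the check_idv stub, and the provision_first_credential function — is a hypothetical illustration, not any real vendor’s API; in practice, check_idv would call your IDV provider’s SDK, and the gate would trigger phishing-resistant credential enrollment (such as FIDO2 passkeys) rather than hand back a token.

```python
"""Sketch: gate first-credential issuance on identity verification (IDV).

All names and statuses below are hypothetical placeholders; substitute
your IDV vendor's real SDK or API in check_idv().
"""

from dataclasses import dataclass
from enum import Enum
import secrets


class IdvStatus(Enum):
    VERIFIED = "verified"  # government ID matched a live selfie
    FAILED = "failed"      # document or liveness check failed
    PENDING = "pending"    # verification not yet completed


@dataclass
class NewHire:
    employee_id: str
    legal_name: str
    idv_status: IdvStatus = IdvStatus.PENDING


def check_idv(hire: NewHire) -> IdvStatus:
    """Placeholder for a call to a deepfake-resistant IDV provider
    (document check plus liveness detection). Hypothetical stub."""
    return hire.idv_status


def provision_first_credential(hire: NewHire) -> str:
    """Issue an initial one-time enrollment secret ONLY after IDV passes.

    Raises PermissionError for unverified hires, so there is no code
    path that provisions an account before identity is confirmed.
    """
    if check_idv(hire) is not IdvStatus.VERIFIED:
        raise PermissionError(
            f"IDV not passed for {hire.employee_id}; refusing to provision."
        )
    # In production, trigger passkey/FIDO2 enrollment here instead of
    # returning a shared secret.
    return secrets.token_urlsafe(24)


if __name__ == "__main__":
    hire = NewHire("E1001", "Jane Doe", idv_status=IdvStatus.VERIFIED)
    print("Enrollment token:", provision_first_credential(hire))
```

The design point is ordering: credential issuance is made a function of verified identity, so a stolen resume and a convincing video call alone never yield a working account.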
As AI becomes more sophisticated, threat actors will continue faking their way through job vetting, and HR teams can’t go it alone. Don’t wait until it’s too late: organizations must take proactive steps to stop threat actors armed with genAI, deepfakes, and stolen identities, so that spies never even get through the virtual front door. By implementing stronger identity verification measures in employee recruitment and onboarding, IT and security leaders can not only prevent breaches but also save significant time and money for their teams and business.