A finance manager joins a video call with familiar faces: the company’s CFO and her colleagues. They’ve been summoned by email for a confidential “M&A discussion” requiring bank transfers before the close of business. The meeting seems plausible enough, but everyone aside from the manager is an AI-generated deepfake. Their voices, gestures and likenesses were synthesized from publicly available videos, and the email that summoned the manager evaded every layer of detection.
This isn’t far-fetched; it happened to engineering firm Arup in 2024, when deepfake impersonation cost the company $25 million.
The availability of platforms that enable deepfakes-as-a-service demands that organizations adopt an AI-aware security posture: one that assumes AI-enhanced techniques are probing traditional security systems and testing the limits of human trust, escalating the AI arms race.
The result is a paradox for organizations: no clear long-term gains and a short-lived home-field advantage. The race is accelerating in both velocity and sophistication to the point that soon no human will be able to follow or audit AI attack/defense response cycles.
The threat of AI is here
At the start of this year, approximately two-thirds of global companies surveyed by the World Economic Forum said they anticipate AI will have the “most significant impact” on cybersecurity.
Even as AI revolutionizes enterprise cybersecurity defenses, threats continue to evolve in sophistication and complexity. Unlike traditional malware, which may find its way into networks through a compromised software update or a malicious download, AI-powered threats use machine learning to analyze how employees authenticate to networks: when they log in, from which devices, their typing patterns and even their mouse movements. The AI learns to mimic legitimate behavior while collecting login credentials and is ultimately deployed to evade basic detection.
The ante is raised with the deployment of text-to-video apps that can manipulate video streams, generate AI videos and clone voices and human likenesses. Real-world incidents highlight a critical gap: Building more resilient security requires additional layers to tilt the field toward defenders. Ideally, the additional layer is:
- Additive and based on a completely different approach (little overlap with existing defenses).
- Immune to AI’s strengths, such as learning, manipulation and emulation.
- Extremely efficient: fast, computationally cheap and authoritative.
Fortunately, such a mechanism already exists. Unfortunately, it is often overlooked or misunderstood. My thesis is that authentication, especially one based on open standards and tied to fundamental internet infrastructure, is a proven and effective defensive layer that helps address the AI challenge.
The changing battlefield: Hyperrealism by the numbers
The AI threat is already materializing. One only needs to try OpenAI’s Sora 2 to grasp the coming wave of hyperrealistic spoofs that are making it nearly impossible to distinguish the real from the fake.
In the first five months of 2025 alone, there was a 1,265% jump in AI-powered phishing attacks, according to DeepStrike. Microsoft’s 2025 Digital Defense Report indicates that AI-powered phishing emails achieved a 54% click-through rate, compared to 12% for traditional phishing. Deepfakes, like voice cloning and video impersonation, are doubling in frequency every six months. And in a recent Darktrace survey, 78% of CISOs said AI-powered cyberthreats are significantly affecting their organizations.
Beyond the statistics, AI’s effectiveness is driven by its rapidly improving ability to socially engineer humans: replicating writing style, voice cadence, facial expressions or speech with subtle nuance and adding realistic context by scanning social media and other publicly available references.
The data is striking, and it reflects the crucial need for a multi-layer approach to counter AI’s escalating ability to trick humans.
Here’s how a layered authentication strategy can change the outcome of an AI-powered attack:
- At the infrastructure level, DNS-based protocols verify that communications are actually coming from legitimate sources, operating on cryptographic principles rather than pattern recognition. Critically, this sidesteps hyperrealistic AI attacks.
- At the access level, security tokens, combined with biometric confirmation, create physical barriers.
- AI-powered behavioral analytics flag anomalies, like unusual location, access time or device.
Machine learning cannot forge DNS records for domains it doesn’t control, summon physical tokens or replicate fingerprints — at least not yet.
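To make the infrastructure-level point concrete, here is a minimal, stdlib-only sketch of how a receiving mail server applies a DMARC policy. The record string, domain and report address are illustrative; a real implementation fetches the record from DNS as a TXT record at `_dmarc.<domain>` and evaluates SPF/DKIM alignment per RFC 7489.

```python
def parse_dmarc(record: str) -> dict:
    """Parse a DMARC TXT record into tag/value pairs."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags


def disposition(record: str, spf_pass: bool, dkim_pass: bool) -> str:
    """Decide how to treat a message given aligned SPF/DKIM results.

    DMARC passes if either aligned SPF or aligned DKIM passes;
    otherwise the published policy (p=) applies: none, quarantine
    or reject.
    """
    tags = parse_dmarc(record)
    if spf_pass or dkim_pass:
        return "deliver"
    return {"none": "deliver",
            "quarantine": "quarantine",
            "reject": "reject"}.get(tags.get("p", "none"), "deliver")


# Illustrative record for a hypothetical domain
record = "v=DMARC1; p=reject; rua=mailto:dmarc-reports@example.com"
print(disposition(record, spf_pass=False, dkim_pass=False))  # reject
```

The key property for defenders: the decision rests on cryptographic and DNS-published facts the attacker’s domain cannot forge, not on how convincing the message content looks.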
Authentication: A critical defense layer
As adversaries leverage AI for advanced phishing campaigns, deepfake attacks and automated vulnerability exploitation, various forms of authentication have evolved from best practices to strategic imperatives.
The numbers tell a compelling story: more than 99.9% of compromised accounts lack multi-factor authentication. When MFA is enabled, 96% of bulk phishing attempts and 76% of targeted attacks are deterred.
Yet despite authentication’s effectiveness, adoption is uneven. Okta’s global workforce data indicates that approximately two-thirds of organizations worldwide deploy MFA, and that government organizations have a 55% adoption rate. Meanwhile, a Cyber Readiness Institute report found that just 35% of global SMBs, and only 27% to 34% of businesses with fewer than 100 employees, use MFA. These adoption gaps create significant supply chain vulnerabilities that attackers can exploit at scale.
Authentication vendors need to make deployment of their services easier, more intuitive and more seamless. And MFA needs to be a requirement across the board, especially for executives, since they are the most targeted and can cause the most damage when breached.
But support must come from the top. When board and CEO-level executives actively champion MFA education and adoption, they give CIO/CISO/InfoSec teams the authority and organizational momentum needed to enforce it successfully.
Today, most organizations that roll out MFA broadly rely on established methods, adding verification layers to traditional password-based authentication:
- Time-based one-time passwords (TOTPs), which are generated through authenticator apps like Google Authenticator or Microsoft Authenticator and expire every 30 seconds. Software-generated codes eliminate the vulnerabilities of static credentials while remaining cost-effective and easy to deploy across large user bases.
- Biometric authentication, like facial recognition, one-touch fingerprint scanning or voice recognition, provides unique identifiers tied to individuals. Multi-modal biometric approaches offer stronger protection, particularly against AI-generated deepfakes. Passkeys can be used with biometrics, adding several additional layers of AI-resistant security.
- Push notifications send approval requests to registered devices, allowing users to confirm or deny authentication attempts with a single tap. While this method offers better usability than typing codes, it can be vulnerable to “prompt bombing” (MFA fatigue) attacks that aim to overwhelm the target with requests until one is approved.
- SMS-based codes, though similar in delivery to notifications, remain common despite known vulnerabilities such as SIM swapping and SMS interception.
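Of the methods above, TOTP is the most transparent to reason about. A minimal sketch of RFC 6238, the algorithm behind apps like Google Authenticator, shows why the codes are short-lived: the HMAC input is the current 30-second time window, so a stolen code expires almost immediately.

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32: str, at=None, step: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password.

    secret_b32 is the base32 shared secret an authenticator app
    stores when the enrollment QR code is scanned.
    """
    key = base64.b32decode(secret_b32, casefold=True)
    # Counter = number of time steps since the Unix epoch
    counter = int((time.time() if at is None else at) // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


# RFC 6238 reference secret ("12345678901234567890" in base32)
SECRET = "GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ"
print(totp(SECRET, at=59))  # RFC 6238 test vector: 287082
```

Because the server computes the same HMAC independently, nothing secret crosses the network at login time; only the six-digit proof does, and it is worthless 30 seconds later.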
By providing multiple fast and frictionless authentication layers that exist outside the digital realm where AI operates, networks become more resistant to phishing, session hijacking and man-in-the-middle attacks.
The next generation of MFA
Passkeys, built on the mature FIDO2 and WebAuthn standards, are emerging as the next evolution of defense, addressing a critical gap in current authentication methods.
Just as DNS-based protocols like DMARC establish trusted identity at the infrastructure level, FIDO2-based passkeys establish cryptographic trust at the user authentication level. By making the authentication mechanism itself incapable of working with fraudulent domains, passkeys offer a fundamental shift in authentication security. Passkey use can itself be gated by biometrics (e.g., Touch ID or Face ID on Apple devices, or Google’s face or fingerprint unlock on Android devices).
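The origin-binding property can be illustrated with a toy, stdlib-only model. Real WebAuthn uses per-site asymmetric key pairs and signs the server’s challenge together with the page origin; here an HMAC key stands in for the private key, and all domain names are hypothetical. The property demonstrated is the same: a credential is scoped to the domain it was registered on, so a look-alike phishing domain can never obtain a valid assertion.

```python
import hashlib
import hmac
import os


class Authenticator:
    """Toy model of a passkey's origin binding (WebAuthn concept).

    An HMAC key per relying party stands in for the real per-site
    asymmetric key pair, to keep the sketch stdlib-only.
    """

    def __init__(self):
        self._keys = {}  # relying-party ID -> secret key

    def register(self, rp_id: str) -> None:
        """Enroll a credential scoped to exactly one domain."""
        self._keys[rp_id] = os.urandom(32)

    def assert_login(self, origin: str, challenge: bytes) -> bytes:
        # The browser supplies the *actual* page origin; neither the
        # user nor the page can override it. No credential for this
        # origin means no assertion, no matter how convincing the page.
        if origin not in self._keys:
            raise ValueError(f"no passkey registered for {origin}")
        return hmac.new(self._keys[origin], challenge, hashlib.sha256).digest()


auth = Authenticator()
auth.register("bank.example")

challenge = os.urandom(16)
assertion = auth.assert_login("bank.example", challenge)  # succeeds

try:
    auth.assert_login("bank-example.login.attacker.test", challenge)
except ValueError as err:
    print("phishing blocked:", err)
```

This is why passkeys resist even pixel-perfect deepfake lures: the check happens below the level of human judgment, in the browser and authenticator, where the spoofed domain simply has no key.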
Cryptographic protection complements biometric authentication: biometrics verify “Is this the right person?” at the device level, while passkeys verify “Is this the right website or service?” at the network level. Multi-modal biometrics, such as facial recognition plus fingerprint scanning, or biometrics plus behavioral patterns, further strengthen this approach.
As AI-powered attacks make credential theft and impersonation more sophisticated, the only sustainable line of defense is a form of authentication that cannot be tricked because it is cryptographically verified. With major platforms including Apple, Google, Microsoft and GitHub already supporting passkeys, this technology is quickly evolving from emerging to essential.
Balancing AI innovation with authentication modernization
The real opportunity is not choosing between AI-powered defenses and robust authentication; it is recognizing that non-AI authentication can fundamentally shift the security equation in favor of defenders. With average breach costs at $4.44 million, according to IBM’s 2025 Cost of a Data Breach Report, the path forward requires balancing both imperatives.
Success belongs to enterprises that recognize these technologies have fundamentally different roles: AI for detection, adaptation and response speed; non-AI authentication for definitive access control that cannot be algorithmically defeated.
But to truly change the equation, organizations must prioritize authentication modernization methods that are grounded in non-AI principles and open standards, even as they embrace AI-driven security innovations.
This article is published as part of the Foundry Expert Contributor Network.

