The AI arms race
For years, cybersecurity has been a game of unequal effort. Attackers needed skill, persistence, and weeks of reconnaissance. Defenders, in turn, built static fortresses: firewalls, antivirus software, and long patch cycles.
That era is over.
Just as your organization is deploying AI to enhance defenses and drive efficiency, your adversaries are using the same technology to launch attacks that are faster, more personalized, and harder to detect. The attacker’s advantage, once measured in days, is now measured in hours or minutes, compressing the window of time you have to detect and respond to threats. We are no longer facing lone human hackers; we are facing autonomous, AI-powered adversaries.
Here are the three primary shifts we are seeing in the threat landscape, and why your static defenses are now obsolete.
Threat 1: Hyper-personalized phishing & social engineering
Social engineering used to be the weak link in the defensive chain. Now, Generative AI has weaponized it.
The threat
Generative AI can craft highly convincing, contextually aware phishing emails at massive scale. Instead of a generic “Your account has been suspended” email, an attacker’s agent can write a message that references a real business deal, mimics a colleague’s known writing style, and is timed to coincide with a real business event. That level of customization makes the lure far more convincing and far harder for recipients to recognize as fraudulent.
The impact
This makes it almost impossible for humans to spot fraudulent emails, dramatically increasing the success rate of attacks. Recent research has shown that AI-powered phishing outperforms elite human red teams across all user skill levels, with the AI’s success rate relative to humans improving by more than 55% between 2023 and 2025 in one study.
Case in point: Deepfake CEO fraud
In 2020, criminals used an AI-generated voice clone of a CEO to instruct a manager at a multinational firm to wire a six-figure sum to a fraudulent account. The realism of the voice, coupled with the contextual urgency, successfully bypassed human skepticism, demonstrating how deepfakes and AI-driven impersonation have become an effective layer in social engineering campaigns.
Threat 2: AI-generated polymorphic malware
For years, security teams relied on signature-based detection, cataloging known threats and flagging matching patterns. The AI adversary is rewriting the rules of code itself.
The threat
Autonomous AI agents can create millions of unique malware variants in a matter of hours, making traditional signature-based detection systems obsolete. This polymorphic malware continuously changes its signature, file name, and encryption keys to stay undetected. With a simple prompt, a threat actor can generate working malicious payloads that are syntactically correct and highly obfuscated, defeating many traditional defenses that rely on known signatures.
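To see why signature matching collapses against polymorphic code, consider that a classic file signature is often just a cryptographic hash of the payload. A minimal, benign sketch (the payload strings here are illustrative placeholders, not real malware):

```python
import hashlib

def signature(payload: bytes) -> str:
    """A classic file 'signature' is often just a cryptographic hash."""
    return hashlib.sha256(payload).hexdigest()

# Two variants of the same hypothetical payload, differing by a single
# junk byte -- the kind of change a mutation engine makes automatically.
variant_a = b"payload(); nop;"
variant_b = b"payload(); nop;\x90"

sig_a = signature(variant_a)
sig_b = signature(variant_b)

# A one-byte mutation yields a completely different hash, so a
# blocklist of known signatures never matches the new variant.
print(sig_a == sig_b)  # False
```

An attacker generating thousands of such variants per hour makes every one of them "unknown" to a hash-based blocklist, which is exactly why the defense must score behavior rather than bytes.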
The impact
This creates a moving target that static security tools cannot track, rendering traditional antivirus largely ineffective. Defenders must shift from signature matching to behavioral analysis and AI-based detection of their own.
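Behavioral detection scores what code does rather than what it is. A toy sketch of the idea, using hypothetical telemetry (files touched per minute by one process) and a simple standard-deviation threshold standing in for a real ML model:

```python
from statistics import mean, stdev

def is_anomalous(baseline: list[float], observed: float, k: float = 3.0) -> bool:
    """Flag `observed` if it deviates more than k standard deviations
    from the historical baseline -- a toy stand-in for behavioral
    detection, which scores actions rather than file signatures."""
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(observed - mu) > k * sigma

# Hypothetical per-minute file-write counts for a normal process.
history = [4, 6, 5, 7, 5, 6, 4, 5]
print(is_anomalous(history, 6))    # False: within the normal range
print(is_anomalous(history, 480))  # True: a ransomware-like burst
```

No matter how the malware mutates its bytes, a burst of encryption-like file activity still stands out against the baseline.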
Case in point: Real-world trends & research
Recent advances have enabled adversaries to develop AI models that modify malware behavior in subtle ways, allowing it to bypass established AI/ML-based detection systems such as Microsoft Defender or CrowdStrike. Compounding this challenge, modern malware can now leverage generative AI to dynamically create unique code paths, filenames, and API calls at runtime, making traditional signature and behavior-based defenses increasingly ineffective.
Threat 3: The automated reconnaissance agent
Every attack starts with reconnaissance: gathering information about systems, users, and infrastructure. Traditionally, this required a human attacker. Now, AI agents automate the entire first phase of the kill chain, carrying out reconnaissance at scale and with alarming precision.
The threat
An attacker’s agent can autonomously crawl the internet, scan your organization’s attack surface, and identify vulnerabilities to exploit—all without direct human interaction. This involves enumerating external assets, scraping public records, identifying exposed ports, and mapping technologies, sometimes adapting mid-operation based on what it finds.
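The primitive underneath all of this is trivially automatable. A minimal sketch of a TCP port check, the kind of building block a reconnaissance agent chains and adapts at scale (run it only against infrastructure you own or are authorized to test):

```python
import socket

def open_ports(host: str, ports: list[int], timeout: float = 0.5) -> list[int]:
    """Return the subset of `ports` accepting TCP connections on `host`.

    A real reconnaissance agent layers service fingerprinting, banner
    grabbing, and adaptive follow-up scans on top of this primitive.
    """
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means connect succeeded
                found.append(port)
    return found

# Example: check a few common service ports on your own machine.
print(open_ports("127.0.0.1", [22, 80, 443, 8080]))
```

An agent that runs loops like this across an entire external attack surface, then feeds the results back into an LLM to decide what to probe next, needs no human in the loop.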
The impact
This lowers the barrier to entry, allowing even less skilled attackers to launch sophisticated, targeted campaigns against high-value targets. Autonomous reconnaissance agents mean your systems can be mapped, indexed, and profiled faster than your team can respond. If you only defend against human operators, you are not defending against what is really watching.
Case in point: AI agent vulnerability
According to GBHackers, Salesforce experienced a critical AI security incident in 2025 where its Agentforce AI was successfully targeted by indirect prompt injection. Attackers were able to compromise the autonomous agent via ordinary data submissions, forcing it to carry out unauthorized commands and creating a risk of mass customer data exfiltration. This incident highlighted the vulnerability of deployed AI agents as a primary attack surface. Salesforce responded by patching the system and reinforcing its security posture by adding Trusted URLs Enforcement to all of its AI platforms to prevent data from being sent to untrusted destinations.
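The exact policy format behind Salesforce’s Trusted URLs Enforcement is not public, but the general idea, an egress allowlist checked on the parsed hostname before an agent sends data anywhere, can be sketched as follows (the hostnames here are hypothetical):

```python
from urllib.parse import urlparse

# Hypothetical allowlist of destinations the agent may send data to.
TRUSTED_HOSTS = {"api.example-crm.com", "internal.example.com"}

def is_trusted(url: str) -> bool:
    """Allow only https URLs whose exact hostname is on the allowlist.

    Checking the parsed hostname (not a substring of the raw URL)
    blocks lookalikes such as https://internal.example.com.evil.net/.
    """
    parsed = urlparse(url)
    return parsed.scheme == "https" and parsed.hostname in TRUSTED_HOSTS

print(is_trusted("https://api.example-crm.com/upload"))          # True
print(is_trusted("https://internal.example.com.evil.net/leak"))  # False
print(is_trusted("http://api.example-crm.com/upload"))           # False
```

Even if a prompt injection convinces the agent to exfiltrate data, an egress check like this refuses to deliver it to an attacker-controlled destination.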
The mandate for proactive defense
The rise of the AI-powered adversary mandates a fundamental, strategic shift in your security posture. You can no longer build a static fortress. You must build a defense that is as dynamic and intelligent as the threats you face.
Your three-pronged strategy must be:
- AI-powered threat intelligence. You must use AI to detect and analyze adversary AI campaigns, focusing on predicting the next move instead of simply reacting to the last one.
- Continuous vulnerability management (CVM). Deploy your own AI agents to continuously scan your network for weaknesses. The best way to outpace an automated attacker is to autonomously remove the vulnerabilities they would exploit.
- Security for AI. The most robust defense against an AI-powered adversary is to have a secure and robust AI defense of your own, built on strong governance and Zero Trust principles for your own agents.
Your call to action
Your security roadmap needs a hard refresh. If you are still relying primarily on human-centric detection and static signatures, you are already behind.
Take the first step today:
- Schedule an emergency session with your security and risk teams to quantify the financial exposure from AI-enhanced social engineering and polymorphic malware.
The future of cyber defense is machine vs. machine.
Are you ready for the fight?
This article is published as part of the Foundry Expert Contributor Network.

