In London, becoming a licensed cab driver used to require passing an exam called “The Knowledge.” Candidates spent three to four years memorizing 25,000 streets, 100,000 landmarks and thousands of optimal routes. Neuroscience researchers at University College London found that cabbies who passed had measurably enlarged hippocampi from the cognitive load.
GPS made the entire achievement irrelevant in a single software update. Not gradually. Not partially. A driver on their first day with a nav app could match a cabbie who had studied for four years. The skill did not get cheaper. It stopped mattering.
That same structural collapse just happened to cyberattack expertise.
The skill floor fell through the floor
For two decades, the most dangerous attack techniques were gated by skill and time. Adversary-in-the-middle phishing, polymorphic malware, living-off-the-land scripting, autonomous exploit development — nation-state groups ran these operations because they alone had practitioners who could execute them.
AI removed the gate the same way GPS did: it never taught anyone cartography. It made cartography optional.
IBM X-Force quantified one dimension: AI generates convincing phishing lures in five minutes versus sixteen hours for an experienced human operator. That’s a 192x reduction in time cost for a single task. Multiply it across reconnaissance, lure generation, payload evasion and exploit development, and you get a capability transfer from specialized actors to anyone motivated enough to open a Telegram channel. CrowdStrike’s 2026 Global Threat Report documented the result: An 89% year-over-year surge in AI-augmented attacks, alongside a 29-minute average eCrime breakout time — 65% faster than 2024.
Three techniques show how complete the collapse was.
Adversary-in-the-middle phishing once required an operator who understood reverse proxy architecture, SSL certificate management and session token mechanics. Platforms like Tycoon 2FA packaged all of that into a browser dashboard with tiered pricing and customer support. The required skill dropped to “credit card and intent.” The result: 40,000 AiTM incidents daily across Microsoft environments, and 84% of compromised accounts had MFA enabled. The authentication was genuine. The session token was stolen after it succeeded.
AI spear phishing once required a skilled analyst spending two to four hours per target. AI automated the entire pipeline — LinkedIn scraping, lure generation, style-matching — producing messages with zero grammatical errors that reference real projects and mimic specific colleagues. A 2025 campaign targeted 800 accounting firms simultaneously with emails referencing each firm’s specific state registration details and hit a 27% click rate. Running 800 firm-specific, research-backed campaigns at once was previously not operationally feasible below nation-state level.
Autonomous exploit development may be the starkest case. Anthropic’s Mythos model demonstrated fully autonomous discovery and exploitation of unknown vulnerabilities — independently finding a 17-year-old remote code execution flaw in FreeBSD’s NFS server that human researchers had missed for years. Cost: under $20,000. That replaced months of nation-state research effort.
Eight major attack categories show the same pattern across 2025 and 2026 data. The skill that gated each attack stopped being required.
The auto-tune problem
Auto-tune didn’t make singers cheaper to hire. It made pitch control irrelevant. A tone-deaf performer with the plugin produces the same output as a conservatory graduate. The listener cannot tell the difference.
That’s the detection problem in one sentence.
Traditional defenses work by finding a signal: A known malicious hash, a grammar error in the lure, a failed authentication attempt. AI lets attackers strip those signals out. AiTM removes failed logins. AI-generated lures remove grammatical errors. Polymorphic malware removes stable code signatures. Automated reconnaissance removes advance warning entirely — it runs in public data sources the target cannot monitor.
The attack that succeeds now is the one designed to look completely normal. Pattern-matching fails when the patterns have been intentionally removed.
The architecture was built for a world that no longer exists
The defense stack most organizations run rests on three assumptions that held for two decades and are now false.
First, that sophisticated attacks are rare. They’re not — volume now scales to commodity levels. Second, that attacks contain detectable quality signals. They don’t — the absence of awkward phrasing or mismatched domains isn’t exculpatory. It’s the attack working as designed. Third, that human investigation speed is fast enough. A 29-minute breakout time and a 21-second average time-to-click leave no margin for a 15-minute triage cycle.
These weren’t bad assumptions when architects made them. But the architecture built on top of them doesn’t degrade gracefully when they fail. It fails structurally.
What still works — and why
The controls that survive share one trait: They depend on properties attackers cannot strip from the signal.
FIDO2 security keys bind authentication cryptographically to the legitimate origin domain. When an AiTM proxy intercepts the flow, the challenge comes from the proxy’s domain. The key refuses to sign. No AI-generated polish changes the domain mismatch at the cryptographic layer. Deploy it for all privileged accounts and disable fallback to phishable MFA methods — Proofpoint has already documented FIDO2 downgrade attacks in Microsoft Entra.
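The origin binding described above can be sketched in a few lines. This is a hypothetical, simplified fragment of the relying party's verification step, not the full WebAuthn ceremony: the browser embeds the origin it actually connected to inside the signed clientDataJSON, so a proxy's look-alike domain fails the check no matter how polished the phishing page is. The domain names are illustrative.

```python
import json

# The relying party's real login origin (illustrative value).
EXPECTED_ORIGIN = "https://login.example.com"

def verify_client_data(client_data_json: bytes) -> bool:
    """Simplified origin check from WebAuthn assertion verification.

    The authenticator signs over clientDataJSON, and the browser -- not
    the page -- fills in the origin field. An AiTM proxy serves its page
    from its own domain, so the signed origin mismatches and the
    assertion is rejected before any session exists to steal.
    (Real verification also checks the challenge, signature and RP ID.)
    """
    client_data = json.loads(client_data_json)
    if client_data.get("type") != "webauthn.get":
        return False
    return client_data.get("origin") == EXPECTED_ORIGIN

# Legitimate login: the browser was on the real domain.
legit = json.dumps({
    "type": "webauthn.get",
    "origin": "https://login.example.com",
}).encode()

# AiTM flow: the browser was on the proxy's look-alike domain.
proxied = json.dumps({
    "type": "webauthn.get",
    "origin": "https://login-example.phish.tld",
}).encode()

print(verify_client_data(legit))    # True
print(verify_client_data(proxied))  # False
```

The point of the sketch is that the rejection happens at the protocol layer: there is no lure quality, grammar or domain-reputation signal involved for AI to strip out.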
But hardware controls address only the front door. The deeper fix is a different detection philosophy: Reasoning about what the attacker is trying to accomplish rather than what the attack looks like. In January 2026, a mid-market financial firm caught an active AiTM operation before any payment moved. Their pipeline correlated an email click, a new-IP authentication and an inbox rule creation within a 90-second window — flagging the sequence as a single credential-theft operation. Their legacy email gateway evaluated the same email and generated no alert. SPF, DKIM and DMARC all passed. The link resolved to a legitimate SharePoint domain. The difference wasn’t a better product. It was a better question: One system asked what the email looked like; the other asked what the attacker was trying to accomplish.
That’s the architecture shift — from “does this match a known threat pattern” to “is this sequence of actions consistent with credential theft, regardless of what the initial email looked like.” Most SOCs present those as four unrelated alerts triaged by different analysts. The attacker’s operational logic is more coherent than the defender’s detection pipeline.
The capability transfer is permanent
London didn’t rebuild its transportation system assuming most drivers still couldn’t navigate. It accepted the collapse and adapted. The cabbies who survived stopped competing on memorization and shifted to what GPS couldn’t replicate: Judgment, local knowledge, reading the situation in real time.
The security equivalent is the same pivot. Stop competing on pattern recognition — the skill AI just made irrelevant for both sides — and shift to what attackers cannot automate away: Understanding what normal looks like inside your specific organization, connecting signals across kill chain stages, and reaching a verdict at machine speed.
The Knowledge took four years to master. One software update made it obsolete. The question for security leaders isn’t whether the same thing happened to APT tradecraft. The data says it did. The question is whether your architecture still assumes it didn’t.
This article is published as part of the Foundry Expert Contributor Network.
Want to join?
Read More from This Article: It took 4 years to master ‘The Knowledge.’ AI just collapsed it in a software update
Source: News

