In the ever-changing landscape of digital threats, artificial intelligence (AI) has emerged as both a formidable ally and a dangerous adversary. As we navigate the complexities of our interconnected world, it’s becoming increasingly clear that AI is not just a tool, but a force that’s reshaping the very nature of cybersecurity.
The cybersecurity world has changed dramatically. Gone are the days when simple firewalls and antivirus software could keep our digital assets safe. Today, we’re dealing with sophisticated threat actors who are leveraging AI to launch attacks at unprecedented scale and speed. For instance, a troubling trend emerged in 2024: attackers used AI-powered tools to create highly convincing deepfakes, with CEOs and other C-suite executives impersonated in 75% of reported deepfake attacks.
At Synechron, we are prioritizing diligence in our payment process to ensure appropriate approval authority, including out-of-band validation of mid- to large-sized money transfers. As a secondary measure, we are now evaluating deepfake detection tools that can be integrated into our business productivity apps, in particular Zoom and Teams, to continuously detect deepfakes.
Using AI in cybersecurity is like trying to play chess against a supercomputer — the game is familiar, but the opponent’s capabilities are on a whole new level. Luckily, we also have access to the supercomputer.
The AI advantage
How exactly is AI tipping the scales in favor of cybersecurity professionals? For starters, it’s revolutionizing threat detection and response. AI systems can analyze vast amounts of data in real time, identifying potential threats with speed and accuracy. Companies like CrowdStrike have documented that their AI-driven systems can detect threats in under one second.
But AI’s capabilities don’t stop at detection. When it comes to incident response, AI is proving to be a game-changer. Imagine a security system that doesn’t just alert you to a threat but takes immediate action to neutralize it. That’s the potential of AI-driven automated incident response. From isolating compromised systems to blocking malicious IP addresses, AI can execute these critical tasks swiftly and without human input, dramatically reducing response times and minimizing potential damage.
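To make the idea concrete, here is a minimal sketch of such an automated response rule: when an alert crosses a confidence threshold, containment actions run immediately; otherwise the alert is escalated to a human. The alert format, threshold, and action callbacks are hypothetical illustrations, not any vendor’s API.

```python
# Hypothetical sketch of an AI-driven automated incident response rule.
# The alert format, threshold, and action callbacks are illustrative only.

def respond(alert, block_ip, isolate_host, threshold=0.9):
    """Contain high-confidence threats automatically; escalate the rest."""
    actions = []
    if alert["confidence"] >= threshold:
        block_ip(alert["source_ip"])    # stop further traffic from the attacker
        actions.append(f"blocked {alert['source_ip']}")
        isolate_host(alert["host"])     # cut the compromised machine off the network
        actions.append(f"isolated {alert['host']}")
    else:
        actions.append("escalated to human analyst")  # low confidence: a person decides
    return actions

blocked, isolated = [], []
high = {"confidence": 0.97, "source_ip": "203.0.113.7", "host": "ws-042"}
print(respond(high, blocked.append, isolated.append))
```

In practice the callbacks would call firewall and endpoint APIs; the point is that no human sits between detection and containment for high-confidence alerts, which is what collapses response times.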
Perhaps one of the most anticipated applications of AI in cybersecurity is in the realm of behavioral analytics and predictive analysis. By leveraging machine learning algorithms, AI can analyze user behavior and network traffic patterns, identifying anomalies that might indicate insider threats or other malicious activities. These AI-driven insider threat behavioral analytics systems have been shown to detect 60% of malicious insiders under a 0.1% investigation budget and achieve full detection within a 5% budget in certain cases.
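As a deliberately simplified sketch of the baselining idea behind such analytics, the toy example below flags users whose activity deviates sharply from the group’s median baseline. Production systems learn far richer per-user behavioral models over many signals; the data, the signal (file downloads per day), and the multiplier here are all hypothetical.

```python
# Toy behavioral-analytics sketch: flag users whose activity is far above
# the group baseline. Real systems learn per-user models over many signals;
# the data, signal (downloads per day), and multiplier are hypothetical.
import statistics

def flag_outliers(daily_downloads, multiplier=10):
    """Return users whose download count exceeds `multiplier` times the median."""
    baseline = statistics.median(daily_downloads.values())
    return [user for user, count in daily_downloads.items()
            if count > multiplier * baseline]

activity = {"alice": 12, "bob": 9, "carol": 11, "dave": 10, "mallory": 480}
print(flag_outliers(activity))  # mallory's 480 downloads stand out
```

Using the median rather than the mean keeps the baseline itself from being dragged upward by the outlier, a small instance of the robustness real behavioral models need.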
The dark side of AI
However, as with any powerful tool, AI is a double-edged sword. While it’s enhancing our defensive capabilities, it’s also being weaponized by cybercriminals to launch more sophisticated attacks. These AI-powered cyber-attacks are no longer a potential threat — they’re a very real and present danger.
For example, attackers recently used AI to pose as representatives of an insurance company. The email informed the recipient about benefits enrollment and included a form that had to be completed urgently to avoid losing coverage, an urgency designed to fool the recipient. AI can craft phishing emails like these that are so convincing even the most security-conscious user might fall for them. It can even create custom malware that adapts and evolves to evade detection. These are the kinds of attacks that AI-enabled cybercriminals are now capable of producing, and we’ve ended up in a cat-and-mouse game where both sides are constantly upping the ante.
The challenges don’t end there. As we increasingly rely on AI for our cybersecurity needs, we expose these AI tools to vulnerabilities of their own. Data poisoning and model manipulation are emerging as serious concerns for those of us in cybersecurity. Attackers can tamper with the data used to train AI models, causing them to malfunction or make erroneous decisions.
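A toy example makes the data-poisoning risk concrete. Suppose a spam filter learns a decision threshold on "links per message" from labeled training data, and an attacker slips spam-like messages mislabeled as legitimate mail into the training set: the learned threshold drifts upward, and real spam slips through. The data and the model are hypothetical simplifications, not a real filter.

```python
# Toy data-poisoning illustration: a classifier learns a decision threshold
# halfway between the average link count of ham and spam training messages.
# Injecting spam-like messages mislabeled as ham shifts that threshold.
# All data here is hypothetical.
from statistics import mean

def train_threshold(ham_links, spam_links):
    """Learn a threshold midway between the two class averages."""
    return (mean(ham_links) + mean(spam_links)) / 2

clean_ham = [0, 1, 0, 2, 1]      # legitimate mail: few links
spam      = [9, 11, 10, 12, 8]   # spam: many links

clean_t = train_threshold(clean_ham, spam)        # (0.8 + 10) / 2 = 5.4
poisoned_ham = clean_ham + [12] * 8               # attacker-injected "ham"
poisoned_t = train_threshold(poisoned_ham, spam)  # threshold drifts upward

msg_links = 8  # a real spam message containing 8 links
print(msg_links > clean_t)     # True: the clean model catches it
print(msg_links > poisoned_t)  # False: the poisoned model lets it through
```

The same drift happens, more subtly, when attackers poison the feedback loops that retrain production models, which is why training pipelines need the same integrity controls as the systems they protect.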
There’s also the risk of over-reliance on the new systems. While AI is undoubtedly powerful, it’s not infallible. Becoming too dependent on AI for cybersecurity could lead to complacency and a false sense of security. We must remember that AI is a tool to augment human expertise, not replace it entirely.
The human factor
AI is not just changing the skill set required of cybersecurity professionals, it’s augmenting it for the better. The ability to work alongside AI systems, interpret their outputs, and make strategic decisions based on AI-generated insights will be paramount for both users and experts. And while AI is improving in its cybersecurity capabilities, a human paired with an AI tool will outperform AI by itself ten-fold.
Our cyber team at Synechron plans to build and deploy our own AI accelerators, as well as leverage Microsoft Security Copilot, to augment our detection and investigation of possible threats. This approach still requires human interaction to validate any findings or recommendations from AI and to prioritize the remediations or responses required based on the criticality of the asset. In other words, humans are still needed to interpret the business context that AI might miss. That risk should not be understated: any discrepancy or incorrect analysis from AI could lead to detrimental loss or compromise. In addition, humans can adapt to changing business contexts and interpret shifts in potential loss or impact better than AI, which is programmed to achieve predefined outcomes.
As AI becomes more prevalent across organizations, there’s a growing need for a better understanding of data dependencies and asset management. Cybersecurity teams will need to reevaluate the relative importance of data assets, update inventories, and account for new threats and risks these AI systems might bring to their organizations.
The promise AI brings
Despite these challenges, the potential of AI in cybersecurity is truly exciting. Unlike traditional security solutions that can only rely on predefined rules, AI can learn from its environment and evolve its security protocols accordingly. And it’s this adaptability that will be crucial in a landscape where new threats are constantly emerging due to the very tools that are helping prevent them.
Looking ahead, the integration of AI with other emerging technologies like quantum computing or blockchain could lead to even more comprehensive security solutions. Picture a cybersecurity system that combines the processing power of quantum computing, the immutability of blockchain, and the adaptive intelligence of AI. This combination can create a highly robust defense system the likes of which we have not seen before.
The road ahead
As we look toward the future, it’s clear that AI will continue to play an increasingly central role in cybersecurity. In fact, 87% of IT professionals anticipate AI-generated threats will continue to impact their organizations for years to come, underscoring the need for continued innovation and vigilance. The key with AI will be striking the right balance — leveraging its strengths while mitigating the risks and limitations. It’s a challenge, certainly, but also an opportunity to build a safer, more secure digital world.
We need to invest in developing more robust and secure AI systems, ones that are resistant to manipulation and capable of explaining their decision-making processes. At the same time, we must continue to nurture human expertise, fostering a symbiotic relationship between human intuition and machine intelligence.
As we stand at this technological crossroads, one thing is clear: In the ongoing battle against cyber threats, AI is not just a tool — it’s the future of the entire battlefield.
As the Global CISO at Synechron, a leading global digital transformation consulting firm, Aaron Momin is accountable and responsible for cyber risk management, information security, crisis management and business continuity planning. Aaron has 30 years of experience in managing cyber and technology risk, improving security maturity and integrating privacy for global organizations. He is a certified CISO, CISM and CRISC.