The past year was a whirlwind for CIOs and CISOs, marked by the rapid expansion of enterprise AI, persistent cyber threats, and the growing menace of deepfakes. Added to this tumult were emerging threats from activist hackers, who found innovative ways to infiltrate corporate data systems, banking networks, and social media platforms.
Add bad actors capitalizing on a heated political climate to this mix, and that’s a lot of challenges for any CIO or CISO to handle. Yet more are likely to come. As I write this, the world is learning about DeepSeek, the new advanced AI model developed by High-Flyer, a Chinese hedge fund. The open-source AI architecture has already been attacked and is also being viewed as a conduit for new data exploitations and cybersecurity attacks.
AI in enterprise
2024 witnessed unprecedented growth in enterprise AI, which expanded far beyond chatbots and automated support. Cloud providers such as Microsoft, Google, and AWS invested heavily in AI infrastructure, as did venture capital funds, producing a wide range of solutions that let enterprises jump in “feet first” with AI apps that automate critical tasks, with data agents leading the way. Other enterprise AI uses included data collection, analysis, customer service, and risk management.
AI-powered tools like ChatGPT, Canva, Gemini, and Copilot dominated the consumer landscape, introducing text-to-image, text-to-video, and voice synthesis capabilities. While these advancements were revolutionary, they also allowed bad actors to exploit generative technologies for fraud. These initiatives now stand to be seriously challenged by the recent launch of DeepSeek, adding a whole new layer of possibilities for exploitation.
Ongoing and entirely new cyber threats
Cyber threats remained relentless in 2024, from traditional identity fraud to sophisticated AI-driven scams. A startling case involved North Korean IT workers using stolen U.S. identities to secure high-paying jobs, exposing vulnerabilities in corporate hiring processes.
In the financial sector, fraud challenges included AI-generated scams and deepfakes. Mitek Systems’ Identity Intelligence Index 2024 study revealed that over 40% of banks faced fraud risks during customer onboarding, highlighting the urgent need for robust identity verification solutions.
Enterprises increasingly turned to AI-native security solutions, employing continuous multi-factor authentication and identity verification tools. These technologies monitor behavioral patterns and other physical-world signals to verify identity, innovations that can now help prevent incidents like the North Korean hiring scheme.
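As a minimal illustration of the behavioral-signal idea, the sketch below scores a session’s typing cadence against a user’s enrolled baseline and flags large deviations. All names, values, and the three-standard-deviation threshold here are hypothetical, not taken from any specific product:

```python
import statistics

def cadence_anomaly_score(baseline_ms, observed_ms):
    """Score how far observed inter-keystroke intervals (ms)
    deviate from a user's enrolled baseline, in standard deviations."""
    mean = statistics.mean(baseline_ms)
    stdev = statistics.stdev(baseline_ms)
    observed_mean = statistics.mean(observed_ms)
    return abs(observed_mean - mean) / stdev

def is_same_user(baseline_ms, observed_ms, threshold=3.0):
    """Accept the session only while cadence stays near the baseline."""
    return cadence_anomaly_score(baseline_ms, observed_ms) < threshold

# Baseline: the enrolled user's typical typing rhythm.
baseline = [110, 120, 115, 130, 125, 118, 122]
print(is_same_user(baseline, [112, 119, 124, 127]))  # similar cadence
print(is_same_user(baseline, [310, 290, 305, 320]))  # suspicious drift
```

Real continuous-authentication products combine many such signals (mouse dynamics, device posture, location) rather than a single metric, but the principle is the same: identity is re-scored throughout the session, not only at login.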
However, hackers may now gain another inside route to enterprise security. The new breed of unregulated and offshore LLMs like DeepSeek creates new opportunities for attackers. In particular, using DeepSeek’s AI model gives attackers a powerful tool to better discover and take advantage of the cyber vulnerabilities of any organization.
The model presents another straightforward means to generate new attacks, including producing deepfakes that spawn more dangerous ransomware, theft, and fraud. Biometric Update notes that “DeepSeek’s ability to process and analyze massive datasets in real-time makes it a formidable tool for identifying vulnerabilities in complex systems. Traditional cyberattacks rely on manually identifying weak points in networks, software, or infrastructure. DeepSeek, however, can automate this process at unprecedented speed and scale. For example, it could scan millions of endpoints, IP addresses, and cloud services globally, using pattern recognition and anomaly detection to pinpoint exploitable weaknesses. This capability significantly reduces the time and resources required to plan and execute sophisticated cyberattacks.”
The rise of deepfakes
Deepfake technology continues to blur the lines between reality and fiction. According to a Deloitte study on deepfakes and banking fraud, financial losses are expected to surge from $12.3 billion in 2023 to $40 billion by 2027. Unlike traditional phishing, AI-generated audio and video have become alarmingly authentic, making detection difficult.
One notable case this past February involved a Hong Kong employee duped into transferring $25 million during a Zoom call featuring deepfake avatars of company executives. According to one published account, “An unsuspecting employee based in Hong Kong received an email purportedly from the company’s CFO, requesting a significant financial transaction. Upon expressing skepticism, the employee was lured into a Zoom call involving multiple supposed company executives, including the CFO. The trick? All of the participants on the call were live video deepfakes. The unsuspecting worker transferred $25 million to five bank accounts in 15 transactions. The scam was only identified days later, when the employee became concerned and checked with the corporate head office.” Such incidents underscore the need for heightened vigilance and advanced fraud detection tools.
This was an alarming wake-up call for many security professionals. Voice deepfakes, which recreate a person’s voice from samples of their speech, pose a massive risk for modern businesses, primarily via their call centers. According to a recent CIO article, deepfake phishing attempts on security systems are spiking rapidly.
Brand fakes
Another fraud problem that arose, and will likely continue, is the combination of brand and personal impersonations. These audio-visual fakes confound and confuse users and organizations alike.
During the 2024 election cycle, we saw several deepfakes depicting high-profile figures, including Democratic candidates Harris and Walz, Republican candidates Trump and Vance, and other election-connected figures like Elon Musk and Vivek Ramaswamy.
AI solutions have now been developed to help block malicious websites that impersonate a person’s or a brand’s site. They can also locate social-media brand fakes and content impersonating top company executives, helping speed up imposter takedowns and restore a brand’s rightful identity.
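One common building block in such brand-protection tools is flagging newly observed domains that sit within a small edit distance of the legitimate one (classic typosquatting detection). The sketch below is a simplified, self-contained illustration; the domain names and the distance threshold are hypothetical:

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def flag_lookalikes(brand_domain, seen_domains, max_distance=2):
    """Return domains close enough to the brand's to warrant review."""
    return [d for d in seen_domains
            if 0 < edit_distance(brand_domain, d) <= max_distance]

print(flag_lookalikes("example.com",
                      ["examp1e.com", "exarnple.com", "unrelated.org"]))
```

Production systems layer on homoglyph detection (e.g., Cyrillic look-alike characters), certificate-transparency monitoring, and visual page similarity, but cheap lexical distance remains a useful first filter.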
In the year ahead, we can also expect bad actors to target more prominent brands via third-party access attempts and supply chain attacks, making it ever more imperative for companies to stay vigilant about third-party relationship risks.
Preparing for quantum computing
As quantum computing edges closer to mainstream adoption, organizations must prepare for its impact on data encryption. The U.S. Department of Commerce has called for new encryption tools to counter potential quantum-era cyber threats. This area will require increased focus in the coming year as attackers exploit new vulnerabilities.
Building a resilient future
Organizations must combat the increasing complexity of identity fraud, hackers, cybersecurity thieves, and data center poachers each year. In addition to all of the threats mentioned above, 2025 will bring an increasing need to address IoT and OT security, data protection in third-party cloud and AI infrastructure, and the use of AI agents in the SOC.
To help thwart this year’s cyber threats, CISOs and CTOs must work together, communicate often, and identify areas to minimize risks for deepfake fraud across identity, brand protection, and employee verification. In doing so, they will hopefully lay the groundwork for handling quantum-era risks in the near future.