Trust used to be built into the mechanics of work. If a request came through a familiar voice on a call, a known face on screen or a message from a senior executive, most employees had little reason to question it. That assumption is becoming much harder to defend.
What I am seeing now is a shift in where and how deepfakes are being used. Synthetic media is now entering routine business processes, from payment approvals to executive communications and support requests. As more of those interactions have moved into digital channels, they have also become easier to imitate and harder to validate. What once felt like a problem associated mainly with public scams, misinformation or social media is increasingly becoming a business security issue.
Gartner reports that 62% of organizations have already experienced deepfake-enabled social engineering. That figure matches the level of concern I now hear in conversations with CISOs, risk leaders and executive teams.
Most organizations have invested heavily in phishing awareness and email security. Those controls still matter, but they were built around a different model of deception. What makes deepfake attacks so effective is that they do not need to introduce something obviously suspicious. They work by making a fraudulent request feel routine.
Why this threat is becoming so effective
This is happening at the same time that organizations have become more distributed, more digital and more dependent on rapid communication. Video meetings, chat platforms, mobile devices and collaboration software now shape how decisions are made. They allow teams to move quickly, but they also compress judgement. People are expected to respond fast, often with limited context, which creates favourable conditions for manipulation.
In many companies, it is entirely normal to act on an instruction delivered in a meeting, through a messaging platform or via a quick exchange on a mobile device. I do not see that as a flaw in itself; it is simply how modern work operates. The problem is that these patterns were built on the assumption that certain signals – a person’s face, their voice, their style of communication – could usually be trusted.
Recent cases show how broad the problem has become. In one widely reported incident, an employee at engineering firm Arup transferred around $25 million after a video call with people who appeared to be senior colleagues, including the CFO. In another, Bombay Stock Exchange chief executive Sundararaman Ramamurthy was impersonated in a deepfake video that gave false stock advice to investors. One case targeted an internal approval process; the other exploited public trust in a recognised business leader. Together, they show how synthetic media can be used to manipulate decisions in very different settings.
The technology behind these attacks is also becoming easier to access. What once required specialist expertise and meaningful investment can now be attempted with inexpensive AI models, publicly available tools and a relatively small amount of source material. A short audio clip can be enough to imitate a voice, and a limited set of images can be enough to construct a persuasive visual impersonation.
The same advances in AI are also shaping the defence side, but that should not distract from the core issue: deepfakes are gaining traction because they fit neatly into existing ways of working. The real challenge is not just detecting manipulated content but reducing the opportunities for trust to be exploited in the first place.
Incident response and governance need to catch up
One thing I have noticed is that most organizations have mature playbooks for phishing, ransomware and data breaches. Far fewer have worked through what happens when manipulated media is used to impersonate a senior leader or trigger a fraudulent approval.
That gap matters. Teams often assume they will handle a deepfake incident under existing fraud or cyber procedures, but when you test those assumptions more closely, the gaps appear very quickly.
Tabletop exercises are particularly valuable here because they reveal where accountability is unclear and where process breaks down under pressure. A scenario involving a fake executive instruction can quickly show whether the right checks are in place.
There is also a broader governance issue that organizations should not ignore. If money is lost or data is exposed, scrutiny will extend well beyond the security team. Board members and regulators will want to know whether the risk was anticipated and whether sensible controls were in place.
Frameworks such as the EU’s Digital Operational Resilience Act are part of a wider shift in expectations around resilience and cyber risk. Deepfakes fit directly into that discussion because they affect financial controls, information handling, brand integrity and operational continuity. That means legal, HR, communications and senior leadership all need to be involved because it is as much a business risk issue as it is a security one.
Trust must now be designed, not assumed
For CISOs, the next step is to base trust on policy and process. That means preventing sensitive actions from being completed after a single interaction. If a request involves money, credentials, confidential information or reputational risk, it should trigger a verification process that sits outside the original communication.
This is the advice I keep repeating:
Widen the threat model
Synthetic media should be treated as a serious attack path across email, internal messaging, public platforms and externally facing content. If employees, customers or partners are making trust decisions in those environments, then those channels need to be included in risk planning.
Apply zero-trust thinking to content itself
Many organizations have made progress applying zero-trust principles to access control and identity. They now need to bring that same discipline to what people see, hear and receive. A persuasive video, voice note or document should not be enough on its own to authorise a sensitive action.
Use automated detection at machine speed
Human judgement still matters, but it cannot be the only line of defence when manipulated media can be generated and distributed so quickly. Detection needs to operate across the environment, in real time where possible, and at a level of sophistication that matches the attacks.
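As a rough illustration, the sketch below shows what machine-speed triage of inbound media might look like. The score_media function and the thresholds are placeholders for whatever detection model or vendor service an organization actually deploys; this is not a specific product's API.

```python
# A rough sketch of machine-speed triage for inbound media. The scorer
# and thresholds are illustrative placeholders, not recommendations.

from dataclasses import dataclass

@dataclass
class MediaEvent:
    channel: str    # e.g. "email", "chat", "video-call recording"
    sender: str
    media_ref: str  # pointer to the audio, video or image artifact

def score_media(event: MediaEvent) -> float:
    """Placeholder: return a manipulation-likelihood score in [0, 1].
    In practice this would call a detection model or service."""
    return 0.0

def triage(event: MediaEvent) -> str:
    score = score_media(event)
    if score >= 0.9:
        return "block"         # quarantine and alert the security team
    if score >= 0.5:
        return "human-review"  # route to an analyst before any action is taken
    return "allow"             # still log the score for later correlation

print(triage(MediaEvent("chat", "cfo@example.com", "voice-note.ogg")))
```

The point of automating this first pass is not to remove people from the loop, but to make sure human judgement is spent on the cases that genuinely need it.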
Prepare for deepfake-specific incidents
Response plans should not stop at generic fraud or phishing scenarios. Organizations need playbooks for executive impersonation, payment fraud, brand abuse and manipulated media circulating internally or in public.
Those principles need to be backed by practical controls. In my view, the most important are straightforward, and the sketch that follows this list shows one way they might translate into process logic:
- High-risk requests should always be confirmed through a second channel
- Communication should be separated from authorization
- Urgent or unusual requests should have a defined escalation path
- Teams should be clear on which channels are appropriate for discussion and which are appropriate for approval
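To make the first two controls concrete, here is a minimal sketch of how they might be expressed in code. The channel names, the payment threshold and the verify_out_of_band helper are hypothetical placeholders, not any particular system's API:

```python
# A minimal sketch of second-channel confirmation and the separation of
# communication from authorization. All names and values are illustrative.

SENSITIVE_THRESHOLD = 10_000  # illustrative: amounts at or above this need extra checks

# Channels where requests may be discussed vs. where they may be authorized.
DISCUSSION_CHANNELS = {"email", "chat", "video-call"}
APPROVAL_CHANNELS = {"payments-portal"}

def verify_out_of_band(requester: str, originating_channel: str) -> bool:
    """Placeholder: confirm the request through a second, pre-registered
    channel (e.g. a callback to a number on file), never the channel the
    request arrived on."""
    return False  # fail closed until a real verification step is wired in

def approve_payment(amount: int, requester: str, channel: str) -> str:
    # Control: communication is separated from authorization.
    if channel not in APPROVAL_CHANNELS:
        raise PermissionError(f"'{channel}' is for discussion, not approval")
    # Control: high-risk requests are always confirmed via a second channel.
    if amount >= SENSITIVE_THRESHOLD and not verify_out_of_band(requester, channel):
        raise PermissionError("second-channel verification not completed; escalate")
    return f"payment of {amount} approved for {requester}"
```

The design choice that matters here is that the originating channel can never authorize its own request: a convincing face or voice in a meeting carries no weight until the process confirms it somewhere else.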
The organizations that respond well will be the ones that adapt before a serious incident forces them to. I believe deepfakes need to be treated as more than a fraud problem or a technical edge case. They are becoming a test of operational resilience, governance and decision-making under pressure.
Trust will always matter in business. What is changing is the basis on which it is granted. In the modern workplace, it cannot rest on what looks or sounds real. It must be verified, backed by process and treated with the same discipline as any other business control.
This article is published as part of the Foundry Expert Contributor Network.