Tiatra, LLC
Information Technology Solutions for Washington, DC Government Agencies
How deepfakes are rewriting the rules of the modern workplace

Trust used to be built into the mechanics of work. If a request came through a familiar voice on a call, a known face on screen or a message from a senior executive, most employees had little reason to question it. That assumption is becoming much harder to defend.

What I am seeing now is a shift in where and how deepfakes are being used. Synthetic media is now entering routine business processes, from payment approvals to executive communications and support requests. As more of those interactions have moved into digital channels, they have also become easier to imitate and harder to validate. What once felt like a problem associated mainly with public scams, misinformation or social media is increasingly becoming a business security issue.

Gartner reports that 62% of organizations have already experienced deepfake-enabled social engineering. That reflects the level of concern I now hear in conversations with CISOs, risk leaders and executive teams.

Most organizations have invested heavily in phishing awareness and email security. Those controls still matter, but they were built around a different model of deception. What makes deepfake attacks so effective is that they do not need to introduce something obviously suspicious. They work by making a fraudulent request feel routine.

Why this threat is becoming so effective

This is happening at the same time that organizations have become more distributed, more digital and more dependent on rapid communication. Video meetings, chat platforms, mobile devices and collaboration software now shape how decisions are made. They allow teams to move quickly, but they also compress judgement. People are expected to respond fast, often with limited context, which creates favourable conditions for manipulation.

In many companies, it is entirely normal to act on an instruction delivered in a meeting, through a messaging platform or via a quick exchange on a mobile device. I do not see that as a flaw in itself; it is simply how modern work operates. The problem is that these patterns were built on the assumption that certain signals – a person’s face, their voice, their style of communication – could usually be trusted.

Recent cases show how broad the problem has become. In one widely reported incident, an employee at engineering firm Arup transferred around $25 million after a video call with people who appeared to be senior colleagues, including the CFO. In another, Bombay Stock Exchange chief executive Sundararaman Ramamurthy was impersonated in a deepfake video that gave false stock advice to investors. One case targeted an internal approval process, and the other exploited public trust in a recognised business leader. Together, they show how synthetic media can be used to manipulate decisions in very different settings.

The technology behind these attacks is also becoming easier to access. What once required specialist expertise and meaningful investment can now be attempted with inexpensive AI models, publicly available tools and a relatively small amount of source material. A short audio clip can be enough to imitate a voice, and a limited set of images can be enough to construct a persuasive visual impersonation.

The same advances in AI are also shaping the defence side, but that should not distract from the core issue: deepfakes are gaining traction because they fit neatly into existing ways of working. The real challenge is not just detecting manipulated content but reducing the opportunities for trust to be exploited in the first place.

Incident response and governance need to catch up

One thing I have noticed is that most organizations have mature playbooks for phishing, ransomware and data breaches. Far fewer have worked through what happens when manipulated media is used to impersonate a senior leader or trigger a fraudulent approval.

That gap matters. Teams often assume they will handle a deepfake incident under existing fraud or cyber procedures, but when you test those assumptions more closely, the gaps appear very quickly.

Tabletop exercises are particularly valuable here because they reveal where accountability is unclear and where process breaks down under pressure. A scenario involving a fake executive instruction can quickly show whether the right checks are in place.

There is also a broader governance issue that organizations should not ignore. If money is lost or data is exposed, scrutiny will extend well beyond the security team. Board members and regulators will want to know whether the risk was anticipated and whether sensible controls were in place.

Frameworks such as the EU’s Digital Operational Resilience Act are part of a wider shift in expectations around resilience and cyber risk. Deepfakes fit directly into that discussion because they affect financial controls, information handling, brand integrity and operational continuity. That means legal, HR, communications and senior leadership all need to be involved because it is as much a business risk issue as it is a security one.

Trust now must be designed, not assumed

For CISOs, the next step is to base trust on policy and process. That means preventing sensitive actions from being completed after a single interaction. If a request involves money, credentials, confidential information or reputational risk, it should trigger a verification process that sits outside the original communication.

This is the advice I keep repeating:

Widen the threat model

Synthetic media should be treated as a serious attack path across email, internal messaging, public platforms and externally facing content. If employees, customers or partners are making trust decisions in those environments, then those channels need to be included in risk planning.

Apply zero-trust thinking to content itself

Many organizations have made progress applying zero-trust principles to access control and identity. They now need to bring that same discipline to what people see, hear and receive. A persuasive video, voice note or document should not be enough on its own to authorise a sensitive action.

Use automated detection at machine speed

Human judgement still matters, but it cannot be the only line of defence when manipulated media can be generated and distributed so quickly. Detection needs to operate across the environment, in real time where possible, and at a level of sophistication that matches the attacks.

Prepare for deepfake-specific incidents

Response plans should not stop at generic fraud or phishing scenarios. Organizations need playbooks for executive impersonation, payment fraud, brand abuse and manipulated media circulating internally or in public.

Those principles need to be backed by practical controls. In my view, the most important are straightforward:

  • High-risk requests should always be confirmed through a second channel
  • Communication should be separated from authorization
  • Urgent or unusual requests should have a defined escalation path
  • Teams should be clear on which channels are appropriate for discussion and which are appropriate for approval
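To make those controls concrete, here is a minimal, hypothetical sketch in Python of how such a policy could be expressed. The request kinds, channel names and the `may_execute` helper are illustrative assumptions for the sake of the example, not a prescribed implementation; the point is the structure: high-risk requests cannot be executed on the strength of the channel they arrived on.

```python
from dataclasses import dataclass

# Illustrative assumption: any request touching money, credentials,
# confidential data or reputation is treated as high-risk.
HIGH_RISK_KINDS = {"payment", "credentials", "confidential_data", "public_statement"}

# Channels appropriate for discussion vs. channels appropriate for approval.
# Note that the requesting channel is never a valid approval channel on its own.
DISCUSSION_CHANNELS = {"video_call", "chat", "email", "phone"}
APPROVAL_CHANNELS = {"erp_workflow", "callback_to_directory_number"}

@dataclass
class Request:
    kind: str            # e.g. "payment"
    origin_channel: str  # channel the instruction arrived on
    approvals: set       # channels on which it was independently confirmed

def may_execute(req: Request) -> str:
    """Encode the controls: second-channel confirmation, separation of
    communication from authorization, and a defined escalation path."""
    if req.kind not in HIGH_RISK_KINDS:
        return "execute"  # low-risk: normal handling
    # Authorization must come from an approval channel that is different
    # from the channel the request arrived on.
    valid = {c for c in req.approvals
             if c in APPROVAL_CHANNELS and c != req.origin_channel}
    if valid:
        return "execute"
    if req.origin_channel in DISCUSSION_CHANNELS:
        return "escalate"  # unconfirmed high-risk request: escalate, never act
    return "reject"

# A payment instruction delivered on a video call with no out-of-band
# confirmation is escalated rather than executed.
print(may_execute(Request("payment", "video_call", set())))             # escalate
print(may_execute(Request("payment", "video_call", {"erp_workflow"})))  # execute
```

The deliberate design choice in the sketch is that a persuasive origin channel buys nothing: even a perfect deepfake on a video call still lands in the escalation path until a separate, pre-agreed approval channel confirms the request.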

The organizations that respond well will be the ones that adapt before a serious incident forces them to. I believe deepfakes need to be treated as more than a fraud problem or a technical edge case. They are becoming a test of operational resilience, governance and decision-making under pressure.

Trust will always matter in business. What is changing is the basis on which it is granted. In the modern workplace, it cannot rest on what looks or sounds real. It must be verified, backed by process and treated with the same discipline as any other business control.

This article is published as part of the Foundry Expert Contributor Network.
Category: News | May 14, 2026