There was a time we lived by the adage "seeing is believing." Times have changed. As artificial intelligence has evolved at hyper speed, from simple algorithms to sophisticated systems, deepfakes have emerged as one of its more chaotic offerings.
A deepfake, now commonly used as a noun (as in "This is a deepfake"), refers to video, images, or audio produced or modified with artificial intelligence so that they seem real but are in fact altered or synthesized.
The consequences snowballed, and quickly. In 2022, a viral deepfake audio clip of the CEO of a Mumbai energy company declaring a massive price hike temporarily tanked the company's stock as shareholders panicked. Though swiftly debunked, the incident was a sobering reminder that deepfakes can erode trust built over decades in a single moment.
The ‘Mistrust’ factor
As the lines blur between reality and its facsimile, no industry has seen a greater erosion of trust than the FSI sector. When Zerodha CEO Nikhil Kamath shared a deepfake video of himself, it became clear that even the most rational among us could be fooled.
He pointed out, “Currently, services like Digilocker or Aadhar are approved by matching the face on the ID proof with the person’s face. As deepfakes improve, I think it will only become harder to validate if the person on the other side is real or AI-generated.”
Vamsi Ithamraju, CTO, Axis Mutual Fund, reiterates, “Consider a scenario where a deep fake impersonates a business leader, alleging false information that could influence stock prices or market dynamics. This can have serious consequences on the economy.”
These scenarios have a disturbing impact on citizens, especially in a country like India, where high-speed internet and communication apps allow photos and videos to be shared within seconds and with little verification.
Playing by the rules
Public faith in technology cannot be established without a solid foundation. It requires systems of governance and monitoring that keep pace with the technology itself.
CIOs are unanimous that strongly empowered government bodies are the way forward in ensuring that deepfakes circulating in the public domain do not harm organizations, reputations, or the economy.
Jyothirlatha B, CTO, Godrej Capital, says, “Governments may need to establish regulatory bodies to oversee the ethical use of AI and enforce compliance, while public awareness campaigns will educate individuals about the risks of deepfakes.” She believes that enhanced verification protocols, such as multi-factor authentication and biometric verification can reduce the risk of deepfake exploitation.
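Jyothirlatha B's point about multi-factor authentication can be made concrete. As an illustrative sketch only (not any organization's actual implementation), a time-based one-time password (TOTP, per RFC 6238) is a common second factor that can be built from a standard library alone:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at_time=None, digits=6, step=30):
    """Generate an RFC 6238 time-based one-time password (TOTP)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((at_time if at_time is not None else time.time()) // step)
    msg = struct.pack(">Q", counter)  # 8-byte big-endian time-step counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

def verify_totp(secret_b32, submitted, window=1):
    """Accept codes from adjacent 30-second steps to tolerate clock drift."""
    now = int(time.time())
    return any(
        hmac.compare_digest(totp(secret_b32, now + i * 30), submitted)
        for i in range(-window, window + 1)
    )
```

The secret is shared once (e.g., via a QR code in an authenticator app); thereafter both sides derive matching six-digit codes from the current time, so a deepfaked face or voice alone is not enough to pass verification.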
Khushbu Jain, Technology Law expert and Partner, Ark Legal posits that addressing the risks posed by deepfakes requires a multi-pronged regulatory approach, as well as public awareness spearheaded by multi-stakeholder collaboration.
However, she acknowledges that the law alone is insufficient. “An ethical governance framework for AI that prioritizes transparency, inclusivity and proactive self-regulation must complement stronger legal deterrence,” says Khushbu.
The ethical conundrum
With artificial intelligence woven into the fabric of modern life, concerns about its negligent or criminal misuse are often dismissed as anti-progress.
CIOs, however, are very cognizant of the ethical conundrums posed by deepfakes. KV Dipu, Senior President, Bajaj Allianz General Insurance, cites a McKinsey report highlighting that while AI can increase operational efficiency by up to 30%, it also introduces significant ethical challenges around data privacy, algorithmic bias, and transparency.
Ajay Poddar, CTO, HDFC Retirement Tech, firmly believes that with great power comes great responsibility. “To avoid reputational damage and legal issues, organizations should consider ethical implications while working on AI innovation,” he says. He categorizes his strategy as follows:
- Ethical Guidelines & Transparency
- Risk Mitigation & Assessment
- Human Oversight/Accountability
To combat this plethora of ethical dilemmas and privacy risks, Vamsi Ithamraju points to Explainable AI (XAI), a set of processes and methods that allows human users to comprehend and trust the results and output of machine learning algorithms.
XAI helps dispel the ‘black box’ nature of AI, letting developers see how a model arrives at its outputs, verify that it is behaving correctly, and adjust it if necessary. It has emerged as one of the tenets of responsible AI, alongside fairness and accountability.
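As a toy illustration of the additive-attribution idea behind many XAI methods: for a linear model, each feature's contribution to a single prediction can be decomposed exactly relative to a baseline input. The model, feature names, and numbers below are invented for illustration, not drawn from any real fraud system:

```python
# Hypothetical linear "risk score" model; weights and baseline are made up.
BASELINE = {"login_attempts": 1.0, "txn_amount": 500.0, "device_age_days": 400.0}
WEIGHTS  = {"login_attempts": 0.8, "txn_amount": 0.001, "device_age_days": -0.002}
BIAS = -1.0

def predict(features):
    """Linear score: bias plus weighted sum of feature values."""
    return BIAS + sum(WEIGHTS[k] * v for k, v in features.items())

def explain(features):
    """Per-feature contribution relative to the baseline ('expected') input.
    Contributions sum exactly to predict(features) - predict(BASELINE)."""
    return {k: WEIGHTS[k] * (features[k] - BASELINE[k]) for k in features}

sample = {"login_attempts": 6.0, "txn_amount": 9000.0, "device_age_days": 2.0}
contributions = explain(sample)  # e.g., shows txn_amount drives most of the score
```

For linear models this decomposition is exact; methods such as SHAP generalize the same additive idea to non-linear models, which is what makes opaque scores auditable.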
Scoping solutions
The finance sector is particularly vulnerable because it is targeted for two assets: data and money. Rather than breaching firewalls, scammers target people directly, exploiting their confusion and fear, a tactic that has proved lucrative. It also makes the culprits behind deepfake attacks harder to trace.
As Ajay Poddar sums it up for us, we need a balance between regulation and innovation.
He says, “Regulators can implement certain rules – like stricter penalties for Deepfake-Related Crimes, especially with a monetary punishment. Organizations and platforms should be obliged to disclose that an image or video is deepfake with a watermark.” He also stresses the importance of ethical AI development to combat the rise of malicious deepfake distribution.
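Poddar's disclosure idea can be sketched in miniature. The snippet below is a hypothetical illustration, not a real standard: it binds an "AI-generated" label to the media's SHA-256 hash, so the disclosure travels with those exact bytes and tampering is detectable. Production systems would rely on content-credential standards such as C2PA rather than a hand-rolled manifest:

```python
import hashlib
import json

def make_disclosure_manifest(media_bytes, generator):
    """Build a machine-readable disclosure that the media is AI-generated.
    The SHA-256 hash ties the label to the exact bytes it describes."""
    manifest = {
        "ai_generated": True,
        "generator": generator,  # illustrative field name
        "sha256": hashlib.sha256(media_bytes).hexdigest(),
    }
    return json.dumps(manifest, sort_keys=True)

def verify_manifest(media_bytes, manifest_json):
    """Check that the manifest's hash still matches the media bytes."""
    m = json.loads(manifest_json)
    return m.get("sha256") == hashlib.sha256(media_bytes).hexdigest()

video = b"...synthetic video bytes..."
manifest = make_disclosure_manifest(video, "example-model")
assert verify_manifest(video, manifest)            # label matches the bytes
assert not verify_manifest(video + b"x", manifest) # any edit breaks the link
```

A hash-bound manifest only proves the label and the file belong together; a signature from the generating platform would additionally prove who issued the label.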
Ultimately, all the CIOs agree that the strongest pillar of a deepfake defense is public awareness, which requires media literacy. The distribution of information, images, and video has changed irreversibly; channels and platforms no longer have the centralized control and oversight of the past. Media has transcended borders, geographic and demographic, and with people sharing their lives online, money, reputations, and livelihoods are at stake.
CIOs agree that this is a crucial period for setting standards that will serve as a defense in the future, as deepfakes attempt to replace reality with facsimile. Finally, Advocate (Dr.) Prashant Mali, Cyber Lawyer and Policy Expert, warns of AI mayhem wrecking our social fabric if deepfakes are not curbed through legislation and its strict implementation. “The government should invest funds in educating the common man about deepfakes in a vernacular language reaching rural India.”
Read More from This Article: Deepfakes are a real threat to India’s FSI sector, say tech leaders