India’s Ministry of Electronics and Information Technology (MeitY) has caused consternation with its stern reminder to makers and users of large language models (LLMs) of their obligations under the country’s IT Act, after Google’s Gemini model was prompted to make derogatory remarks about Indian Prime Minister Narendra Modi.
The ministry’s reaction, in the form of an advisory issued Friday, has attracted criticism from India’s IT sector because of the restrictions it places on innovation and the compliance risk it creates for some enterprises.
The advisory, obtained by The Register, builds on an earlier one issued in December, reminding organizations of the law and imposing additional restrictions. Notably, it requires all intermediaries and platforms to ensure that their systems, whether they use generative AI or not, do not permit bias or discrimination or threaten the integrity of the electoral process. It also requires that LLMs that are unreliable or still under test be made available on the Indian internet only with explicit permission from the government, and be deployed only with a warning about their unreliability.
It also recommends that AI-generated materials that could be used for misinformation or deep fakes, whether text, audio, image, or video, be watermarked to identify their nature and origin, and reiterates existing rules on digital media ethics.
Numerous IT vendors are likely to be affected by the advisory, including cloud service providers such as Oracle, Amazon Web Services (AWS), and IBM; software vendors such as Databricks and Salesforce; model service providers (mostly startups) such as OpenAI, Anthropic, Stability AI, and Cohere; and social platforms such as Meta.
Email queries sent to the IT ministry seeking more clarity on the government’s planned framework for LLM regulation went unanswered.
Lack of clarity and absence of a defined framework
The lack of clarity in the advisory prompted many in the technology sector to take to platforms such as X with their views, including Minister of State for IT Rajeev Chandrasekhar, who was forced to clarify in a tweet on Monday that the requirement to seek permission to deploy LLMs is “only for large platforms and will not apply to startups.”
But that clarification is not enough for some analysts.
“The process of granting permission is not clear and what vendors need to do to get the permission is unclear as well. Are there test cases they have to pass, or assurances given on level of testing and support?” said Pareekh Jain, principal analyst with Jain Consulting.
As for the requirement that unreliable models be accompanied by a warning, major providers such as Google and OpenAI already have that covered. Google’s FAQ page for Gemini clearly states that it will get things wrong and invites users to report responses that need correction. Similarly, OpenAI’s FAQ page for ChatGPT warns that it may provide incorrect responses and invites users to report them.
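For deployers who want to do more than point at an FAQ page, such a notice can be attached at the serving layer. Below is a minimal sketch of that idea; `call_model`, the wrapper, and the notice wording are illustrative assumptions, not anything prescribed by the advisory or by the vendors named above.

```python
# Minimal sketch, assuming the deployer controls the serving layer:
# prepend an unreliability notice to every model response.
# `call_model` is a hypothetical stand-in for any LLM API client.

NOTICE = ("Notice: this model is under test and may produce unreliable or "
          "incorrect output. Verify important information independently.")

def call_model(prompt: str) -> str:
    # Placeholder: replace with a real completion call.
    return f"[model response to: {prompt}]"

def answer_with_notice(prompt: str) -> str:
    # Every response carries the warning, regardless of the model's output.
    return f"{NOTICE}\n\n{call_model(prompt)}"

print(answer_with_notice("What is the capital of India?"))
```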
Can LLMs be free from bias?
The advisory also calls on LLM providers to ensure that their models are free from any bias or discrimination, a tall order according to analysts.
“There is always a possibility of some bias. While bias is not anticipated, it cannot be disregarded that the possibility exists, regardless of its magnitude,” said DD Mishra, senior analyst and director with Gartner, adding that this would make the requirement difficult to comply with.
Venkatesh Natarajan, former chief digital officer of Ashok Leyland, said that achieving a completely unbiased model is challenging due to factors such as data biases and inherent limitations of AI algorithms.
“While hyperscalers can implement measures to mitigate bias, claiming absolute neutrality may not be feasible. This could expose them to legal risks, especially if their models inadvertently perpetuate biases or discrimination,” the former CDO explained.
While hyperscalers and other model providers cannot ensure the absence of any kind of bias in their models, they can offer more transparency around their bias-mitigation efforts, said IDC analyst Deepika Giri, adding that they should also focus on using good-quality training data.
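One simple form such transparency can take is a counterfactual probe: query the model with prompts that are identical except for a single demographic term and compare the answers. The sketch below illustrates the idea; `generate`, the prompt template, and the group list are hypothetical placeholders, not a method attributed to any firm quoted here.

```python
# Minimal sketch of a counterfactual bias probe: send prompts that are
# identical except for one demographic term and compare the responses.
# `generate` is a hypothetical stand-in for any LLM completion call.

def generate(prompt: str) -> str:
    # Placeholder: swap in a real model client here.
    return f"[model response to: {prompt}]"

TEMPLATE = "Describe a typical {group} software engineer in one sentence."
GROUPS = ["male", "female", "young", "older"]

def probe_bias() -> dict[str, str]:
    # Divergence in tone or content across these responses is a signal
    # worth auditing, not proof of bias by itself.
    return {group: generate(TEMPLATE.format(group=group)) for group in GROUPS}

if __name__ == "__main__":
    for group, response in probe_bias().items():
        print(f"{group}: {response}")
```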
Email queries sent to Microsoft, AWS, Oracle and other model providers concerning the advisory went unanswered.
Making AI-generated content easier to detect
The advisory’s recommendation that LLM providers watermark all generated content that could be used for deception may also prove problematic.
Meta is developing tools to identify images produced by generative AI at scale across its social media platforms (Facebook, Instagram, and Threads) but has no such tools for detecting generated audio and video. Google, too, has its own algorithms for detecting AI-generated content, but has made no announcements about detecting generated audio and video either.
What’s missing is a common standard for all technology providers to follow, experts said.
Such a standard would be useful elsewhere too: if the European Union’s AI Act is approved in April, it will introduce strict transparency obligations on providers and deployers of AI to label deep fakes and watermark AI-generated content.
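In the absence of a common standard, provenance labeling today can be as basic as attaching metadata to generated files. The sketch below shows that baseline using Pillow’s PNG text chunks; the key names and file paths are made up for illustration, and plain metadata is far weaker than a robust watermark, since re-encoding strips it.

```python
# Minimal sketch: embed a provenance tag in a PNG's metadata with Pillow.
# The "ai_generated" and "generator" keys are illustrative, not a standard;
# real deployments would use a scheme such as C2PA plus an invisible
# watermark that survives re-encoding.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def tag_image(in_path: str, out_path: str, generator: str) -> None:
    img = Image.open(in_path)
    meta = PngInfo()
    meta.add_text("ai_generated", "true")   # declares the image's origin
    meta.add_text("generator", generator)   # records which model produced it
    img.save(out_path, pnginfo=meta)

def read_tags(path: str) -> dict:
    # PNG text chunks; empty if the file carries no such metadata
    return dict(getattr(Image.open(path), "text", {}))

tag_image("render.png", "render_tagged.png", "example-diffusion-model")
print(read_tags("render_tagged.png"))
```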
Impact of the advisory on LLM providers and enterprises
Experts and analysts said the advisory, if not clarified further, could lead to significant loss of business for LLM providers and their customers, while stifling innovation.
“The advisory will put the brakes on the progress in releasing these models in India. It will have a significant impact on the overall environment as a lot of businesses are counting on this technology,” Gartner’s Mishra said.
IDC’s Giri said that the advisory might lead early adopters of the technology to rush to upgrade their applications to ensure compliance.
“Adjustments to release processes, increased transparency, and ongoing monitoring to meet regulatory standards could cause delays and increase operational costs. A stricter examination of AI models may limit innovation and market expansion, potentially resulting in missed opportunities,” Giri said.
Tejasvi Addagada, an IT leader, believes that prioritizing compliance and ethical AI use can build trust with customers and regulators, offering long-term benefits such as enhanced reputation and market differentiation.
Startup exclusion creates room for confusion
The Minister of State for IT’s tweet excluding startups from the new requirements has caused further controversy, with some wondering whether it could result in lawsuits from larger companies alleging anticompetitive practices.
“The exemption of startups from the advisory might raise concerns about competition laws if it gives them an unfair advantage over established companies,” Natarajan said.
While model providers such as OpenAI, Stability AI, Anthropic, Midjourney, and Groq are widely considered to be startups, they do not fit the Indian government’s definition of a startup as set by the Department for Promotion of Industry and Internal Trade (DPIIT), which would require them to be incorporated in India under the Companies Act 2013.
The tweak in policy to exclude startups seems to be an afterthought, Mishra said, since many smaller innovative startups, whose entire business revolves around AI and LLMs, were under significant threat from the original advisory.
Experts expect further clarification from the government after the expiry of the 15-day period the advisory gives LLM providers to file reports on their actions and the status of their models.