Skip to content
Tiatra, LLCTiatra, LLC
Tiatra, LLC
Information Technology Solutions for Washington, DC Government Agencies
  • Home
  • About Us
  • Services
    • IT Engineering and Support
    • Software Development
    • Information Assurance and Testing
    • Project and Program Management
  • Clients & Partners
  • Careers
  • News
  • Contact
 
  • Home
  • About Us
  • Services
    • IT Engineering and Support
    • Software Development
    • Information Assurance and Testing
    • Project and Program Management
  • Clients & Partners
  • Careers
  • News
  • Contact

India’s advisory on LLM usage causes consternation

India’s Ministry of Electronics and Information Technology (MeitY) has caused consternation with its stern reminder to makers and users of large language models (LLMs) of their obligations under the country’s IT Act, after Google’s Gemini model was prompted to make derogatory remarks about Indian Prime Minister Narendra Modi.

The ministry’s reaction, in the form of an advisory issued Friday, has attracted criticism from India’s IT sector because of the restrictions it places on innovation and the compliance risk it places on some enterprises.

The advisory, obtained by The Register, builds on an earlier one issued in December, reminding organizations of the law and going on to impose additional restrictions. Notably, it requires all intermediaries and platforms to ensure that their systems — whether using generative AI or not — do not permit bias or discrimination or threaten the integrity of the electoral process. It also requires that LLMs that are unreliable or still under test only be made available in Indian Internet with explicit permission from the government, and only be deployed accompanied by a warning of their unreliability.

It also recommends that AI-generated materials that could be used for misinformation or deep fakes, whether text, audio, image, or video, be watermarked to identify their nature and origin, and reiterates existing rules on digital media ethics.

Numerous IT vendors are likely to be affected by the advisory, including cloud service providers such as Oracle, Amazon Web Services (AWS), and IBM; software vendors such as Databricks and Salesforce; model service providers (mostly startups) such as OpenAI, Anthropic, Stability AI, and Cohere, along with social platforms such as Meta.

Email queries sent to the IT ministry seeking more clarity on the government’s planned framework for LLM regulation went unanswered.

Lack of clarity and absence of a defined framework

The lack of clarity in the advisory saw many from the technology sector take to platforms such as X to provide their take on the issue, including Minister of State for IT Rajeev Chandrasekhar, who was forced to clarify in a tweet on Monday that the requirement to seek permission to deploy LLMs is “only for large platforms and will not apply to startups.”

But that clarification is not enough for some analysts.

“The process of granting permission is not clear and what vendors need to do to get the permission is unclear as well. Are there test cases they have to pass, or assurances given on level of testing and support?” said Pareekh Jain, principal analyst with Jain Consulting.

As for the requirement that unreliable models be accompanied by a warning,  Google, Microsoft, and Meta already have that covered. Google’s FAQ page for Gemini clearly states it will get things wrong and invites users to report responses that need correction. Similarly, ChatGPT’s FAQ page also warns it may provide incorrect responses and invites users to report them.

Can LLMS be free from bias?

The advisory also calls on LLM providers to ensure that their models are free from any bias or discrimination, a tall order according to analysts.

“There is always a possibility of some bias. While bias is not anticipated, it cannot be disregarded that the possibility exists, regardless of its magnitude,” and that would make the requirement difficult to comply with, said DD Mishra, senior analyst and director with Gartner.

Venkatesh Natarajan, former chief digital officer of Ashok Leyland, said that achieving a completely unbiased model is challenging due to factors such as data biases and inherent limitations of AI algorithms.

“While hyperscalers can implement measures to mitigate bias, claiming absolute neutrality may not be feasible. This could expose them to legal risks, especially if their models inadvertently perpetuate biases or discrimination,” the former CDO explained.

While the hyperscalers and other model providers cannot be ensure the absence of any kind of bias in their models, IDC analyst Deepika Giri said they can make efforts to offer more transparency around their bias-mitigation efforts.

And Giri said, they should focus on using good-quality training data.

Email queries sent to Microsoft, AWS, Oracle and other model providers concerning the advisory went unanswered.

Making detecting AI-generated content easier

The advisory’s recommendation that LLM providers watermark all generated content that could be used for deception may also prove problematic.

Meta is developing tools to identify images produced by generative AI at scale across its social media platforms — Facebook, Instagram, and Threads — but has no such tools for detecting generated audio and video. Google, too, has its own algorithms for detecting AI-generated content but has not made any announcements on this front.

What’s missing is a common standard for all technology providers to follow, experts said.

Such a standard would be useful elsewhere too: If the European Union’s AI Act is approved in April then it will introduce strict transparency obligations on providers and deployers of AI to label deep fakes and watermark AI-generated content.

Impact of the advisory on LLM providers and enterprises

 

Experts and analysts said the advisory, if not clarified further, could lead to significant loss of business for LLM providers and their customers, while stifling innovation.

“The advisory will put the brakes on the progress in releasing these models in India. It will have a significant impact on the overall environment as a lot of businesses are counting on this technology,” Gartner’s Mishra said.

IDC’s Giri said that the advisory might lead early adopters of the technology to rush to upgrade their applications to ensure adherence to the advisory.

“Adjustments to release processes, increased transparency, and ongoing monitoring to meet regulatory standards could cause delays and increase operational costs. A stricter examination of AI models may limit innovation and market expansion, potentially resulting in missed opportunities,” Giri said.

Tejasvi Addagada, an IT leader, believes that prioritizing compliance and ethical AI use can build trust with customers and regulators, offering long-term benefits such as enhanced reputation and market differentiation.

Startup exclusion creates room for confusion

The Minister of State for IT’s tweet excluding startups from the new requirements has caused further controversy, with some wondering whether it could result in lawsuits from larger companies alleging anticompetitive practices.

“The exemption of startups from the advisory might raise concerns about competition laws if it gives them an unfair advantage over established companies,” Natarajan said.

While model providers such as OpenAI, Stability AI, Anthropic, Midjourney, and Groq, are widely considered to be startups, these companies do not fit the Indian government’s definition of startups as set by the Department for Promotion of Industry and Internal Trade (DPIIT), which would require them to incorporate in India under the Companies Act 2013.

The tweak in policy to exclude startups seems to be an afterthought, Mishra said, as many smaller innovative startups are also under significant threat as their entire business revolves around AI and LLMs.

Experts expect further clarification from the government after the expiry of the 15-day period the advisory gives LLM providers to file reports on their actions and the status of their models.

Artificial Intelligence, Generative AI, Regulation
Read More from This Article: India’s advisory on LLM usage causes consternation
Source: News

Category: NewsMarch 5, 2024
Tags: art

Post navigation

PreviousPrevious post:The change management Informatica needed to overhaul its business modelNextNext post:DXにおける黒魔術とその取得方法

Related posts

휴먼컨설팅그룹, HR 솔루션 ‘휴넬’ 업그레이드 발표
May 9, 2025
Epicor expands AI offerings, launches new green initiative
May 9, 2025
MS도 합류··· 구글의 A2A 프로토콜, AI 에이전트 분야의 공용어 될까?
May 9, 2025
오픈AI, 아시아 4국에 데이터 레지던시 도입··· 한국 기업 데이터는 한국 서버에 저장
May 9, 2025
SAS supercharges Viya platform with AI agents, copilots, and synthetic data tools
May 8, 2025
IBM aims to set industry standard for enterprise AI with ITBench SaaS launch
May 8, 2025
Recent Posts
  • 휴먼컨설팅그룹, HR 솔루션 ‘휴넬’ 업그레이드 발표
  • Epicor expands AI offerings, launches new green initiative
  • MS도 합류··· 구글의 A2A 프로토콜, AI 에이전트 분야의 공용어 될까?
  • 오픈AI, 아시아 4국에 데이터 레지던시 도입··· 한국 기업 데이터는 한국 서버에 저장
  • SAS supercharges Viya platform with AI agents, copilots, and synthetic data tools
Recent Comments
    Archives
    • May 2025
    • April 2025
    • March 2025
    • February 2025
    • January 2025
    • December 2024
    • November 2024
    • October 2024
    • September 2024
    • August 2024
    • July 2024
    • June 2024
    • May 2024
    • April 2024
    • March 2024
    • February 2024
    • January 2024
    • December 2023
    • November 2023
    • October 2023
    • September 2023
    • August 2023
    • July 2023
    • June 2023
    • May 2023
    • April 2023
    • March 2023
    • February 2023
    • January 2023
    • December 2022
    • November 2022
    • October 2022
    • September 2022
    • August 2022
    • July 2022
    • June 2022
    • May 2022
    • April 2022
    • March 2022
    • February 2022
    • January 2022
    • December 2021
    • November 2021
    • October 2021
    • September 2021
    • August 2021
    • July 2021
    • June 2021
    • May 2021
    • April 2021
    • March 2021
    • February 2021
    • January 2021
    • December 2020
    • November 2020
    • October 2020
    • September 2020
    • August 2020
    • July 2020
    • June 2020
    • May 2020
    • April 2020
    • January 2020
    • December 2019
    • November 2019
    • October 2019
    • September 2019
    • August 2019
    • July 2019
    • June 2019
    • May 2019
    • April 2019
    • March 2019
    • February 2019
    • January 2019
    • December 2018
    • November 2018
    • October 2018
    • September 2018
    • August 2018
    • July 2018
    • June 2018
    • May 2018
    • April 2018
    • March 2018
    • February 2018
    • January 2018
    • December 2017
    • November 2017
    • October 2017
    • September 2017
    • August 2017
    • July 2017
    • June 2017
    • May 2017
    • April 2017
    • March 2017
    • February 2017
    • January 2017
    Categories
    • News
    Meta
    • Log in
    • Entries feed
    • Comments feed
    • WordPress.org
    Tiatra LLC.

    Tiatra, LLC, based in the Washington, DC metropolitan area, proudly serves federal government agencies, organizations that work with the government and other commercial businesses and organizations. Tiatra specializes in a broad range of information technology (IT) development and management services incorporating solid engineering, attention to client needs, and meeting or exceeding any security parameters required. Our small yet innovative company is structured with a full complement of the necessary technical experts, working with hands-on management, to provide a high level of service and competitive pricing for your systems and engineering requirements.

    Find us on:

    FacebookTwitterLinkedin

    Submitclear

    Tiatra, LLC
    Copyright 2016. All rights reserved.