India moves to regulate AI tools, experts say it goes too far: Story in 5 points

MeitY has announced that AI models or programs that haven't been thoroughly tested or aren't reliable should only be available on the internet in India if the government has given its approval. Industry experts and tech leaders, however, are not too happy with the advisory.

Representational Image generated by AI

In Short

  • India's IT Ministry requires AI tools to get government approval before launch.
  • Rule applies to 'significant' platforms, not startups, says IT Minister.
  • Tech industry leaders criticise new advisory as hindrance to AI progress.

The Ministry of Electronics and Information Technology on Friday announced that platforms in India that are testing or training an artificial intelligence (AI) tool will need to seek permission from the government before launching the product. “The use of under-testing / unreliable Artificial Intelligence model(s) /LLM/Generative/AI, software(s) or algorithm(s) and its availability to the users on Indian Internet must be done so with explicit permission of the Government of India," the advisory reads.


On Monday, Rajeev Chandrasekhar, Union Minister of State for IT, clarified that this rule applies only to “significant” platforms and not to startups in India. “Advisory is aimed at the Significant platforms and permission seeking from Meity is only for large platforms and will not apply to startups,” he said in a post on X. Chandrasekhar did not clarify what he meant by “significant platforms”.

The advisory falls under the new IT rules. Here is everything the new AI regulation advisory says, and how experts have reacted, in 5 points.

--The latest advisory on AI states that AI models or programs that haven't been thoroughly tested or aren't reliable, such as Large Language Models (LLMs) or Generative AI, should only be made available on the internet in India if the government has given its approval. The advisory says that these tools can be “deployed only after appropriately labeling the possible and inherent fallibility or unreliability of the output generated. Further, ‘consent popup’ mechanism may be used to explicitly inform the users about the possible inherent fallibility or unreliability of output generated”.

--In a statement to ANI, Chandrasekhar said that if an AI platform is "under testing or unreliable, it does not absolve them from the consequences of the law, especially criminal law". "So we advise that any such platform must have a label that says it is under testing. And most importantly, must inform the consumer through a consent mechanism that it may output erroneous and unreliable information," he said. The minister used Gemini AI’s recent issue as an example and said that the advisory has been released so that anything that is in “violation of the law by outputting unlawful content, is not repeated”.

--The minister also believes that the advisory will benefit the platforms. He told ANI that the new IT rule will help "platforms to be a lot more disciplined about taking their AI models and platforms from the lab directly to the market". "And we do not want that to happen without sufficient guardrails, information and disclaimers in place, so that consumers know what is unreliable before they start using it," he added.

--In a follow-up post to the advisory, the Union IT minister has also clarified that the new rule applies to “significant” platforms and not to startups. “Process of seeking permission, labelling and consent-based disclosure to user about untested platforms is an insurance policy to platforms who can otherwise be sued by consumers,” Chandrasekhar added in his post on X.


--Many experts and tech leaders in the industry, however, are not too happy with the advisory. Bindu Reddy, CEO of Abacus AI who has formerly worked with Amazon and Google, says that with the new advisory “India just kissed its future goodbye!”.

Thilak Rao, creator of Private LLM, reacted to the advisory saying, “What a way to put the brakes on AI in India”.

Nikhil Pahwa, founder of Medianama, has gone so far as to call the rules “illegal”.

Chandrasekhar recently announced that the government is working on an AI regulation framework, set to be out in June or July 2024. The aim is to use AI for economic progress while managing any potential dangers. Plans to release the framework have been on hold for some time, with the IT Minister initially planning to unveil the first draft in May 2023.

Published By:
Nandini Yadav
Published On:
Mar 4, 2024