The recent government advisory asking companies to seek permission before launching generative artificial intelligence models may not have a solid legal foundation, technology lawyers told ET.

According to them, there are questions about the statutory basis and enforcement competence of such an advisory, especially its requirement to obtain government permission for under-testing models. Terms like “bias” and “unreliability” are too vague, and companies may not be able to ensure compliance even with the best intent, they argued.

“It is not clear which legal provision MeitY (Ministry of Electronics and Information Technology) is relying on for this requirement and what it will cite to legally enforce this. There is no definition of ‘unreliable’ in the advisory either,” said Ranjana Adhikari, partner at IndusLaw.

To be sure, IT minister Ashwini Vaishnaw said on Monday that the advisory was not legally binding.

When serious regulatory objectives such as election integrity are at stake, the regulatory framework must have unimpeachable statutory validity, precise definitions, clear obligations and predictable enforcement mechanisms, said Dhruv Garg, a lawyer and tech policy advisor.

“Piecemeal regulation is not the answer. AI regulation is a complex techno-legal issue and must be done through a comprehensive legislative process centred on wide public consultation,” Garg said.

Aaron Solomon, managing partner, Solomon & Co, said: “In our view, the defined meaning of an intermediary would not extend to ChatGPT, Gemini, Krutrim and Perplexity AI, which are genAI technologies, and the exemption granted under the IT Act to intermediaries would not apply to OpenAI, Google, Ola and Perplexity.”

Section 79 of the IT Act provides an exemption to intermediaries for third-party content and was not designed to protect entities that provide content generated by themselves, he reasoned. He was referring to the safe harbour provision under which intermediaries are protected from liability for third-party content.

People+ai, an initiative by EkStep Foundation, is collating views from India’s AI community and startup ecosystem on the recent government advisory. EkStep is an organisation cofounded by Infosys cofounder Nandan Nilekani.

People+ai is “gathering insights from Indian startup founders on their concerns and aspirations regarding the responsible development and deployment of AI in India”, according to a form it created seeking responses from startups.

“Your feedback will be shared with policymakers and the broader AI community to shape future regulations that foster innovation and societal good,” the form said.

Conversational AI platform Haptik’s chief executive Aakrit Vaish told ET that the form went live on Wednesday and that once enough responses come in, he and members of People+ai will meet MeitY officials.

“The biggest question is that of applicability. If a company builds AI products, do they have to submit details of their model to the government, or do fine-tuned models also come under the purview?” Vaish said.

Two of the questions on the form are: What are your short-term concerns regarding AI regulation in India?
What other aspects of AI development and deployment in India would you like the Government of India to support?

ET could not immediately reach People+ai head Tanuj Bhojwani and director of strategy and operations Tanvi Lall.

(Dia Rekhi in Chennai contributed to this article.)