Legal challenges
One of the most persistent and contentious issues in Internet governance has been the fixing of liability on “intermediaries” for content hosted by them.
The Shreya Singhal judgment (2015) upheld Section 79 of the IT Act, which grants intermediaries 'safe harbour' protection for content hosted by them, contingent upon their meeting the due diligence requirements outlined in Rule 3(1)(b) of the IT (Intermediary Guidelines) Rules.
However, its application to Generative AI tools remains challenging.
In Christian Louboutin Sas vs Nakul Bajaj and Ors (2018), the Delhi High Court held that safe harbour protection applies solely to “passive” intermediaries, referring to entities functioning as mere conduits or passive transmitters of information.
However, in the context of Large Language Models (LLMs), making a distinction between user-generated and platform-generated content is increasingly challenging.
Additionally, in the case of AI chatbots, liability would arise only once a user reposts the generated information on other platforms; a mere response to a user prompt is not considered dissemination.
Generative AI outputs have already led to legal conflicts in various jurisdictions.
In June 2023, a radio host in the United States filed a lawsuit against OpenAI, alleging that ChatGPT had defamed him.
The ambiguity in classifying GAI tools, whether as intermediaries, conduits, or active creators, will complicate courts' ability to assign liability, particularly in cases involving user reposts.
The landmark K.S. Puttaswamy judgment (2017) by the Supreme Court of India established a strong foundation for privacy jurisprudence in the country, leading to the enactment of the Digital Personal Data Protection (DPDP) Act, 2023.
While traditional data aggregators or consent managers raise privacy concerns during the collection and distribution of personal information, Generative AI introduces a new layer of complexity.
Proposed solutions
First, learning by doing. Consider granting GAI platforms temporary immunity from liability following a sandbox approach.
This approach allows responsible development while gathering data to identify legal issues that could inform future laws and regulations.
Second, data rights and responsibilities. The process of data acquisition for GAI training requires an overhaul.
Developers must prioritise legal compliance by ensuring proper licensing and compensation for the intellectual property used in training models.
Third, licensing challenges. Licensing data for GAI is complex because web data lacks a centralised licensing body similar to copyright societies in the music industry.
A potential solution is the creation of centralised platforms, akin to stock photo websites such as Getty Images, which simplify licensing, streamline access to necessary data for developers and ensure data integrity against historical bias and discrimination.