Digital Jurisprudence in India, in an AI Era


    Syllabus: GS3/IT & Computers; Artificial Intelligence

    • While Generative AI is a transformative force with the power to revolutionise society in ground-breaking ways, existing legal frameworks and judicial precedents designed for a pre-AI world may struggle to govern this rapidly evolving technology effectively.
    • Generative AI (GenAI) refers to AI systems capable of creating new content, whether text, images, or code. These systems have gained prominence for their ability to generate human-like content.
    • This capability is driven by advancements in Large Language Models (LLMs), the technology underlying tools such as ChatGPT.
    • Their potential impact is staggering, with predictions that LLMs alone could contribute trillions of dollars annually to the global economy.
    • Potential applications include revenue generation; blogging and outreach through efficient communication; logo and image creation using DALL·E; and copywriting and coding, where tools such as GitHub Copilot and ChatGPT can generate code and improve developer productivity.
    • Generative AI can also generate synthetic data, summarise information, simplify complex queries, monitor and review tasks, and create complex designs, among many other uses.
    • Bad actors can exploit GenAI to create artificial content indistinguishable from the genuine, leading to misinformation, disinformation, and security breaches.
      • Cloned AI voices can circumvent voice-based authentication measures, and AI-generated deepfakes can disrupt elections.
    • US vs. India: In the US, copyright requires human authorship, meaning purely AI-generated output lacks copyright protection.
      • India, however, has taken a different stance, granting joint authorship to a work generated by AI, which adds legal complexity.
    • Plagiarism and Copyright Infringement: GenAI models can mimic human-created content, potentially raising copyright infringement issues.
    • Essential Inputs: If a few companies control essential inputs for GenAI (such as data), they could distort competition in GenAI markets.
    • Data Dominance: Companies controlling foundational data could wield significant influence over economic activity driven by GenAI.
    • Malicious Use: GenAI models can be misused for disinformation campaigns, deepfakes, and text or voice cloning.
    • Cybersecurity Risks: GenAI’s sophistication can lead to novel attack vectors and data breaches.
    • Inherited Biases: GenAI can perpetuate biases present in training data, leading to biased outputs.
    • Privacy and Security: Protecting personal data and preventing privacy breaches are critical concerns.
    • Misinformation: GenAI could inadvertently propagate misinformation.
    • Lack of Jurisprudence: The legal framework around GenAI remains unclear, demanding comprehensive re-evaluation.
    • Fair Competition: Ensuring fair competition and preventing antitrust violations is crucial.
    • The legal framework around GenAI is still evolving; a comprehensive re-evaluation of existing digital jurisprudence is necessary to address the challenges it poses.
    • Legal Frameworks: The rapid evolution of Generative AI poses a challenge to existing legal frameworks and judicial precedents designed for a pre-AI world.
    • Information Technology Act, 2000: Provides legal recognition for electronic records and signatures.
      • Addresses authentication, electronic governance, and regulation of certifying authorities.
    • Safe Harbour and Liability Fixation: The landmark Shreya Singhal judgement upheld Section 79 of the IT Act, granting intermediaries ‘safe harbour’ protection from liability for content hosted on their platforms.
      • However, applying this to Generative AI tools remains challenging.
      • Large Language Models (LLMs) blur the distinction between user-generated and platform-generated content, complicating liability assignment.
    • Copyright Conundrum: Generative AI outputs have led to legal conflicts globally. In the US, a radio host sued OpenAI, alleging defamation by ChatGPT.
      • Whether GenAI tools are classified as intermediaries, conduits, or active creators affects how liability is determined, especially when users repost AI-generated content.
    • Privacy and Digital Rights: The K.S. Puttaswamy judgement (2017) laid the foundation for privacy jurisprudence in India.
      • Balancing AI’s transformative potential with privacy rights remains a critical concern.
    • Patent Law and AI Creations: The convergence of AI and patent law raises complex questions about patent-eligibility principles and the patentability of AI-generated inventions, which need examination.
    • As India navigates the AI era, a comprehensive re-evaluation of digital jurisprudence is essential to address the unique challenges posed by Generative AI.
    • Generative AI holds immense promise, but responsible development and regulation are imperative. As GenAI continues to evolve, policymakers, scientists, and industry stakeholders must collaborate to strike the right balance. Effective governance, transparency, and ethical guidelines are essential to harness its potential while safeguarding against misuse.
    • Digital jurisprudence must adapt swiftly to AI advancements, striking a balance between innovation and legal safeguards. As India embraces digital transformation, robust legal frameworks are essential to navigate this evolving landscape.
    • Proposed regulations, such as requiring digital assistants to self-identify as bots and criminalising fake media, fall short. A more comprehensive approach is needed to ensure accountability and mitigate risks.
    • Policymakers and scientists must balance innovation and safety, ensuring that GenAI benefits society while minimising harm.
    Daily Mains Practice Question
    [Q] How can the existing legal frameworks and judicial precedents in India be effectively adapted to govern the rapidly evolving technology of Generative AI, especially concerning issues related to liability, copyright, and data privacy?