Status of Regulating Generative Artificial Intelligence 

    Syllabus: GS2/ Government Policies & Interventions/GS3/S&T

    In News

    • Governments across the world are grappling with the regulation of Artificial Intelligence. 
    Artificial Intelligence:
    – It is the science and engineering of making intelligent machines, especially intelligent computer programs. 
    – It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.

    About Generative Artificial Intelligence:

    • Generative AI is a type of artificial intelligence technology that can produce various types of content, including text, imagery, audio and synthetic data.
    • Generative AI models learn the patterns and structure of their input training data and then generate new data that has similar characteristics.
    • Examples: ChatGPT, DALL-E, and Bard are generative AI applications that produce text or images based on user-given prompts or dialogue.

    Benefits 

    • Increasing productivity by automating or speeding up tasks
    • Removing or lowering skill or time barriers for content generation and creative applications
    • Enabling analysis or exploration of complex data 
    • Using it to create synthetic data on which to train and improve other AI systems
    • AI is not expected to replace people but to create new opportunities in various fields. 
    • AI is opening up opportunities that traditional technology could not deliver.
    • Overall, generative AI has the potential to enable efficiency and productivity across multiple industries and applications at scale. 

    Challenges

    • Risks for the education sector:
      • In the education sector, there is little control over how students use generative AI tools; there are no age or content restrictions.
      • There are also hardly any awareness initiatives about the potential risks of using generative AI tools in education. 
      • These tools can have serious long-term negative effects on the critical thinking and creative capacities of students.
    • Potential to escalate existing threats:
      • Generative AI compounds, or can compound, existing online threats, such as the use of deepfakes in disinformation campaigns.
      • This can include simple things like using ChatGPT to make phishing emails sound convincing. 
      • There are multiple ways in which cheaper and more accessible generative AI models can compound issues that we’re still struggling to regulate, especially in cybersecurity and online harms.
      • Such threats can undermine the basic foundations of democracy, including the conduct of fair and transparent elections.
    • Ambiguity with respect to copyright:
      • Most of the output generated by AI tools today is outside copyright protection.
        • There has been demand that copyright protection be extended to companies involved in generative AI. 
        • The U.S. Copyright Office has taken the position that there can be no copyright over AI-generated works that are not authored by a human.
        • In contrast, India recently granted joint authorship for a work generated by AI, but later issued a withdrawal notice citing the controversy surrounding it.

    Global Regulations of Artificial Intelligence

    • Governments across the world are grappling with the regulation of AI.
      • It is the responsibility of global leaders to guarantee the secure and safe deployment of Generative AI. 
      • The greatest advances in the regulation of AI have so far been made in the European Union (EU), Brazil, Canada, Japan and, most recently, China.
        • The EU’s Artificial Intelligence Act takes a risk-based approach to regulating AI technologies, categorising them into unacceptable-risk, high-risk and limited-risk systems, each with corresponding compliance requirements.
        • Japan: the Japanese government’s Integrated Innovation Strategy Promotion Council has framed a set of rules called the “Social Principles of Human-Centric AI”, which lays down the basic principles of an AI-ready society.
        • The first part contains seven social principles that society and the state must respect when dealing with AI: human-centricity; education/literacy; data protection; ensuring safety; fair competition; fairness, accountability and transparency; and innovation.

    Way Ahead

    • As noted above, generative AI promises efficiency and productivity across multiple industries and applications at scale.
      • However, if it is not designed and developed responsibly with appropriate safeguards, it can cause harm and adversely impact society through misuse, the perpetuation of biases, exclusion and discrimination.
    • Therefore, we must responsibly develop AI technology, enforce ethical guidelines, conduct regular audits for fairness, identify and address biases, and protect privacy and security. 
    • Thus, AI-based systems must be regulated, and it is for India to frame its own regulations.
      • To address this specific issue, India requires two things:
        • A comprehensive regulatory framework comprising both:
          • Horizontal regulations that apply across all sectors, and 
          • Vertical regulations that are sector-specific. 
        • More clarity on data protection: the Digital Personal Data Protection (DPDP) Act, 2023 does not apply to personal data that has been made publicly available by the individual to whom it relates.
          • This, in effect, legitimises the web scraping already carried out by AI companies; these are the areas where a more nuanced approach is needed.

    Mains Practice Question 
    [Q] What could be the potential threats posed by generative AI tools to the basic foundations of India’s democracy? Analyse global models and approaches to effectively regulate Artificial Intelligence-based applications.