Syllabus: GS3/Science & Technology
Context
- The emergence of highly advanced models such as Anthropic’s Mythos, which can autonomously discover and exploit vulnerabilities in critical infrastructure, has made AI governance an urgent global priority.
AI and the Changing Nature of Cybersecurity
- Artificial Intelligence (AI) is rapidly transforming the cybersecurity landscape, creating both unprecedented opportunities and systemic risks.
- It has significantly enhanced cybersecurity capabilities by enabling real-time threat detection, predictive analytics for cyber risks, and automation of defence mechanisms.
- However, the same capabilities can be weaponised. Advanced AI systems can identify zero-day vulnerabilities, execute multi-stage cyberattacks autonomously, and target critical infrastructure such as banking, energy, and telecom.
- AI-driven systems can automate vulnerability discovery, while the rise of agentic AI, which operates with minimal human intervention, makes traditional security frameworks inadequate.
Case of Anthropic’s Mythos
- The Mythos model exemplifies the dual-use dilemma of AI: it can strengthen cyber defences, yet its ability to exploit system weaknesses poses serious threats:
- Potential misuse by non-state actors
- Risk of unauthorised access
- Capability to outmatch human cybersecurity experts
- Such developments have made AI governance an immediate necessity.
Governance Challenges in the AI Era
- Regulatory Gaps: Existing cybersecurity laws are not designed for self-learning AI systems.
- India’s cyber laws struggle to accommodate AI-driven threats, necessitating updated frameworks.
- Lack of Global Consensus: AI risks transcend national boundaries. There is a need for cyber diplomacy and international cooperation, as cyber threats to critical infrastructure have global spillovers.
- Accountability and Transparency (Opacity Problem): AI systems often function as ‘black boxes’, making it difficult to assign responsibility.
- Private Sector Dominance: The development and deployment of powerful AI models are largely controlled by private corporations, raising concerns about accessibility, safety, and accountability in case of misuse.
Challenges Specific To India
- Financial & Critical Infrastructure: AI models like Mythos pose risks to India’s financial systems and critical infrastructure.
- The government has initiated high-level consultations to assess threats.
- Regulatory: Fragmented regulatory frameworks and insufficient preparedness for AI-driven cyber threats.
- Strategic: India’s vast digital ecosystem (UPI, Aadhaar, telecom networks) makes it particularly vulnerable.
- At the same time, its scale positions it as a key stakeholder in global AI governance.
Related Global Efforts in AI, Cybersecurity, and Governance
- Trends in Global Governance: Across jurisdictions, some common trends are visible, such as:
- Shift from voluntary guidelines to binding regulations
- Adoption of risk-based frameworks
- Growing importance of AI safety testing before deployment
- Recognition of dual-use nature of AI in cybersecurity
- Risk-Based Regulatory Model (EU): The EU has taken the lead with the AI Act, which adopts a risk-based classification:
- High-risk AI (critical infrastructure, law enforcement) faces strict compliance
- Emphasis on transparency, accountability, and human oversight.
- Security-Centric Approach (USA): The US seeks to balance national security with innovation:
- Executive Orders on AI safety and cybersecurity
- Collaboration between AI firms and government agencies
- Controlled access to powerful models
- AI Safety and Frontier Models (UK): The UK has established institutions such as the AI Safety Institute:
- Focus on frontier AI risks, including autonomous cyberattacks
- Promotes international cooperation on AI safety standards
- State-Controlled AI Governance (China): China follows a centralised regulatory model:
- Strict controls on AI deployment and data flows
- Integration of AI governance with national security objectives
Multilateral Initiatives
- G7 Hiroshima AI Process: Focuses on safe, secure, and trustworthy AI and encourages voluntary codes of conduct.
- OECD AI Principles: Promote responsible AI use, human rights, and transparency
- United Nations Efforts: Discussions on global AI governance frameworks, with emphasis on inclusive participation of developing nations.
- Cybersecurity-Specific Global Cooperation:
- NATO Cooperative Cyber Defence Centre: Focus on AI-enabled cyber threats
- Global Forum on Cyber Expertise (GFCE): Capacity building for cybersecurity, with increasing emphasis on AI-driven cyber defence collaboration.
India’s Perspective
- Policy: India emphasises the need for secure digital infrastructure and responsible AI deployment, and has taken several steps to address cybersecurity and AI governance:
- National Cyber Security Policy (2013) and subsequent updates
- Digital Personal Data Protection Act (2023)
- NITI Aayog’s National Strategy for AI
Way Forward: Global Governance
- International Cooperation: Establish global norms and treaties for AI use, and promote information sharing on cyber threats.
- Risk-Based Regulation: Categorise AI systems by risk level (low, medium, high) and impose stricter controls on high-risk systems.
- Public-Private Collaboration: Governments and tech companies need to collaborate on secure deployment; initiatives like controlled access (e.g., Mythos testing) can help.
- Strengthening Domestic Frameworks: Update cyber laws to include AI-specific provisions, and build institutional capacity for monitoring and enforcement.
- Inclusion of Developing Countries: India and other emerging economies need to have a seat at the global table, as they are major data providers and AI markets.