{"id":38552,"date":"2025-03-05T18:58:26","date_gmt":"2025-03-05T13:28:26","guid":{"rendered":"https:\/\/www.nextias.com\/ca\/?p=38552"},"modified":"2025-03-05T18:58:28","modified_gmt":"2025-03-05T13:28:28","slug":"designing-india-ai-safety-institute","status":"publish","type":"post","link":"https:\/\/www.nextias.com\/ca\/current-affairs\/05-03-2025\/designing-india-ai-safety-institute","title":{"rendered":"Designing India\u2019s AI Safety Institute"},"content":{"rendered":"\n<p><strong>Syllabus :GS 3\/Science and Tech&nbsp;<\/strong><\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>In News<\/strong><\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Union Minister Ashwini Vaishnaw announced India will launch an indigenous AI model and establish an AI Safety Institute (AISI) under the<strong> IndiaAI Mission<\/strong> to ensure safe and trusted AI development.<\/li>\n<\/ul>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-background\" style=\"background-color:#ebecf0\"><tbody><tr><td><strong>Global Scenarios\u00a0<\/strong><br>&#8211; Countries like the U.K., U.S., Singapore, and Japan have set up AI Safety Institutes (AISIs) to address AI risks, with a focus on global collaboration and technical understanding.<br>1. U.K.&#8217;s AISI launched the open-source platform \u2018Inspect\u2019 for evaluating AI models.<br>2. U.S.&#8217;s AISI formed an inter-departmental taskforce to address AI risks related to national security and public safety.<br>3. 
Singapore\u2019s AISI focuses on safe model design and rigorous testing.<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>India\u2019s AI Safety Institute<\/strong><\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li>The AISI will operate under the Safe and Trusted Pillar of the IndiaAI Mission and will focus on addressing AI risks.<\/li>\n\n\n\n<li>India\u2019s AISI will collaborate with academics, startups, industry, and government to address India\u2019s socioeconomic, linguistic, and technological challenges.<\/li>\n\n\n\n<li>India\u2019s AISI will develop indigenous tools and frameworks that prioritize responsible AI while ensuring interoperability with global AI safety networks.<\/li>\n\n\n\n<li>The collaboration between MeitY and UNESCO will help identify gaps in AI ethics and development.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Need<\/strong><\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li>The Bletchley Declaration from the U.K. AI Safety Summit highlights global threats such as cybersecurity risks and disinformation.<\/li>\n\n\n\n<li>Startups in India\u2019s vibrant ecosystem, such as Karya, are tackling issues like unrepresentative data and multilingual AI development for inclusivity.<\/li>\n\n\n\n<li>The <strong>Economic Survey 2024-25<\/strong> highlighted that India\u2019s workforce in low-skill and low-value-added services remains vulnerable to AI disruptions.\n<ul class=\"wp-block-list\">\n<li>It recommended creating \u201crobust institutions\u201d to help workers transition to medium- and high-skilled jobs, where AI can augment rather than replace them.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Importance<\/strong><\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li>India\u2019s AI Safety Institute can champion local concerns, such as bias, discrimination, social exclusion, gendered risks, and individual privacy.<\/li>\n\n\n\n<li>It can influence global discussions on AI risks, mitigations, 
red-teaming, and standardization.<\/li>\n\n\n\n<li>It is a key step in creating a standardized AI safety taxonomy for consistent understanding and communication among stakeholders.<\/li>\n\n\n\n<li>India can position itself as a unifying voice for the global majority in AI governance, building on its leadership in the G20 and the Global Partnership on AI (GPAI).<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Suggestions and Way Forward<\/strong><\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li>India\u2019s AISI needs to strike a balance between local relevance and global alignment by adopting international standards while adapting them to India\u2019s context.<\/li>\n\n\n\n<li>India\u2019s AISI should help create a global framework to share information about AI models and their potential impacts, promoting transparency.<\/li>\n\n\n\n<li>India can lead AI safety efforts in the Global South by co-developing AI safety frameworks and evaluation metrics to address local challenges.<\/li>\n<\/ul>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-background\" style=\"background-color:#fff2cc\"><tbody><tr><td><strong>Do you know?<\/strong><br>&#8211; The IndiaAI Mission was launched on March 7, 2024, to enhance India&#8217;s global leadership in AI and ensure its benefits reach all sectors of society.<br>&#8211; The Mission has introduced seven key pillars to strengthen India\u2019s AI ecosystem.<br>&#8211; It emphasizes developing indigenous technical tools, guidelines, frameworks, and standards that address India\u2019s unique challenges and opportunities, including its social, cultural, linguistic, and economic diversity.<\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p><strong>Source: TH<\/strong><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Union Minister Ashwini Vaishnaw announced that India will launch an indigenous AI model and establish an AI Safety Institute (AISI) under the IndiaAI Mission to ensure safe and trusted AI 
development.<\/p>\n","protected":false},"author":15,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[21],"tags":[],"class_list":["post-38552","post","type-post","status-publish","format-standard","hentry","category-current-affairs"],"acf":[],"jetpack_featured_media_url":"","_links":{"self":[{"href":"https:\/\/www.nextias.com\/ca\/wp-json\/wp\/v2\/posts\/38552","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.nextias.com\/ca\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.nextias.com\/ca\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.nextias.com\/ca\/wp-json\/wp\/v2\/users\/15"}],"replies":[{"embeddable":true,"href":"https:\/\/www.nextias.com\/ca\/wp-json\/wp\/v2\/comments?post=38552"}],"version-history":[{"count":1,"href":"https:\/\/www.nextias.com\/ca\/wp-json\/wp\/v2\/posts\/38552\/revisions"}],"predecessor-version":[{"id":38553,"href":"https:\/\/www.nextias.com\/ca\/wp-json\/wp\/v2\/posts\/38552\/revisions\/38553"}],"wp:attachment":[{"href":"https:\/\/www.nextias.com\/ca\/wp-json\/wp\/v2\/media?parent=38552"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.nextias.com\/ca\/wp-json\/wp\/v2\/categories?post=38552"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.nextias.com\/ca\/wp-json\/wp\/v2\/tags?post=38552"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}