{"id":20089,"date":"2023-12-29T21:30:42","date_gmt":"2023-12-29T16:00:42","guid":{"rendered":"https:\/\/www.nextias.com\/ca\/?p=20089"},"modified":"2023-12-30T17:26:25","modified_gmt":"2023-12-30T11:56:25","slug":"dangers-of-ai","status":"publish","type":"post","link":"https:\/\/www.nextias.com\/ca\/editorial-analysis\/29-12-2023\/dangers-of-ai","title":{"rendered":"Dangers of AI"},"content":{"rendered":"\n<p><strong>Syllabus: GS3\/Developments in Science and Technology<\/strong><\/p>\n\n\n\n<p><strong><span style=\"text-decoration: underline;\">Context<\/span><\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>2023 was perceived by industry leaders and the public alike as a year in which <strong>artificial intelligence had a significant impact on social and economic relations<\/strong>.&nbsp;<\/li>\n<\/ul>\n\n\n\n<p><strong><span style=\"text-decoration: underline;\">About<\/span><\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>This impact was visible due to the <strong>apparent success of large language models, a family of generative models, in solving complex tasks<\/strong>.&nbsp;<\/li>\n\n\n\n<li>The year started with <strong>Microsoft deciding to invest $10 billion in the OpenAI project<\/strong>, as its <strong>ChatGPT <\/strong>became <strong>the fastest-growing application<\/strong>.&nbsp;<\/li>\n\n\n\n<li><strong>Google introduced its chatbot, Bard<\/strong>, while Amazon introduced <strong>Bedrock<\/strong>, giving its customers access to its own family of large language models called <strong>Titan<\/strong>.&nbsp;<\/li>\n<\/ul>\n\n\n\n<p><strong><span style=\"text-decoration: underline;\">Dangers of AI<\/span><\/strong><\/p>\n\n\n\n<p>The benefits of AI, from healthcare to defence, are so large that the <strong>downsides are often ignored.&nbsp;<\/strong><\/p>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-background\" style=\"background-color:#ebecf0\"><tbody><tr><td><span style=\"text-decoration: underline;\"><strong>Do you 
know?<\/strong><br><\/span><strong>&#8211; The AI safety letter,<\/strong> which was signed by more than 2,900 industry experts and academics, called for a <strong>six-month halt on training AI systems more powerful than GPT-4 <\/strong>as AI could prove to be an <strong>existential threat.<\/strong><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Privacy:<\/strong> The<strong> data hunger of AI<\/strong> has implications both for <strong>privacy and for the labour conditions<\/strong> of platform workers.<\/li>\n\n\n\n<li><strong>Surveillance: <\/strong>AI\u2019s opaque workings might affect democratic processes when AI systems are used in public-use cases like <strong>surveillance and policing.<\/strong>\n<ul class=\"wp-block-list\">\n<li>AI, with its <strong>ability to drive face recognition<\/strong> and compare huge data streams, can <strong>give governments 360-degree, 24&#215;7 profiles of all citizens<\/strong>, making dissent against authoritarian regimes more difficult.<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>Deep fakes:<\/strong> By cloning voices and faces, AI can deliver authentic-seeming fake news, scam people, or bypass security.\n<ul class=\"wp-block-list\">\n<li>E.g. 
<strong>A woman in Gurugram was recently scammed of Rs 11 lakh using deep fakes on Skype<\/strong>, with the scammers impersonating senior CBI officials.<\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>Perpetuates biases<\/strong>: Algorithms trained on current data may recommend that <strong>only males get STEM scholarships, and only upper-caste people get bank loans.<\/strong><\/li>\n\n\n\n<li><strong>Nuclear threat:<\/strong> If AI is used to control nuclear missile systems, it could cause extinction, as some experts have warned.<\/li>\n<\/ul>\n\n\n\n<p><strong><span style=\"text-decoration: underline;\">Mitigating the Dangers:<\/span><\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Ethical development and governance:<\/strong> Establishing ethical frameworks and principles for AI development and deployment is crucial to ensure responsible and beneficial advancement.<\/li>\n\n\n\n<li><strong>Transparency and accountability:<\/strong> AI systems should be transparent and accountable for their decisions, allowing for human oversight and intervention.<\/li>\n\n\n\n<li><strong>Education and awareness:<\/strong> Raising public awareness about the potential dangers of AI and educating citizens about their rights and responsibilities in the digital age is essential.<\/li>\n\n\n\n<li><strong>Collaboration and international cooperation:<\/strong> Addressing AI risks effectively requires global collaboration and coordinated efforts among governments, researchers, and the private sector.\n<ul class=\"wp-block-list\">\n<li>Some have proposed the setting up of an<strong> \u201cInternational Agency for Artificial Intelligence\u201d (IAAI),<\/strong> much like the International Atomic Energy Agency (IAEA) that was set up to regulate the uses of nuclear energy.&nbsp;<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n\n\n\n<figure class=\"wp-block-table\"><table class=\"has-background\" style=\"background-color:#fff2cc\"><tbody><tr><td><span style=\"text-decoration: 
underline;\"><strong>Global efforts to regulate AI:<\/strong><br><\/span><strong>&#8211; Europe\u2019s<\/strong> <strong>AI Act:&nbsp;<\/strong>Passed in December 2023, it sets concrete red lines, such as <strong>prohibiting arbitrary and real-time remote biometric identification<\/strong> in public spaces for law enforcement and <strong>banning emotion detection.<\/strong><br>&#8211; In November 2023, the \u2018<strong>Bletchley Declaration\u2019 of the AI Safety Summit <\/strong>called on countries:<br>A. to work together in an inclusive manner to ensure <strong>human-centric, trustworthy and responsible AI;<\/strong><br>B. to ensure <strong>AI that is safe and supports the good of all,<\/strong> through existing international fora and other relevant initiatives; and<br>C. <strong>to promote cooperation to address the broad range of risks<\/strong> posed by AI.<br>&#8211; In July 2023, the <strong>US government <\/strong>announced that it had persuaded companies including OpenAI, Microsoft, Amazon, Anthropic, Google and Meta<strong> to abide by \u201cvoluntary rules\u201d to \u201censure their products are safe\u201d.&nbsp;<\/strong><\/td><\/tr><\/tbody><\/table><\/figure>\n\n\n\n<p><strong><span style=\"text-decoration: underline;\">Way Ahead<\/span><\/strong><\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>It is unlikely that advancements in AI and research into risk mitigation and responsible usage<\/strong> will proceed at the same pace.\n<ul class=\"wp-block-list\">\n<li>Hence, countries should acknowledge this and <strong>develop safeguards fast enough to prevent catastrophic harm.<\/strong><\/li>\n<\/ul>\n<\/li>\n\n\n\n<li><strong>By acknowledging the potential dangers of AI and proactively taking steps<\/strong> to mitigate them, we can ensure that this transformative technology serves humanity and contributes to a safer, more equitable future.<\/li>\n<\/ul>\n\n\n\n<p><strong>Source: <\/strong><a 
href=\"https:\/\/indianexpress.com\/article\/opinion\/columns\/ai-in-2024-dangers-hope-9086241\/\" rel=\"nofollow noopener\" target=\"_blank\"><strong>IE<\/strong><\/a><\/p>\n\n\n\n<div class=\"wp-block-buttons is-layout-flex wp-block-buttons-is-layout-flex\">\n<div class=\"wp-block-button\"><a class=\"wp-block-button__link wp-element-button\" href=\"https:\/\/www.nextias.com\/ca\/wp-content\/uploads\/2023\/12\/Daily-Editorial-Analysis-29-12-2013.pdf\">Download PDF<\/a><\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>Syllabus: GS3\/Developments in Science and Technology Context About Dangers of AI The benefits of AI, from healthcare to defence, are so large that the downsides are often ignored.&nbsp; Do you know?&#8211; The AI safety letter, which was signed by more than 2,900 industry experts and academics, called for a six-month halt on training AI [&hellip;]<\/p>\n","protected":false},"author":4,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"footnotes":""},"categories":[22],"tags":[],"class_list":["post-20089","post","type-post","status-publish","format-standard","hentry","category-editorial-analysis"],"acf":[],"jetpack_featured_media_url":"","_links":{"self":[{"href":"https:\/\/www.nextias.com\/ca\/wp-json\/wp\/v2\/posts\/20089","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.nextias.com\/ca\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.nextias.com\/ca\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.nextias.com\/ca\/wp-json\/wp\/v2\/users\/4"}],"replies":[{"embeddable":true,"href":"https:\/\/www.nextias.com\/ca\/wp-json\/wp\/v2\/comments?post=20089"}],"version-history":[{"count":5,"href":"https:\/\/www.nextias.com\/ca\/wp-json\/wp\/v2\/posts\/20089\/revisions"}],"predecessor-version":[{"id":20121,"href":"https:\/\/www.nextias.com\/ca\/wp-json\/wp\/v2\/posts\/20089\/revisio
ns\/20121"}],"wp:attachment":[{"href":"https:\/\/www.nextias.com\/ca\/wp-json\/wp\/v2\/media?parent=20089"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.nextias.com\/ca\/wp-json\/wp\/v2\/categories?post=20089"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.nextias.com\/ca\/wp-json\/wp\/v2\/tags?post=20089"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}