help reviewing email to politician
I sent the PauseAI template email to a politician a couple of weeks ago, but it didn't work, so I wanted to see whether this next email sounds any more reasonable and persuasive.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” That statement was published by the Center for AI Safety and signed by multiple Nobel Prize winners; professors of computer science at Harvard, Oxford, and Cambridge; the three most-cited AI researchers of all time; and hundreds more scientists of similar calibre.
The science behind the problem takes two paragraphs to explain.
1. AI is becoming smarter than humans. Today's systems outperform roughly 93% of human programmers on competitive coding benchmarks, have designed working rocket engines, and beat PhD students on graduate-level exams. In one experiment, an AI called MegaSyn generated 40,000 possible new chemical weapons in six hours, some of them similar to VX, one of the most potent nerve agents ever developed.
2. AI has an unfortunate habit of doing things we did not program it to do. For example, researchers tried to teach an AI to walk in a simulation: they gave it a pair of digital legs and tasked it with getting from point A to point B. To their surprise, it simply flopped onto its side and glided from A to B. That shouldn't have been possible, but the AI had exploited bugs the programmers weren't aware of to achieve its goal in a way they never intended. It's a trivial example, but the same failure shows up with real-world stakes: hiring systems that favour men over women, chatbots that encourage users to harm themselves, and models that tell users how to make bombs. The lesson is that we cannot assume smarter-than-human AI will be aligned with human values by default. For more examples, click here.