Nicolas Miailhe on the case for global coordination on AI

In December 1938, the nuclear physicist Leo Szilard wrote to the British Admiralty to say he had given up on his invention, the nuclear chain reaction. At almost the same moment, a research team in Berlin was already at work splitting the uranium atom. Within a few years the Manhattan Project was underway, and by 1945 the first atomic bomb had been dropped on Hiroshima. Four years later, the Soviet Union successfully tested its first atomic weapon. The trajectory of Artificial Intelligence bears a striking resemblance to this nuclear story.
Artificial Intelligence is now widely viewed as a potential global risk, which means the technology has to be managed globally. Just as international institutions have reduced the risk of nuclear war, the risks of AI need to be mitigated through global coordination, so that its potential negative impacts can be contained.
Nicolas Miailhe is a leading expert on AI's global coordination problems and the founder of The Future Society, a global nonprofit organisation whose primary goal is to encourage the responsible adoption of AI and to ensure that governments worldwide identify its risks. A graduate of the Harvard Kennedy School of Government, Nicolas currently advises on AI policy. His main views on AI coordination are summarised below:
- The same AI system that drafts tedious emails can also be used to customise phishing attacks at scale; AI's opportunities and challenges are intertwined. Slowing down the development of AI would be undesirable, so the solution lies in regulation combined with investment.
- To regulate AI, a shared understanding of what it is must first be established. International agreements on AI are only possible once people agree on a common definition.
- Nicolas also argues that resolving disagreements over AI timelines is critically important. Some AI researchers have concluded that human-level general AI could be achieved within the next decade, and they therefore believe people should pay more attention to AI safety and alignment. Others are sceptical: Andrew Ng, the co-founder of Coursera, has said that worrying about AI safety is “like worrying about overpopulation on the planet Mars”.
Source:
https://www.youtube.com/watch?v=cKclc-KThIE