The substantial implications for international peace and security of the development of artificial intelligence (AI) and machine learning (ML) – in terms of beneficial use, malicious use, and potential inadvertent misuse – are growing increasingly clear as the technology advances. Many private businesses and academic institutions recognize the benefits of AI and ML, but fewer are studying the potential harmful use cases. AI and ML offer malicious actors – ranging from criminals to terrorists, oppressive regimes to economic adversaries – the ability to carry out increasingly complex and effective attacks on individuals, groups, businesses, and democratic institutions. Threats range from individually targeted spear phishing and hacking efforts, to misuse of facial recognition technology, to ‘deepfake’ videos of public figures or members of vulnerable communities giving phony or potentially incendiary speeches. At present, research in this decentralized field is progressing faster than policy solutions to address its implications, necessitating increased focus on the part of institutions and governments. While the United Nations sees regular, intense debate on cyber policy, AI and ML have largely been overlooked, and the matter has not yet been meaningfully discussed in the Security Council.
The Capstone Workshop will analyze the UN's experience with threats from modern technologies; research and analyze government policies, incentives, and liabilities with respect to AI and ML; and categorize the main operational, political, and bureaucratic challenges the UN Security Council faces in developing a response to threats posed by AI and ML. With this information, the Capstone will recommend how the UN Security Council could develop common principles for the international community, including a “culture of responsibility,” guidelines for governments and researchers to collaborate on investigating, preventing, and mitigating potential malicious uses of AI, and measures to promote AI safety.