Leading researchers and experts call for the international community to forge an AI treaty in response to the potentially catastrophic risks posed by AI systems, aiming to ensure that AI is developed safely, responsibly, and for the betterment of humanity.

Urging an International AI Treaty: An Open Letter

We call on governments worldwide to respond actively to the potentially catastrophic risks that advanced artificial intelligence (AI) systems pose to humanity, including threats from misuse, systemic risks, and loss of control. We advocate for the development and ratification of an international AI treaty to reduce these risks and to ensure the benefits of AI for all.

Leading experts, including Geoffrey Hinton, Yoshua Bengio, and the CEOs of OpenAI and Google DeepMind, have publicly voiced their concerns about the catastrophic risks posed by AI, and called for the reduction of this risk to be treated as a global priority. Similarly, AI experts have called for a pause on the advancement of AI capabilities. In recent months, world leaders have increasingly drawn attention to the need to ensure that AI is developed safely.¹ Half of AI researchers estimate more than a 10% chance that AI could lead to human extinction or a similarly catastrophic curtailment of humanity's potential. We consider that the gravity of these risks warrants immediate and serious attention.

We believe the central aim of an international AI treaty should be to prevent the unchecked escalation of the capabilities of AI systems while preserving their benefits. For such a treaty, we suggest the following core components:

  • Global Compute Thresholds: Internationally upheld thresholds on the amount of compute used to train any given AI model, with a procedure to lower these over time to account for algorithmic improvements.
  • CERN for AI Safety: A collaborative AI safety laboratory akin to CERN for pooling resources, expertise, and knowledge in the service of AI safety, and acting as a cooperative platform for safe AI development and safety research.
  • Safe APIs: Access to the APIs of safe AI models, with capabilities held within estimated safe limits, to reduce incentives for a dangerous race in AI development.
  • Compliance Commission: An international commission responsible for monitoring treaty compliance.

Crucially, the effectiveness and impact of such a treaty hinge on widespread agreement across the international community. Combined, these measures promote a safer exploration of AI's immense potential by fostering a cooperative, safety-first approach to AI research and development.

Such an AI treaty could play a role similar to that of the International Atomic Energy Agency (IAEA), which was formed to manage nuclear risks and promote the peaceful uses of nuclear energy. Likewise, an AI treaty could not only reduce risks from AI but also ensure that the benefits of AI are accessible to all. The potential of AI extends far beyond our current understanding and should be viewed as a common good. It is essential that this extraordinary resource benefits all of humanity.

Humanity has shown remarkable unity when faced with global threats, as demonstrated by international cooperation on nuclear non-proliferation. We believe the risks posed by AI systems warrant at least as much caution and coordination.

We urge members of the international community to actively engage in discussions around an AI treaty, and strive towards implementing a robust set of international regulations. We advocate for the formation of a working group, with broad international support, to develop a blueprint for such a treaty. This responsibility does not rest solely on a few shoulders, but on the collective strength of the global community. Our future hangs in the balance. We must act now to ensure AI is developed safely, responsibly, and for the betterment of all humanity.

Add your signature

Fill out this form to add your signature below. You will receive a confirmation email to verify your signature.

Signatories

Total confirmed: 497

  • Yoshua Bengio, Professor / Scientific Director, U. Montreal / Mila, Turing Award
  • Bart Selman, Professor, Cornell University, Former President of the Association for the Advancement of Artificial Intelligence (AAAI)
  • Max Tegmark, Professor, Future of Life Institute, MIT Center for Brains, Minds & Machines
  • Gary Marcus, CEO, Center for the Advancement of Trustworthy AI
  • Yi Zeng, Professor, Chinese Academy of Sciences; Founding Director of Center for Long-term AI; Board member for the National Governance Committee of Next Generation AI in China; Member of the UNESCO Ad Hoc Expert Group on AI Ethics; Member of the WHO Expert Group on AI Ethics and Governance for Health; Founding Director of the AI for Sustainable Development Goals Cooperation Network; Founding Director of Defense AI and Arms Control Network; Recently briefed the UN Security Council on AI risks.
  • Victoria Krakovna, Senior Research Scientist, Google DeepMind, Co-founder of the Future of Life Institute
  • Nell Watson, President, European Responsible AI Office; Chartered Fellow of BCS, The Chartered Institute for IT (FBCS); Fellow of the Institution of Analysts and Programmers (FIAP); Fellow of the Institute for Innovation and Knowledge Exchange (FIKE); Fellow of the Royal Society of Arts (FRSA); Fellow of the Royal Statistical Society (FRSS); Fellow of the Chartered Management Institute (FCMI); Fellow of the Linnaean Society (FLS); Certified Ethical Emerging Technologist (CEET); Senior Fellow, The Atlantic Council; Senior Member, IEEE (SMIEEE)
  • Geoffrey Odlum, President, Odlum Global Strategies; Retired senior US diplomat; Director, National Security Council, The White House (G-8 Affairs, 1998); Director, Office of Iraq Affairs, US State Department (2016); Director, Office of Weapons of Mass Destruction Terrorism, US State Department (2013-2016); Senior Advisor, US State Department, Bureau of Energy Diplomacy (2017)
  • Jaan Tallinn, Co-Founder of Skype, Centre for the Study of Existential Risk, Future of Life Institute
  • Claire Boucher, Artist
  • Huw Price, Distinguished Professor Emeritus, University of Bonn; Emeritus Bertrand Russell Professor & Emeritus Fellow of Trinity College, Cambridge; Co-founder, Centre for the Study of Existential Risk; Academic Director, Leverhulme Centre for the Future of Intelligence (2016–21)
  • Luke Muehlhauser, Senior Program Officer, Open Philanthropy; Board Member of Anthropic
  • Daniel Dennett, Professor Emeritus, Tufts University, Erasmus Prize (the Netherlands), American Academy of Arts and Sciences Fellow
  • Priyamvada Natarajan, Joseph S. and Sofia S. Fruton Professor of Astronomy & Physics, Yale University, Fellow - American Academy of Arts and Sciences; American Physical Society; Guggenheim Fellowship
  • Holger Hoos, Alexander von Humboldt Professor of AI, RWTH Aachen University, Leiden University, CLAIRE; ACM, AAAI and EurAI Fellow; Chair of the Board of CLAIRE; Vice-President of EurAI
  • Felicity Reddel, Analyst, AI Governance, The Future Society
  • Toby Ord, Senior Research Fellow, Oxford University
  • Anthony Aguirre, Executive Director, Future of Life Institute, Faggin Professor of the Physics of Information, UC Santa Cruz; Co-founder, Metaculus
  • Andrew Critch, CEO & Research Scientist, Encultured AI & UC Berkeley
  • Daniel Ziegler, ML Researcher, Alignment Research Center Evaluations, Former Research Engineer at OpenAI
  • Ramana Kumar, Independent, Former Senior Research Scientist at DeepMind
  • Mary-Anne Williams, Director, UNSW Business AI Lab, UNSW and Stanford University, Fellow AAAI, ATSE and ACS
  • The Anh Han, Professor of Computer Science and Director of Centre for Digital Innovation, Teesside University
  • Steve Newman, Co-founder of Writely (Google Docs), founder of Scalyr, Independent
  • Jeffrey Ladish, Executive Director, Palisade Research
  • Paul Nemitz, Professor/Principal Advisor, College of Europe/European Commission
  • Balkan Devlen, Director of the Transatlantic Program, Macdonald-Laurier Institute
  • Connor Leahy, CEO, Conjecture
  • Tristan Harris, Executive Director, Center for Humane Technology
  • Matthijs Maas, Senior Research Fellow, AI & Law, Legal Priorities Project | CSER, University of Cambridge
  • Alexis Wellwood, Associate Professor of Philosophy, Linguistics, and Psychology, University of Southern California
  • Robert Braun, Senior researcher, Institute for Advanced Studies, Vienna
  • Marius Hobbhahn, CEO, Apollo Research
  • Katja Grace, Lead Researcher, AI Impacts
  • Paul Salmon, Professor of Human Factors, Centre for Human Factors and Sociotechnical Systems
  • Cornelius Hacking, Diplomat (retired)
  • Malo Bourgon, CEO, Machine Intelligence Research Institute
  • Mark Brakel, Director of Policy, Future of Life Institute (FLI)
  • Marta Krzeminska, Head of Operations, Arkose
  • Raúl Monroy Borja, Professor in Computing, Tecnologico de Monterrey, Former president of Mexican Society for AI; former elected Secretary to Mexican Academy of Computing
  • Alfonso Hing Wan Ngan, Chair Professor in Materials Science and Engineering, University of Hong Kong, International Fellow of Royal Academy of Engineering
  • David Manheim, Head of Research and Policy, Association for Long Term Existence and Resilience, Visiting Lecturer at Technion - Israel Institute of Technology
  • Giuseppe De Giacomo, Professor in Computer Science, University of Oxford, AAAI Fellow, ACM Fellow, EurAI Fellow, ERC AdG PI
  • Yves Moreau, Professor, University of Leuven, Fellow of the International Society for Computational Biology
  • Denis Poussart, Computer Vision and System Laboratory, Scientist Emeritus (retired), Laval University, Canada, Fellow of the Canadian Academy of Engineering
  • Tim Holmes, Artist, Body Psalms; defender of the human body, 1st American artist to show in the Hermitage museum
  • Ryan Kidd, Co-Director, ML Alignment & Theory Scholars (MATS)
  • Joseph Miller, Research Engineer, FAR AI
  • Sören Mindermann, Postdoctoral researcher, Mila, University of Oxford
  • Robi Rahman, Data Scientist, Epoch AI and Stanford University
  • Fredrik Allenmark, Researcher, Ludwig Maximilian University of Munich
  • Xiaohu Zhu, Founder, Center for Safe AGI
  • Azizi Ab Aziz, Associate Professor (Social Artificial Intelligence), Universiti Utara Malaysia
  • Ethne Swartz, Professor, Montclair State University
  • Helena Matute, Professor, Deusto University, Spain
  • Max Reddel, Researcher | Advanced AI, International Center for Future Generations
  • Jaak Tepandi, Professor Emeritus of Knowledge-Based Systems, Tallinn University of Technology
  • Lyantoniette Chua, Founder and Global Convenor, The Ambit
  • Kevin Esvelt, Associate Professor, MIT; co-founder of SecureBio and the SecureDNA Foundation
  • Mario Gibney, Co-Founder & Advisor, AIGS Canada