
Artificial Intelligence: An existential threat to humanity?

Artificial Intelligence: An existential threat to humanity?
(for German click here)
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
The above statement by the Center for AI Safety from May 30, 2023 was signed by many of the world’s leading AI researchers and practitioners. Most notably, the „Godfather of AI“ Geoffrey Hinton and Yoshua Bengio, who received the 2018 Turing Award for their work on neural networks and deep learning.
In addition to numerous other professors and scientists, the CEOs of leading AI companies such as Sam Altman (CEO, OpenAI), Demis Hassabis (CEO, Google DeepMind) or Dario Amodei (CEO, Anthropic) as well as, for example, Microsoft founder Bill Gates, the CTO of Microsoft Kevin Scott or Google’s Senior Vice President of Technology and Society James Manyika, are among the initial signatories.
What is it all about?
At the heart of the debate behind the Declaration is the so-called „alignment problem,“ the challenge of developing AI systems such that their goals and actions are consistent with the goals and values of humanity.
This problem arises because AI is usually designed to maximize a specific goal or perform a specific task assigned to it. However, if the AI is not properly aligned or the goals are not precisely specified, it may perform undesirable or even harmful actions that are not consistent with the intentions of the human developers or society as a whole.
A rather innocuous example of this is an AI whose goal was not to lose in the computer game Tetris. Instead of learning to play Tetris masterfully as intended, the AI simply paused the game and thus achieved the communicated goal of not losing.
The so-called „paper clip thought experiment“ appears more threatening. Imagine an AI system that has superior intelligence to humans (AGI / Artificial General Intelligence) and is asked to produce as many paper clips as possible. Focused on this goal, the AI might come to the conclusion that it is necessary to turn the entire world into a giant paper clip factory. Since humanity would foreseeably oppose such a plan, the AI could make it a necessary side condition of actually achieving its goal to escape human control, e.g., by attempting to end humanity as a whole.
This is, of course, only a thought experiment to illustrate the basic problem. A well-founded derivation of the possible emergence of an AGI threatening the existence of humanity can be found in the essay „How Rogue AIs May Arise“ by Professor Bengio.
At first glance, all of this may still sound like remote science fiction, and indeed, until recently, leading scientists also assumed that such risks could become relevant in the second half of this century at the earliest. However, due to the recent exponential development of advanced AI systems, much shorter timeframes now seem realistic and thus urgent action is required.
“I thought it would happen eventually, but we had plenty of time: 30 to 50 years. I don’t think that any more. […] I wouldn’t rule out a year or two.”
Geoffrey Hinton (The Guardian. 2023)
No one can say for sure how high the risk of such catastrophic events really is, but in a 2022 study, close to 50% of the AI researchers surveyed said they believed the risk of advanced artificial intelligence leading to the extinction of humanity in the long term was at least 10%.
This surely is not an acceptable risk.
So what can we do?
First, despite all the uncertainty, we should take the risks seriously and look for solutions and ways to minimize them.
For a few, it might be an option to work directly as researchers or even in policy to reduce existential risks from AI.
For whom this is not a realistic option, there is also the possibility of getting involved in this area with donations. Already today, independently funded nonprofit organizations play an important role in finding solutions. They conduct independent research, develop best practices, influence policy and inform the public.
Our network of institutes and foundations looking for particularly promising donation opportunities has been active in the field of AI safety for a number of years already. We are in close exchange with organizations such as Open Philanthropy, Longview Philanthropy or Founders Pledge, which have already addressed the relevant risks of AI at an early stage and promoted programs and initiatives aimed at the responsible development and use of AI. These include, for example, the Center for Human-Compatible Artificial Intelligence (CHAI), Redwood Research, and the Center for Security and Emerging Technology (CSET), which are pursuing technical and regulatory approaches in order to reduce the risk of misaligned AI.
In this context, we added the topic area Zukunft bewahren (Preserving the future) to our website in November 2022 to enable tax deductible funding of such initiatives in and from Germany. In exchange with the experts mentioned above, our Donation Fund Zukunft bewahren will specifically focus on promoting high-impact projects and initiatives in this area, which we will describe in detail when we outline the next allocation of our fund resources.
If you would like to learn more about our topic area Preserving the future and our work on responsible AI, or are looking for specific ways to make the most impactful donation in this area, please feel free to contact us.
Weitere Blogposts
Unser Spenden-Fonds: Tierleid verringern in H1/2023
Förderentscheidungen aus unserem Spenden-Fonds: Tierleid verringern aus H1/2023.
Unser Spenden-Fonds: Zukunft bewahren in H1/2023
Förderentscheidungen aus unserem Spenden-Fonds: Zukunft bewahren aus H1/2023 – unser Beitrag zur Verringerung globaler Risiken.
Unser Spenden-Fonds: Klima schützen in H1/2023
Förderentscheidungen aus unserem Spenden-Fonds: Klima schützen aus H1/2023