The threat of human extinction posed by AI is on a level with that of a pandemic or nuclear war.
This warning comes from a broad group of experts, including AI scientists, professors, and tech luminaries, among them senior figures at Google/Alphabet (and DeepMind), OpenAI (the maker of ChatGPT), and the CTO and Chief Science Officer at Microsoft (currently one of the biggest proponents of AI, with Bing and now Copilot).
It also includes a number of authors of go-to textbooks on AI and deep learning, a trio of Turing Award winners, and many, many others. It's quite the heavyweight backing.
The statement, officially issued by the Center for AI Safety, is short and sweet, reading:
"Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war."
This is the latest in a string of recent warnings making a strong case for the tech world to be careful around AI advances, rather than just plowing on, head down, regardless - which very much seems to be the temptation thus far.
Google and Microsoft are both absolutely forging ahead with their respective Bard and Bing AIs as fast as possible, with more concern about falling behind the rival chatbot than any worries about what impact this kind of advancement might have on society at large.
So it's particularly interesting to see prominent execs from those firms signing this statement, although it's one thing to profess concern - and another to act on it, especially when acting might hold back your latest and greatest hope to continue to dominate search (or, in Microsoft's case, to challenge Google's dominance) and the web at large.
Reaction on Twitter has been predictably polarized: this is either very much a concern, or it's alarmist and fear-mongering.
Although in fairness, and at the risk of stating the obvious, much of what happens will boil down to exactly how we use AI. In other words, the danger isn't AI itself, but how we shape and evolve it, and what we do with it.
There are certainly those who argue that putting measures in place now to guide the growth of AI is key to the future, when far more sophisticated incarnations - reaching artificial general intelligence (AGI) - are in play. (Although the definition of AGI itself is controversial, as we've explored elsewhere.)
For us, while it may not be an existential threat as such, the kind of AI being ushered in now - large language models (LLMs) - certainly does hold perils and pitfalls we need to be very careful around. Chief among them is the threat to jobs, which is very real in a corporate world where profit and shareholders, not workers, are generally the prime concern - along with the effect on the creative arena (art, music, and yes, written content too).