Twitter owner and SpaceX and Tesla CEO Elon Musk sat down for an interview in which he discussed a range of topics, including the potential risks that could lead to the downfall of civilization.
The discussion stemmed from a conversation about how Twitter will continue to hold freedom of speech and democracy as the company's highest virtues, in hopes of creating a future everyone can look forward to. Musk explained that civilization as a whole is much more fragile than most people think. According to the Twitter owner, anyone who studies the rise and fall of civilizations across history will find that civilizations can fall quite quickly, adding that at a civilization's peak, its people aren't thinking about how it will fall.
Continuing on that sentiment, Musk was asked how long he thinks current human civilization has left, which led the SpaceX CEO to list factors that impact the overall longevity of the human race. Musk began by saying, "Well, I am seeing a lot of late-stage civilization vibes these days," adding, "there are so many wildcards." Musk says that in the short term, human civilization will need to contend with the financial crisis, geopolitical wildcards, specifically Ukraine and Taiwan, and the holy grail of potential massive problems: artificial intelligence (AI).

Musk says, "It's called the singularity for a reason because you don't know what's going to happen." The tech billionaire goes on to say, "If you go into a black hole, what happens? You don't know. We're on the AI singularity event horizon, circling the black hole." Musk was asked if AI concerns him more than the other previously listed problems, such as geopolitical problems or nuclear war, to which he said after a long pause, "the good news about Russian roulette is five of the barrels aren't loaded".
The Babylon Bee followed up by saying there must be some form of regulation implemented in AI development to prevent such a tragic outcome, and asked Musk what that regulation would look like and how it would be introduced. Musk responded that a small oversight committee should be formed, and that it should include independent players unaffiliated with the big AI developers (OpenAI, Google, Microsoft, etc.).
Notably, this committee should also have representatives from the big AI developers, but its overall goal would be to learn what it can from the emerging industry and implement guidelines where appropriate. Musk says that other industries, such as aviation and automaking, have taken this same approach, and it seems to have worked out well.
Essentially, Musk is calling for a referee to enter the game of AI development, one whose goal is to stop companies from cutting corners in ways that could potentially harm the human race. Lastly, Musk notes that whenever a new technology with the potential to harm humanity has been created, a regulatory body has stepped in to make sure it's safe. AI, he argues, should be no different.
If you are interested in reading more about this topic, check out the link here.