In a new episode of the Lex Fridman Podcast, Max Tegmark sat down to discuss the many reasons behind the push to temporarily halt the development of artificial intelligence.
Max Tegmark is a physicist and AI researcher at the Massachusetts Institute of Technology (MIT) and co-founder of the Future of Life Institute, a nonprofit organization that works to reduce global catastrophic and existential risks facing humanity, with a particular focus on the impact of artificial intelligence. On the podcast, he answered several questions about AI's role in civilization, starting with the simple question of "do you think there is intelligent life out there in the universe?".
Tegmark explains that, based on what humans have discovered within the observable universe, he falls into the minority camp that believes we are the only intelligent life to have created technology as advanced as what we have today. If that view is correct, he says, it places a great deal of responsibility on humanity, as the most advanced form of life in the universe, to "not stuff this up".

The MIT physicist says that through the exponential development of technology, humans run the risk of snuffing out what may be the only spark of intelligent life in the universe, and that one piece of technology is at the forefront of concern not only for him but for many colleagues and peers who share similar views.
Despite Tegmark's doubt about finding another intelligent form of life in the cosmos, the AI researcher explains that he does think humans will "get visited by intelligence quite soon, but I think we will be building that alien intelligence". Following up on Tegmark's answer, Fridman reiterates that this alien intelligence would be separate from natural evolution, as it wouldn't arise through the traditional biological path.
Tegmark further explains that since this alien intelligence would be removed from evolution, it wouldn't possess natural Darwinian traits such as fear of death, care for others, or an instinct for self-preservation.
Additionally, Tegmark says that the space of minds that can be built is much larger than the space of minds that could arise through evolution. Constructing an alien mind allows for a far wider range of possibilities, but it also brings a greater responsibility when it comes to safety. If you are interested in learning more about the dangers of artificial intelligence, listen to the full Lex Fridman Podcast episode with Max Tegmark.
In other AI news, the CEO of Google's DeepMind has warned that the way artificial intelligence is being developed may lead to it becoming self-aware.