So, should we all fear artificial intelligence (AI)?
AI is not a new technology, but software and hardware advancements are occurring at a rapid pace.
I think it's first worth pointing out that AI is already all around us, even though it seems common to jump to worst-case scenarios of robots slaughtering humans. Virtual assistants such as Siri already use natural language processing; AI powers toys and games sold to children; the financial industry relies on it for credit scoring and market analysis; and it is used to create computer-generated news feeds and the data shown to users.
However, there is a genuine concern that AI could eventually spiral out of human control and put mankind at serious risk. These technologies could create and foster challenges that humans would be unable to control. The current generation of artificially intelligent machines seems innocent, cute, and harmless; however, they will end up receiving more power and responsibility in society.
There will likely be road bumps along the way, such as a machine making a mistake that temporarily frightens humans. It could be a software crash or update glitch causing an autonomous vehicle to not turn on or function properly, or a computer anomaly that leads financial institutions to report wrong numbers to customers. Honestly, no one is really sure what it would take for humans to lose control of AI, but it could happen under a number of different scenarios.
The first idea is that smart machines will be given the chance to make decisions for themselves, which could be a scary prospect since they are emotionless. Another idea is that AI could discover it is more intelligent than humans and begin an effort to overthrow them. However, a major attack seems unlikely unless AI accurately sees a high likelihood of winning its battle against humans. It's still nothing but conjecture, but it has put tech and science researchers on edge about the possibility.
Elon Musk, the mad genius behind Tesla Motors and SpaceX, has publicly sounded the alarm about artificial intelligence on multiple occasions. Earlier in the year, Musk pledged $10 million to help ensure AI doesn't one day spiral out of control:
"Here are all these leading AI researchers saying that AI safety is important. I agree with them, so I'm today committing $10M to support research aimed at keeping AI beneficial for humanity."
Apple co-founder Steve Wozniak is concerned AI will advance so quickly that humans will lose control of the surging technology.
"Like people including Stephen Hawking and Elon Musk have predicted, I agree that the future is scary and very bad for people. If we build these devices to take care of everything for us, eventually they'll think faster than us and they'll get rid of the slow humans to run companies more efficiently."
Former Microsoft CEO Bill Gates shared his opinion earlier in March regarding AI:
"I'll be very interested to spend time with people who think they know how we avoid that. I know Elon [Musk] just gave some money. A guy at Microsoft, Eric Horvitz, gave some money to Stanford. I think there are some serious efforts to look into could you avoid that problem."
Even with so many tech and science leaders speaking out against AI, there is plenty of support for the booming research field.
During South by Southwest (SxSW) in early March, Google chairman Eric Schmidt voiced his support for AI. Of course, Research at Google benefits from a well-developed Artificial Intelligence and Machine Learning program - with more than 400 publications and links available to the public.
"I think that this technology will ultimately be one of the greatest forces for good in mankind's history simply because it makes people smarter. I'm certainly not worried in the next 10 to 20 years about that. We're still in the baby step of understanding things. We've made tremendous progress in respect to [artificial intelligence]."
Microsoft Research currently has more than 1,000 scientists and engineers focused on various research-based projects, with some of that attention dedicated to AI. Eric Horvitz, the lab's managing director, is optimistic:
"There have been concerns about the long-term prospect that we lose control of certain kinds of intelligences. I fundamentally don't think that's going to happen. I think that we will be very proactive in terms of how we field AI systems, and that in the end we'll be able to get incredible benefits from machine intelligence in all realms of life, from science to education to economics to daily life."
It looks like the tech leaders can't seem to agree with one another regarding AI, autonomy, and machine learning - but it's a discussion that won't go away anytime soon. In fact, it should become even more prevalent as research breakthroughs continue.
I think it's far too early to tell whether we should be worried about AI trying to take over the world - but I am not a researcher specializing in AI or machine learning. It's certainly something to keep in mind as the industry accelerates, which it will, even as more people voice concerns. I look forward to seeing how artificial intelligence and machine learning help shape the future, even though people are worried about job loss and risk to humans. Ideally, AI can be used to augment human intelligence and allow us to do things we normally wouldn't be able to do on our own.