The CEO of OpenAI, the developer behind the viral sensation ChatGPT, has warned of competing artificial intelligence systems that may have fewer safety restrictions in place.
Sam Altman, the CEO of OpenAI, said in an interview with ABC News last week that there are some things he does worry about given the exponential development of artificial intelligence. One of Altman's worries is that OpenAI won't be the only creator of this type of technology, and that other people will create these systems with fewer safety limits than what OpenAI has implemented. Notably, Altman says that "society" has a limited amount of time to figure out how it will react to such technologies being developed and made available.
As outlined by Futurism, the technology sector is currently experiencing an AI arms race, with companies pouring billions of dollars into research and development to grab the biggest possible slice of the new machine-learning pie. ChatGPT is simply the first AI to be adopted by millions of people, a proof of concept that systems like it are in high demand across a variety of industries.
This extreme demand for automation creates an allure for companies to develop these systems with power and profit at the forefront of their design, rather than safety, regulation, and ethical software development. Building powerful systems quickly and selling them for profit is a hard offer to pass up, which is part of why Altman has decided not to release the technical details of GPT-4, OpenAI's most recent language model.
Notably, OpenAI was originally created to be open-source and technically transparent, but since the release of ChatGPT, the company has done a complete flip, closing off its technical data to the public and citing safety issues with the release of its proprietary information as well as the competitive landscape of the technology.
Essentially, OpenAI has said that it won't reveal the details of its newest language model (GPT-4) because doing so would risk someone replicating its most powerful language model yet, which would cost OpenAI users and, therefore, money. Additionally, replication of OpenAI's language model would risk the creation of a new AI that doesn't have the same safety restrictions as GPT-4, hence Altman's warnings in the ABC News interview.
Despite these problems with the development of AI, Altman has called for more regulation within the growing sector. However, the development of AI is clearly outpacing regulators, something that typically happens when a new technology emerges.