Researchers that built a ChatGPT clone for $600 killed it over safety concerns

Stanford researchers have shut down the demo of the ChatGPT-like model they built for $600, citing safety concerns and the practical cost of running it.


It was only a few days ago that a team of Stanford researchers built a clone of OpenAI's ChatGPT for just $600. Now those researchers have taken the demo offline.


The release of OpenAI's ChatGPT put artificial intelligence in the limelight and demonstrated the widespread demand for language models, the underlying technology powering these AI tools. Following the release of ChatGPT, which quickly attracted millions of users, other companies such as Google, Microsoft, Facebook, and Amazon began sharing information about their own language models currently in development. Microsoft quickly hopped on the OpenAI train by investing billions of dollars into the company in return for access to its proprietary GPT language models.

Researchers at Stanford decided to see how difficult and costly it would be to create their own language model by attempting to replicate OpenAI's GPT. As previously reported, the Stanford team took Meta's open-source LLaMA 7B model, which Meta had pre-trained on roughly a trillion tokens of data, and fine-tuned it on a set of instruction-following examples. The result was a near-clone of ChatGPT named Alpaca, which came with some key differences. While Alpaca was trained on a large amount of data, it wasn't optimized to sift through that data to produce an answer quickly, which led the researchers to conclude that Alpaca was much slower than GPT.
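The instruction fine-tuning described above pairs plain-language instructions with desired responses, wrapped in a fixed prompt template. A minimal sketch of that data format is below; the exact template is defined in Stanford's published code, and the `format_example` helper here is illustrative, not part of their repository:

```python
# Sketch of an Alpaca-style instruction-tuning example (simplified;
# the actual template lives in the Stanford Alpaca codebase).

def format_example(instruction: str, response: str) -> str:
    """Build one supervised fine-tuning example: prompt + target response."""
    prompt = (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )
    return prompt + response

example = format_example("Name the capital of Tanzania.", "Dodoma.")
print(example)
```

Fine-tuning then trains the base model to continue the prompt portion with the response portion, which is how a general-purpose pre-trained model like LLaMA 7B is steered toward ChatGPT-style behavior.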

"The original goal of releasing a demo was to disseminate our research in an accessible way We feel that we have mostly achieved this goal, and given the hosting costs and the inadequacies of our content filters, we decided to bring down the demo," said a spokesperson representing Stanford University's Human-Centered Artificial Intelligence institute, to The Register

Speed was just one of the problems with Alpaca, as the researchers noted the bootleg language model would commonly spew misinformation, getting simple questions wrong, such as naming the capital of Tanzania, or claiming that the number 42 is the best seed for AI training. Developers call these errors "hallucinations," and Alpaca was rife with them, which contributed to the Stanford researchers' decision to pull down the demo.

"Hallucination in particular seems to be a common failure mode for Alpaca, even compared to text-davinci-003," the researchers noted.

According to a report from The Register, the original goal outlined by the researchers was to determine how difficult and costly it is to create a powerful language model. The researchers believe they have achieved this goal by demonstrating that powerful AI-powered tools can be created for a very limited amount of money, and while those tools still have several issues, the demonstration holds up nonetheless.

The Register reports that a combination of safety concerns regarding the misinformation issue with Alpaca, achieving the initial goal set out, and the costs involved in hosting the AI resulted in the decision to remove it.

However, for those who still want to build upon Alpaca, its code has been made available for download on GitHub. As the researchers state, the goal of creating a low-budget language model was achieved, demonstrating what can be built with a budget of less than $1,000.

"Alpaca likely contains many other limitations associated with both the underlying language model and the instruction tuning data. However, we believe that the artifact will still be useful to the community, as it provides a relatively lightweight model that serves as a basis to study important deficiencies," the researchers said

NEWS SOURCE: theregister.com

Jak joined the TweakTown team in 2017 and has since reviewed hundreds of new tech products and kept us informed daily on the latest science, space, and artificial intelligence news. Jak's love for science, space, and technology, and, more specifically, PC gaming, began at 10 years old. It was the day his dad showed him how to play Age of Empires on an old Compaq PC. Ever since that day, Jak fell in love with games and the progression of the technology industry in all its forms. Rather than typical FPS titles, Jak holds a very special place in his heart for RTS games.
