AI researchers from Stanford and the University of Washington claim to have made significant progress in the development of low-cost AI models. According to a recent research paper, their model, dubbed 's1', was built using a small dataset of 1,000 questions on a compute budget of less than $50.

Stanford's AI Research Lab (Credit: Flickr)
The development was achieved through a process called distillation, in which a smaller model leverages the capabilities of a larger one during training. In this instance, s1 was distilled from Google's Gemini 2.0 Flash Thinking Experimental, using the 'thinking' process (the reasoning trace behind each of its answers) as training data.
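For readers curious what this looks like in practice, below is a minimal sketch of reasoning-trace distillation. It is not the paper's actual code: the student model name, the `traces.jsonl` file of (question, thinking, answer) triples collected from the teacher, and the training settings are all illustrative assumptions, and it uses Hugging Face's transformers and datasets libraries for the supervised fine-tuning step.

```python
import json

from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

STUDENT = "Qwen/Qwen2.5-0.5B"  # hypothetical choice of small student model
TRACES = "traces.jsonl"        # assumed file of teacher (question, thinking, answer) triples

def format_example(ex: dict) -> str:
    # Place the teacher's reasoning trace before the final answer so the
    # student learns to emit the 'thinking' step itself.
    return (
        f"Question: {ex['question']}\n"
        f"Thinking: {ex['thinking']}\n"
        f"Answer: {ex['answer']}"
    )

with open(TRACES) as f:
    texts = [format_example(json.loads(line)) for line in f]

tokenizer = AutoTokenizer.from_pretrained(STUDENT)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(STUDENT)

# Tokenize the formatted traces into a training dataset.
dataset = Dataset.from_dict({"text": texts}).map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=2048),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="s1-style-student", num_train_epochs=3),
    train_dataset=dataset,
    # mlm=False gives the standard next-token (causal LM) objective,
    # with labels copied from the inputs by the collator.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The key design point, under these assumptions, is that the student is trained on the teacher's full reasoning traces rather than on final answers alone, which is what lets a tiny dataset and budget transfer so much capability.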
Google's terms of service prohibit using Gemini's API to develop models that compete with its own, leaving s1 in something of a legal gray area; Google has not commented publicly on the development. The s1 model reportedly rivals OpenAI's o1 and DeepSeek's r1 on coding and mathematics benchmarks. While it does not surpass the industry-leading models, it comes surprisingly close given its budget.
To put things in perspective, s1 won't shatter markets the way DeepSeek's r1 did. However, it does have strong implications for AI firms' business models: ultra-low-cost training shows that capable models can be developed without billions of dollars in compute, suggesting that the 'moat' between smaller players and the giants may be starting to narrow.