OpenAI’s New Free GPT-OSS Models Outperform DeepSeek’s R1
OpenAI has launched two new open-weight language models, gpt-oss-120b and gpt-oss-20b, marking its most significant open release since GPT-2 in 2019. Both models outperform DeepSeek’s R1 and are freely available for download on Hugging Face under the permissive Apache 2.0 license.
The larger gpt-oss-120b runs on a single 80GB Nvidia GPU, while the lighter gpt-oss-20b runs on consumer devices with 16GB of memory.
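For developers who want to try the smaller model, a minimal local-inference sketch using the Hugging Face transformers library is shown below. The repo id openai/gpt-oss-20b is taken from the Hugging Face release, and the prompt and generation settings are illustrative.

```python
# Minimal sketch of running gpt-oss-20b locally with Hugging Face transformers.
# Assumes the model is published as "openai/gpt-oss-20b" and the machine
# meets the ~16GB memory requirement mentioned above.
from transformers import pipeline

generate = pipeline(
    "text-generation",
    model="openai/gpt-oss-20b",   # assumed Hugging Face repo id
    device_map="auto",            # place weights on whatever hardware is available
)

messages = [{"role": "user", "content": "Explain mixture-of-experts in two sentences."}]
result = generate(messages, max_new_tokens=128)
print(result[0]["generated_text"][-1]["content"])  # the assistant's reply
```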
The move comes at a time of increasing pressure on U.S. AI firms to open-source more technology, both to compete with Chinese labs and to align with democratic values. The Trump administration recently called for more open AI models to encourage global adoption based on American principles.
A Strategic Shift from OpenAI
This release marks a notable departure from OpenAI’s closed-model strategy in recent years, which helped it build a profitable business offering proprietary APIs. CEO Sam Altman acknowledged this change in direction earlier this year, saying: “I personally think we have been on the wrong side of history here and need to figure out a different open-source strategy.”
In a statement shared with TechCrunch, Altman added, “We are excited for the world to be building on an open AI stack created in the United States, based on democratic values, available for free to all and for wide benefit.”
Capabilities and Performance of the New AI Models
OpenAI says the new models excel at reasoning, structured output, and tool use. They were trained with methods similar to those used for its proprietary o-series models, including reinforcement learning, and use a Mixture-of-Experts (MoE) architecture, which activates only a fraction of the model’s parameters for each token.
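The MoE idea can be shown in a short, self-contained PyTorch sketch: a router scores the experts for each token, and only the top-k experts actually run on it. The layer sizes, expert count, and top-k value below are illustrative placeholders, not gpt-oss’s actual configuration.

```python
# Illustrative token-level MoE routing; not gpt-oss's real configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    def __init__(self, d_model=512, n_experts=8, top_k=2):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(d_model, 4 * d_model),
                nn.GELU(),
                nn.Linear(4 * d_model, d_model),
            )
            for _ in range(n_experts)
        )
        self.router = nn.Linear(d_model, n_experts)  # scores every expert per token
        self.top_k = top_k

    def forward(self, x):  # x: (n_tokens, d_model)
        weights = F.softmax(self.router(x), dim=-1)    # (n_tokens, n_experts)
        topw, topi = weights.topk(self.top_k, dim=-1)  # keep k experts per token
        topw = topw / topw.sum(dim=-1, keepdim=True)   # renormalise kept weights
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = topi[:, k] == e
                if mask.any():  # only the tokens routed to expert e reach it
                    out[mask] += topw[mask, k].unsqueeze(-1) * expert(x[mask])
        return out

tokens = torch.randn(4, 512)     # four token embeddings
print(MoELayer()(tokens).shape)  # torch.Size([4, 512])
```

Because each token touches only its top-k experts, compute per token stays close to that of a much smaller dense model even as total parameter count grows.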
On benchmarks, gpt-oss-120b scored 2622 on Codeforces with tools, outperforming open competitors such as DeepSeek’s R1 but falling short of OpenAI’s own o4-mini. On Humanity’s Last Exam (HLE), gpt-oss-120b and gpt-oss-20b scored 19% and 17.3%, respectively, surpassing major open-weight models but underperforming proprietary ones.
However, hallucination remains a concern. OpenAI found that gpt-oss-120b and gpt-oss-20b hallucinated answers in 49% and 53% of questions, respectively, on its PersonQA benchmark, significantly higher than o4-mini’s 36%.
“This is expected, as smaller models have less world knowledge than larger frontier models and tend to hallucinate more,” the company explained.
Limited Scope of OpenAI’s New AI Models
Both models are text-only and cannot generate images or audio. While they can initiate tool use, such as web search or Python execution, they cannot process multimedia input.
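To make that tool-use flow concrete, here is a hypothetical host-side loop: the model either returns a plain-text answer or a structured tool call, which the host executes before handing the result back. The JSON call shape and the run_model stub are illustrative placeholders, not the actual format the models emit.

```python
# Hypothetical host-side loop for model-initiated tool use. The JSON call
# shape and run_model() are illustrative stubs, not gpt-oss's real protocol.
import json

def web_search(query: str) -> str:
    # Stand-in for a real search API call.
    return f"(stub) top results for {query!r}"

TOOLS = {"web_search": web_search}

def run_model(messages: list[dict]) -> str:
    # Placeholder for an inference call to a locally hosted gpt-oss model:
    # it first requests a tool, then answers in plain text.
    if not any(m["role"] == "tool" for m in messages):
        return '{"tool": "web_search", "arguments": {"query": "gpt-oss benchmarks"}}'
    return "gpt-oss-120b scored 2622 on Codeforces with tools."

def answer(messages: list[dict]) -> str:
    while True:
        reply = run_model(messages)
        try:
            call = json.loads(reply)  # model requested a tool
        except json.JSONDecodeError:
            return reply              # plain text: final answer
        result = TOOLS[call["tool"]](**call["arguments"])
        messages.append({"role": "tool", "content": result})

print(answer([{"role": "user", "content": "How did gpt-oss do on Codeforces?"}]))
```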
OpenAI has not released the training data and defends this decision amid ongoing legal scrutiny over data sourcing. This approach contrasts with fully open-source models from labs like AI2, which publish both weights and training datasets.
OpenAI says it conducted adversarial testing to examine whether the AI models could be fine-tuned for harmful use, including cyberattacks or the development of biological agents. The company concluded that while there may be marginal increases in biological capabilities, the AI models do not reach the internal danger threshold, even after extensive fine-tuning.
Global Competition and What’s Next
The release comes as U.S. AI firms face rising competition from Chinese AI labs such as DeepSeek, Moonshot AI, and Alibaba’s Qwen team, which have made rapid progress with their open-weight models.
OpenAI’s gpt-oss series may now offer the most capable openly available reasoning models, but expectations are high for upcoming releases from DeepSeek (R2) and Meta’s Superintelligence Lab.
To encourage broader safety testing, OpenAI has also launched a Red Teaming Challenge with a $500,000 prize pool, inviting researchers to identify new vulnerabilities.
Availability of the New AI Models
The models are available through Hugging Face and deployment partners including Microsoft, AWS, Cloudflare, and Ollama. Microsoft is also offering a GPU-optimised version of gpt-oss-20b for Windows via the AI Toolkit for Visual Studio Code.
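Of the deployment partners, Ollama offers perhaps the quickest local start. The sketch below uses the Ollama Python client; the model tag gpt-oss:20b is assumed from Ollama’s listing and must be pulled first (ollama pull gpt-oss:20b).

```python
# Sketch of querying the smaller model through the Ollama Python client.
# Assumes the model is published under the tag "gpt-oss:20b".
import ollama

response = ollama.chat(
    model="gpt-oss:20b",
    messages=[{"role": "user", "content": "What license are you released under?"}],
)
print(response["message"]["content"])
```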
OpenAI also released reference implementations in PyTorch and Rust, a new tokeniser (o200k_harmony), and a harmony renderer to help developers integrate the models more easily.
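If the new tokeniser follows the pattern of OpenAI’s earlier encodings, it can be loaded by name through the tiktoken library, as in the sketch below; this assumes a tiktoken release that ships the o200k_harmony encoding.

```python
# Sketch of inspecting the new o200k_harmony tokeniser via tiktoken.
# Assumes a tiktoken version that includes this encoding.
import tiktoken

enc = tiktoken.get_encoding("o200k_harmony")
tokens = enc.encode("gpt-oss-120b runs on a single 80GB Nvidia GPU.")
print(len(tokens), tokens[:8])  # token count and a peek at the ids
print(enc.decode(tokens))       # round-trips back to the original text
```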
With this release, OpenAI re-enters the open-source AI race, not by fielding the largest model, but by signalling a broader shift in its approach to transparency, global competition, and developer trust. Whether the move will ease concerns about its closed strategy or keep pace with rapid developments elsewhere remains unclear.