DeepSeek R1: Open-Source AI Reasoning Model That Beats OpenAI’s o1
DeepSeek released its V3 model last month. The company has now unveiled its reasoning model, DeepSeek R1. DeepSeek claims it not only matches OpenAI’s o1 model but also outperforms it, particularly on math-related questions. Better still, the R1 model is open-source, free to use, and can even run locally. Let’s explore whether R1 is really that good.
What is DeepSeek R1?
DeepSeek R1 is a reasoning model, meaning it doesn’t simply provide the first answer it finds. Instead, it “thinks” through problems step by step, taking seconds or even minutes to reach a solution. This deliberate chain-of-thought process makes it far more accurate than traditional AI models and particularly useful in areas like math, physics, and coding, where reasoning is crucial.
DeepSeek achieves this reasoning capability through a combination of Reinforcement Learning (RL) and Supervised Fine-Tuning (SFT). Here’s what these two terms mean:
- Reinforcement Learning (RL): the model learns by trial and error, receiving rewards for correct and well-structured answers rather than being shown labeled examples to imitate.
- Supervised Fine-Tuning (SFT): the model is trained on curated example responses, which teaches it to produce readable, consistent output.

Initially, DeepSeek relied solely on Reinforcement Learning without fine-tuning. This “DeepSeek R1 Zero” phase demonstrated impressive reasoning abilities, including self-verification, reflection, and generating long chains of thought. However, it faced challenges such as poor readability, repetition, and language mixing. To address these issues, DeepSeek combined RL with Supervised Fine-Tuning. This dual approach enables the model to refine its reasoning, learn from past mistakes, and deliver consistently better results. More importantly, this is an open-source model under the MIT License.
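To make the RL stage concrete, DeepSeek has described using simple rule-based rewards for R1 Zero: one signal for whether the final answer is correct, and one for whether the model wraps its reasoning in think-tags before answering. Here is a minimal sketch of that idea; the function names and scoring are my own simplification, not DeepSeek’s actual training code:

```python
import re

def format_reward(completion: str) -> float:
    """1.0 if the completion wraps its reasoning in <think>...</think>, else 0.0."""
    return 1.0 if re.search(r"<think>.+?</think>", completion, re.S) else 0.0

def accuracy_reward(completion: str, reference: str) -> float:
    """1.0 if the text after the reasoning block contains the reference answer."""
    answer = completion.split("</think>")[-1]
    return 1.0 if reference in answer else 0.0

def total_reward(completion: str, reference: str) -> float:
    # In RL training, higher-reward completions are reinforced.
    return accuracy_reward(completion, reference) + format_reward(completion)

demo = "<think>2 + 2 means adding two and two.</think> The answer is 4."
print(total_reward(demo, "4"))  # 2.0
```

The appeal of rule-based rewards is that they are cheap to compute and hard to game compared with a learned reward model, which is part of why pure RL got R1 Zero as far as it did.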
The Numbers Behind DeepSeek R1
DeepSeek R1 boasts a massive 671 billion parameters. Think of parameters as the brain cells an AI uses to learn from its training data. The more parameters a model has, the more detailed and nuanced its understanding. To put this into perspective, while OpenAI hasn’t disclosed the parameters for o1, experts estimate it at around 200 billion, making R1 significantly larger and potentially more powerful.
Despite its size, R1 only activates 37 billion parameters per token during processing. DeepSeek says this is done to keep the model efficient without compromising reasoning capabilities.
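This “activate only a fraction per token” behavior comes from a Mixture-of-Experts (MoE) architecture: a router scores many expert sub-networks and only the top few actually run for each token. The toy sketch below illustrates generic top-k routing with made-up sizes; DeepSeek’s actual router (DeepSeekMoE) is more sophisticated, and 8 experts stands in for the real 671B/37B split:

```python
import numpy as np

rng = np.random.default_rng(0)
n_experts, top_k, d = 8, 2, 16  # toy sizes, not DeepSeek's real configuration
router_w = rng.standard_normal((d, n_experts))
experts = [rng.standard_normal((d, d)) for _ in range(n_experts)]

def moe_forward(x: np.ndarray) -> np.ndarray:
    logits = x @ router_w                 # router score for each expert
    top = np.argsort(logits)[-top_k:]     # indices of the top-k experts
    w = np.exp(logits[top])
    w = w / w.sum()                       # softmax over the chosen experts only
    # Only the chosen experts compute anything; the rest stay idle,
    # which is why per-token cost is far below total parameter count.
    return sum(wi * (x @ experts[i]) for wi, i in zip(w, top))

token = rng.standard_normal(d)
print(moe_forward(token).shape)  # (16,)
```

With 2 of 8 experts active, only about a quarter of the expert parameters are touched per token, which is the same trick that lets R1 run 671B-parameter weights at a 37B-parameter per-token cost.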

The R1 model is built with the DeepSeek V3 model as its base, so the architecture and headline numbers are largely the same: a Mixture-of-Experts design with 671 billion total parameters, 37 billion activated per token, and a 128K-token context window.
How Does R1 Compare to OpenAI’s o1?
When it comes to benchmarks, DeepSeek R1 is on par with OpenAI’s o1 model and even slightly surpasses it in areas like math. On math benchmarks like AIME, it scored 79.8%, slightly better than o1’s 79.2%. For programming tasks on Codeforces, it outperformed 96.3% of human programmers, showing it’s a serious contender. However, it’s slightly behind o1 in coding benchmarks.
For developers, the model is much cheaper to integrate into apps. While the o1 model costs $15 per million input tokens and $60 per million output tokens, R1 costs just $0.14 per million input tokens (cache hit), $0.55 per million input tokens (cache miss), and $2.19 per million output tokens, making it roughly 90%–95% cheaper.
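To see what that gap means in practice, here is the arithmetic on a hypothetical workload (prices as quoted above; the exact savings percentage depends on your input/output mix and cache-hit rate):

```python
# Prices in USD per 1M tokens, as quoted above.
o1 = {"input": 15.00, "output": 60.00}
r1 = {"input_hit": 0.14, "input_miss": 0.55, "output": 2.19}

# Hypothetical workload: 2M input tokens (all cache misses) + 1M output tokens.
o1_cost = 2 * o1["input"] + 1 * o1["output"]        # 30.00 + 60.00 = $90.00
r1_cost = 2 * r1["input_miss"] + 1 * r1["output"]   #  1.10 +  2.19 = $3.29
savings = 1 - r1_cost / o1_cost

print(f"o1: ${o1_cost:.2f}, R1: ${r1_cost:.2f}, savings: {savings:.0%}")
```

For this particular mix, R1 comes out around 96% cheaper; workloads with more cache hits save even more.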

Another standout feature of R1 is that it shows its entire thought process during reasoning, unlike o1, which is often vague about how it arrives at solutions.
Distilled Versions for Local Use
DeepSeek has also released distilled models ranging from 1.5 billion to 70 billion parameters. These smaller models retain much of R1’s reasoning power but are lightweight enough to run even on a laptop.
Distilled Models:
- DeepSeek-R1-Distill-Qwen-1.5B
- DeepSeek-R1-Distill-Qwen-7B
- DeepSeek-R1-Distill-Llama-8B
- DeepSeek-R1-Distill-Qwen-14B
- DeepSeek-R1-Distill-Qwen-32B
- DeepSeek-R1-Distill-Llama-70B
These smaller models make it easy to test advanced AI capabilities locally without needing expensive servers. For example, the 1.5B and 7B models can run on laptops, whereas the 32B and 70B models deliver near-R1-level performance but require more powerful hardware. Even better, some of these models outperform OpenAI’s o1-mini on benchmarks.
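A quick way to gauge which distilled model your machine can handle is a back-of-the-envelope memory estimate. The sketch below assumes 4-bit quantized weights (0.5 bytes per parameter) plus roughly 20% overhead for activations and runtime; these are rough rules of thumb, not official requirements:

```python
def approx_gb(params_billions: float, bytes_per_param: float = 0.5) -> float:
    """Rough memory footprint in GB: weights plus ~20% runtime overhead."""
    return params_billions * bytes_per_param * 1.2

for size in (1.5, 7, 14, 32, 70):
    print(f"{size:>4}B -> ~{approx_gb(size):.1f} GB")
```

By this estimate the 1.5B model fits in about 1 GB and the 7B in roughly 4 GB (laptop territory), while the 70B needs on the order of 40 GB, which is why the larger distills call for a beefier setup.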
How to Access DeepSeek R1
DeepSeek R1 is easy to access. Visit chat.deepseek.com and enable DeepThink mode to interact with the full 671-billion-parameter model.
Alternatively, you can access the R1 Zero model or any of the distilled versions via Hugging Face, where you can download the lightweight models to run locally on your computer.
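Developers can also call R1 programmatically. DeepSeek’s API is documented as OpenAI-compatible, with "deepseek-reasoner" as the R1 model name at the time of writing; verify both against DeepSeek’s current docs before relying on them. The sketch below builds the request with only the standard library and assumes you have set a DEEPSEEK_API_KEY environment variable:

```python
import json
import os
import urllib.request

API_URL = "https://api.deepseek.com/chat/completions"

def build_request(question: str) -> urllib.request.Request:
    """Build an OpenAI-style chat-completions request aimed at DeepSeek R1."""
    body = json.dumps({
        "model": "deepseek-reasoner",  # R1 model name per DeepSeek's docs
        "messages": [{"role": "user", "content": question}],
    }).encode()
    return urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ.get('DEEPSEEK_API_KEY', '')}",
        },
    )

req = build_request("What is 17 * 24? Show your reasoning.")
print(req.full_url)  # https://api.deepseek.com/chat/completions
```

Sending the request with `urllib.request.urlopen(req)` (given a valid key) returns the usual chat-completions JSON, with the model’s answer under `choices[0].message.content`.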

Why DeepSeek R1 Matters
Outside of Microsoft’s Phi 4 model, there isn’t another notable open-source reasoning model available. Phi 4, however, has only 14 billion parameters and cannot compete with closed models like OpenAI’s o1. DeepSeek R1 provides a free, open-source alternative that rivals closed-source options like o1 and Gemini 2.0 Flash Thinking. For developers, the cost-effectiveness and open accessibility of R1 make it especially appealing.
The only downside is that, as a Chinese-developed model, DeepSeek must comply with Chinese government regulations. This means it won’t respond to sensitive topics like Tiananmen Square or Taiwan’s independence, as the Cyberspace Administration of China (CAC) ensures that all responses align with “core socialist values.”

Ravi Teja KNTS
Tech writer with over 4 years of experience at TechWiser, where he has authored more than 700 articles on AI, Google apps, Chrome OS, Discord, and Android. His journey started with a passion for discussing technology and helping others in online forums, which naturally grew into a career in tech journalism. Ravi’s writing focuses on simplifying technology, making it accessible and jargon-free for readers. When he’s not breaking down the latest tech, he’s often immersed in a classic film – a true cinephile at heart.