Is it on? How top AI makers responded to DeepSeek’s success
If you’re in IT or an adjacent industry, you’ve probably heard “DeepSeek” more than enough over the last couple of weeks. The Chinese AI model from the startup of the same name has sent ripples through the industry and triggered events that would otherwise probably have happened much later in the year. So, is the grand AI race really on? Let’s see how the major companies advancing the field reacted to DeepSeek skyrocketing to the top.
OpenAI’s move: o3-mini
We’ll likely never know whether OpenAI actually intended to release its o3-mini model on January 31, 2025, or whether making it available to everyone, including free users, was the plan from the outset. Either way, once it became painfully clear that DeepSeek’s success wasn’t some space-time aberration, one of the most valuable startups in history gave the world its next brainchild.
Quick facts about OpenAI’s o3-mini:
- it is 24% faster than o1-mini and excels in math, coding, and science;
- the model boasts significant reasoning capabilities;
- according to Sam Altman, OpenAI’s CEO, there is a larger successor LLM in the works.
o3-mini is available both on the ChatGPT website and in the ChatGPT mobile app. To use the model, click the “Reason” toggle in the query interface on the site, or switch models in the mobile app (accurate as of this writing; subject to change at the developer’s discretion). Free users can query the model 10 times a day.
Microsoft’s reaction: Think Deeper and more
Microsoft has been actively integrating Copilot into its ecosystem, making it part of the Office suite (now Microsoft 365 Copilot) and never missing a chance to advertise the AI assistant. Its response to DeepSeek’s success, though, was unexpected: on February 1, 2025, the tech giant announced it was adding OpenAI’s premium o1 model to Copilot and making it available for free. This matters because the feature is otherwise part of ChatGPT Pro, which costs $200/month.
The next move in the same vein was admitting DeepSeek R1 to Azure AI Foundry, a rather controversial decision given OpenAI’s accusations that the Chinese company had misused its data.
NVIDIA’s recent acquisition
DeepSeek inadvertently pushed NVIDIA’s stock down so hard that the company’s market capitalization dropped by nearly $600 billion in a single day – the largest one-day loss in stock market history. The situation is gradually improving, but the chipmaker appeared to rush its deal with Run:ai, an AI orchestration platform, closing the acquisition on January 29, 2025, for $560 million.
More LLMs from China
In the last ten days of January 2025, two other major Chinese developers, Alibaba and ByteDance, released the next iterations of their AI models: Qwen 2.5-Max and Doubao 1.5-Pro, respectively. Never mind the latter’s Chinese interface, it speaks English better than most of us.
Alibaba claims that Qwen 2.5-Max outperforms several leading LLMs, including DeepSeek V3 and OpenAI's GPT-4o; the model was built with a focus on benchmarks that evaluate reasoning and comprehension. It is designed for a variety of applications, including code generation, natural language processing, and general AI tasks.
ByteDance is more specific in its claims: according to the company, the latest Doubao beats OpenAI's o1 on the AIME benchmark, which is based on the American Invitational Mathematics Examination and tests advanced mathematical reasoning. Doubao 1.5-Pro is integrated into the developer’s ecosystem, notably TikTok, and is positioned as a content generation facilitator. Its key focus, though, is Chinese-language tasks.
As we noted in the previous post, for us regular users this sort of competition means much easier access to high-end AI capabilities. Moreover, DeepSeek has shown that open source is a viable approach in this field, which means the variety of LLMs will likely grow, and their bias may naturally decrease as more and more contributors join the training community.