China’s DeepSeek Steps Up AI Race with R1-0528 Model Update

Paul Jackson

June 9, 2025

Key Points

  • DeepSeek’s R1-0528 upgrade edges closer to OpenAI’s mini models on code generation.
  • New update intensifies China’s competition in the AI space despite US export controls.
  • R1’s January launch had roiled US tech stocks; successor model R2 still anticipated.

Chinese AI startup DeepSeek has quietly rolled out an upgrade to its R1 reasoning model, intensifying the global competition with US heavyweights like OpenAI. The update, dubbed R1-0528, was released on the developer platform Hugging Face without an official announcement or performance comparisons.

Yet data from the LiveCodeBench leaderboard, developed by researchers at UC Berkeley, MIT, and Cornell, shows that R1-0528 nearly matches OpenAI’s o4 mini and o3 reasoning models in code generation, even outpacing xAI’s Grok 3 mini and Alibaba’s Qwen 3.

DeepSeek’s R1 model originally debuted in January, swiftly climbing to the top of Apple’s App Store charts and shaking the notion that China’s AI push was being stymied by US export controls. The launch of R1 not only jolted global tech stocks but also shifted perceptions about the need for massive computational resources in AI development.

Since then, Chinese giants like Alibaba and Tencent have unveiled models vying to surpass DeepSeek’s offering. Meanwhile, global rivals are recalibrating their pricing strategies—Google’s Gemini has introduced discounted access, and OpenAI has cut prices on its o3 mini model to compete.

Industry watchers remain focused on the expected debut of DeepSeek’s R2 model, originally tipped for a May release. In March, DeepSeek also delivered an upgrade to its V3 large language model, signaling an ongoing ambition to stay at the forefront of AI innovation.
