DeepSeek, ChatGPT, Grok … which is your AI assistant?
DeepSeek has open-sourced five code repositories this week, unlocking new content daily and sharing its latest developments. This is the latest update from the Chinese startup after its explosive growth last month. The company was founded in July 2023. In December 2024, it released DeepSeek-V3, which matched the performance of leading international closed-source models, and in January 2025 it released DeepSeek-R1, which rivals the official version of OpenAI's o1 on tasks such as math, coding, and natural language reasoning.
After DeepSeek's surge in popularity, its international traffic skyrocketed: the app topped the charts in the United States and reached number one in App Store download rankings in approximately 140 countries. Leading tech companies such as Microsoft, NVIDIA, and Amazon have rolled out support for users to access the DeepSeek-R1 model.
The rise of DeepSeek has sparked an extensive global discussion, attracting the attention of U.S. investment banks, think tanks, and media, all abuzz with debates and forecasts about DeepSeek and the coming wave of technological innovation.
In this first issue of the Trends column, we've compiled commentary on DeepSeek from international think tanks, media outlets, and technology companies to offer readers diverse perspectives.
What makes DeepSeek's AI technology different?
The cost comparison
The company claims to have trained its model for just $6 million using 2,000 NVIDIA H800 graphics processing units (GPUs), versus the $80 million to $100 million cost of GPT-4 and the 16,000 H100 GPUs required for Meta's LLaMA 3. (Bain & Company)
The cost of training Google's Gemini reportedly stood between $30 million and $191 million, even before taking staff salaries into consideration. (Forbes)
The latest edition of ChatGPT had a technical creation cost of $41 million to $78 million, according to the source. Sam Altman, CEO of OpenAI, has previously said that the model cost more than $100 million, consistent with those calculations. (Forbes)
The price difference to be passed on to the consumer
DeepSeek's chatbot offering is free to use on the web.
API access for DeepSeek-R1 starts at $0.14 per one million tokens, or roughly 750,000 words. OpenAI's o1 model, reportedly the closest match to DeepSeek's latest model, is priced at $7.50 per one million tokens. That's a pretty big disparity in pricing. (Mashable)
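As a back-of-the-envelope illustration of that disparity, here is the arithmetic at the flat per-million-token rates quoted above (a simplification: real APIs typically price input and output tokens separately):

```python
def api_cost_usd(tokens: int, price_per_million: float) -> float:
    """Cost in USD for a given token count at a flat per-million-token rate."""
    return tokens / 1_000_000 * price_per_million

DEEPSEEK_R1_RATE = 0.14  # USD per 1M tokens, as quoted
OPENAI_O1_RATE = 7.50    # USD per 1M tokens, as quoted

tokens = 10_000_000      # e.g. a workload of 10 million tokens
print(f"DeepSeek-R1: ${api_cost_usd(tokens, DEEPSEEK_R1_RATE):.2f}")
print(f"OpenAI o1:   ${api_cost_usd(tokens, OPENAI_O1_RATE):.2f}")
```

At these rates, the same 10 million tokens cost about $1.40 on DeepSeek-R1 versus $75.00 on o1, a gap of more than 50x.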
Efficiency is the real change
DeepSeek researchers found a way to extract more performance from NVIDIA chips, allowing foundation models to be trained with significantly less total computational power. Smaller companies and startups will now be able to replicate low-cost algorithms and potentially innovate upon them, enabling the development of more affordable and accessible low-tier and specialized AI applications across various domains. (Center for Strategic & International Studies)
Is DeepSeek's latest breakthrough redefining the AI race?
Shift toward openness
The "unexpected" rise of DeepSeek could indicate that we may be on a one-way path for AI foundation models. The shift toward openness of these models, which can fuel applications very broadly and create financial value that can further support model improvements, may prove simply inevitable. Just like Linux became the foundation for much of the software we use today, open-source AI foundation models could soon become the standard for generative AI. (The Hill)
It imposes no restrictions
Anyone, from independent researchers to private companies, can fine-tune and deploy the model without permission or licensing agreements. It democratizes AI innovation by giving startups, researchers, and developers access to cutting-edge AI without licensing fees. It encourages global AI development, allowing independent AI labs to improve the model. And it breaks the monopoly of large AI firms, offering a powerful alternative to proprietary, pay-walled AI models. (Georgia State University News)
Scale alone no longer guarantees AI supremacy
DeepSeek's focus on reinforcement learning (RL), mixture-of-experts (MoE) architecture, and post-training optimization showcases a future where AI compute infrastructure is leaner, faster, and smarter, with optimized memory, networking, and computing. Ashu Garg, General Partner at Foundation Capital, predicts that scale alone no longer guarantees AI supremacy. He emphasized that the next wave of AI innovation will be led by startups leveraging large models to design sophisticated systems of agents, which take on complex tasks rather than just automating simple ones. (Forbes)
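To make the MoE idea above concrete, here is a minimal sketch of top-k expert routing: a gate scores all experts, but only the top k actually run per input, which is how MoE models grow total parameter count without growing per-token compute proportionally. This is purely illustrative and not DeepSeek's actual implementation; all names and dimensions are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

def moe_layer(x, expert_weights, gate_w, k=2):
    """Route input x to its top-k experts and mix their outputs.

    Only k of the experts run per token; the rest are skipped entirely.
    """
    logits = x @ gate_w                          # one gating score per expert
    topk = np.argsort(logits)[-k:]               # indices of the k best experts
    gate = np.exp(logits[topk] - logits[topk].max())
    gate /= gate.sum()                           # softmax over selected experts
    return sum(g * (x @ expert_weights[i]) for g, i in zip(gate, topk))

d, n_experts = 8, 4
expert_weights = [rng.standard_normal((d, d)) for _ in range(n_experts)]
gate_w = rng.standard_normal((d, n_experts))
x = rng.standard_normal(d)
y = moe_layer(x, expert_weights, gate_w, k=2)    # output has the input's shape
```

With k=2 of 4 experts active, only half the expert parameters are touched per input, which is the efficiency lever the commentary refers to.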
Waking up the AI world to its disruptive potential
DeepSeek also ushered in the rise of a new open-source order—a belief that transparency and accessibility drive innovation faster than closed-door research. OpenAI itself has walked back its closed-source strategy in the wake of DeepSeek's accomplishment. (CNBC)
What does this mean for Chinese-U.S. competition?
Scarcity fosters innovation
As a direct result of U.S. controls on advanced chips, companies in China are creating new AI training approaches that use computing power very efficiently. When, as will inevitably occur, China also develops the ability to produce its own leading-edge advanced computing chips, it will have a powerful combination of both computing capacity and efficient algorithms for AI training. (Brookings)
Technical achievement despite restrictions
The U.S. restricts the export of its highest-performance AI accelerator and GPU chips to China. Yet, despite that, DeepSeek has demonstrated that leading-edge AI development is possible without access to the most advanced U.S. technology. (Informa TechTarget)
The export-control measures must be rethought
The U.S. strategy of containment through export controls will surely limit the scalability of the AI industry within China. Under tightening chip sanctions, this will likely become a bottleneck preventing China from scaling its AI service offerings globally. However, the DeepSeek example showed that export controls cannot kill innovation: those who cannot access these chips will innovate in their own ways. (Center for Strategic & International Studies)
Does the constrained environment limit China's ability to innovate?
The focus in the American innovation environment on developing AGI and building larger and larger models is not aligned with the needs of most countries around the world. For them, the greatest interest is in seizing the potential of functional AI as quickly as possible. The existing chips and open models can go a long way to achieving that. (Council on Foreign Relations)
The more the United States pushes Chinese developers to build within a highly constrained environment, the more it risks positioning China as the global leader in developing cost-effective, energy-saving approaches to AI. (Council on Foreign Relations)
What can we expect in China?
Faster technology lifecycles
China allowing open sourcing of its most advanced model without fear of losing its advantage signals that Beijing understands the logic of AI competition. Each improvement by one player feeds into the next round of global development—even competitors can iterate on publicly shared advances. (Center for Strategic & International Studies)
Economic forecast
Goldman Sachs pegs the long-term boost to China's GDP at 20 to 30 basis points (0.2 to 0.3 percentage points) by 2030, and expects the country's economy to start reflecting the positive impact of AI adoption from next year as AI-driven automation improves productivity. (CNBC)
"Six Tigers"
An elite group of companies known as the "Six Tigers"—Stepfun, Zhipu, Minimax, Moonshot, 01.AI, and Baichuan—are generally considered to be at the forefront of China's AI sector. Some, such as Minimax and Moonshot, are giving up on costly foundational model training to home in on building consumer-facing applications on top of others' models. Others, like Stepfun and Infinigence AI, are doubling down on research, driven in part by U.S. semiconductor restrictions. (MIT Technology Review)
What are the concerns?
Geopolitical concerns
Being based in China, DeepSeek challenges U.S. technological dominance in AI. Tech investor Marc Andreessen called it AI's "Sputnik moment," comparing it to the Soviet Union's space race breakthrough in the 1950s. (Informa TechTarget)
National security, intellectual property, and misuse concerns
Unlike proprietary AI models, DeepSeek's open-source approach allows anyone to modify and deploy it without oversight. This raises fears that bad actors could use it for misinformation campaigns, deepfakes, or AI-driven cyberattacks. The debate isn't just about DeepSeek—it's about how open AI should be. (Georgia State University News)
Whoever leads in AI today may not lead tomorrow
Traditional advantages in access to chips, energy, and finance may not keep firms at the cutting edge. However, no single breakthrough will “win” the competition. Other AI innovations are certain to come, making the AI race much more of a marathon than a sprint. AI firms, whether they be in the United States, China, or elsewhere, are redoubling their efforts to build the next breakthrough and grab the lead. (Council on Foreign Relations)
The views expressed in these comments don't necessarily reflect those of DeepChina.
About DeepChina
DeepChina is an elite academic initiative that offers objective and rational analyses on a broad spectrum of topics related to China, encompassing politics, economics, culture, human rights, diplomacy, and geopolitics.