Rahul Pawa
The next generation of AI breakthroughs will not necessarily emerge from trillion-dollar companies with exclusive access to compute power. Instead, they will come from those who can best refine, iterate, and optimise open-source research. This is why the reaction in the U.S. is so intense.
DeepSeek, particularly its “R1” model, sent a subtle tremor through the U.S. tech elite, one that quickly escalated into a storm. The Chinese-developed AI’s remarkable efficiency rattled Silicon Valley, with analysts bracing for yet another chapter in the intensifying technological arms race between the two tech powers. However, the real story is not China outpacing the U.S. in AI; that reading is flawed. The true takeaway lies not in AI supremacy but in the rising influence of open research and open-source development, which are rapidly outpacing proprietary AI models.

For years, the prevailing wisdom in Silicon Valley was that AI supremacy was dictated by scale and infrastructure. OpenAI’s ChatGPT, built on the Generative Pre-trained Transformer (GPT) model, was the epitome of this approach. It was trained on vast datasets, fine-tuned with Reinforcement Learning from Human Feedback (RLHF), and required enormous computational resources to function. ChatGPT’s dense Transformer architecture meant that every interaction activated all of its parameters, demanding substantial compute power, data centres, and high-end GPUs to sustain its performance.

This resource-heavy approach made large-scale AI development the exclusive domain of tech giants. The assumption had long been that only organisations with billion-dollar budgets and state-of-the-art infrastructure could build competitive AI models. The underlying premise was that the future of AI would be controlled by those who could invest the most in proprietary architectures and data ecosystems.
Then came DeepSeek, and with it, an unexpected disruption to this model. Unlike ChatGPT’s dense Transformer framework, DeepSeek employs a Mixture-of-Experts (MoE) architecture, an approach that activates only the most relevant subnetworks for each task. Instead of engaging all model parameters indiscriminately, DeepSeek selects only a subset of specialised expert networks, dramatically reducing computational costs while maintaining high performance. This efficiency allows DeepSeek to deliver state-of-the-art AI capabilities without the infrastructure overhead required by proprietary models like ChatGPT. But the real disruption is not just in DeepSeek’s technical design—it is in how it was built.
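To make the contrast concrete, the routing idea behind a Mixture-of-Experts layer can be sketched in a few lines of PyTorch. This is an illustrative toy, not DeepSeek’s actual implementation: a small gating network scores a pool of expert feed-forward networks for each token, and only the top-k experts run, so most of the layer’s parameters stay inactive on any given token. All class and parameter names here (`MoELayer`, `n_experts`, `top_k`) are invented for the example.

```python
# Illustrative sketch of Mixture-of-Experts routing (NOT DeepSeek's code).
# A router scores experts per token; only the top-k experts are executed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    def __init__(self, d_model: int, n_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(d_model, n_experts)  # the router
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model),
                          nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model). Score every expert, keep only the top-k.
        scores = self.gate(x)                           # (tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)  # (tokens, top_k)
        weights = F.softmax(weights, dim=-1)            # normalise kept scores
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e                # tokens routed to expert e
                if mask.any():                          # run expert only if needed
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

layer = MoELayer(d_model=16, n_experts=8, top_k=2)
tokens = torch.randn(4, 16)
print(layer(tokens).shape)  # torch.Size([4, 16])
```

With `top_k=2` out of eight experts, each token touches only a quarter of the expert parameters per forward pass, which is the efficiency lever the article describes: a dense layer of the same total size would execute every parameter for every token.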
DeepSeek did not emerge from secrecy. It was not necessarily developed in a classified government lab or through covert data acquisition. Instead, it was built using open-source tools and publicly available research, much of it originating from the West. The foundation of DeepSeek’s success is not Chinese innovation but the power of open-source AI research. It leveraged Meta’s Llama, an open-source large language model whose architecture was freely available; PyTorch, a deep-learning framework originally developed by U.S. researchers and widely used in AI development; and Mixture-of-Experts research openly published in Western academic AI circles. China did not need to steal AI advancements; it simply used what was freely available, improved upon it, and released an optimised version.

This is the real source of Silicon Valley’s unease. If DeepSeek can leverage open-source AI research, refine it, and deploy a highly efficient model at scale, then the playing field is no longer dictated by who has the largest dataset or the most compute power. The realisation that AI’s future is no longer monopolised by a few Western corporations is what is truly unsettling the industry. DeepSeek is simply the first proof of a larger trend: open-source AI models are overtaking proprietary ones in agility, accessibility, and efficiency. The AI race is no longer about who has the most resources, but about who can most effectively iterate on and optimise open research. The implications extend beyond economics and corporate dominance; they reach into geopolitics and national security.
For years, the U.S. has sought to contain China’s AI ambitions through restrictions on high-performance semiconductor exports. The Biden administration imposed strict controls on the sale of Nvidia’s A100 and H100 AI chips to China, believing that limiting access to cutting-edge hardware would slow its AI development. But DeepSeek challenges that assumption. If AI can be built more efficiently, then hardware limitations become less of a bottleneck. DeepSeek suggests that AI models do not necessarily need massive compute power to be competitive—they need smarter, more efficient architectures. If this is true, then U.S. export restrictions may not be as effective as previously believed. Yet, to frame this purely as a U.S.-China competition misses the broader transformation taking place.
DeepSeek is not a singular national achievement—it is evidence of a fundamental shift in AI development. The next generation of AI breakthroughs will not necessarily emerge from trillion-dollar companies with exclusive access to compute power. Instead, they will come from those who can best refine, iterate, and optimise open-source research. This is why the reaction in the U.S. is so intense. The fear is not that China alone has built a better model, but that the monopoly on AI development itself is weakening. If DeepSeek’s efficiency proves sustainable, then the assumption that AI innovation belongs only to the wealthiest institutions is no longer valid.
DeepSeek does not signal the U.S. losing its AI edge; it marks a broader shift: AI development is decentralising. The future of artificial intelligence will not be dictated by who has the largest corporate lab or the deepest computing resources, but by who is willing to embrace collaboration, efficiency, and open knowledge. Silicon Valley’s response to DeepSeek is telling. It is not about a loss of American technological superiority but about a loss of control over the AI narrative. The shift toward open-source AI threatens the dominance of proprietary models, and that is what has set off alarm bells in Washington and within the ranks of major AI corporations.
Lastly, DeepSeek is not an endpoint; it is a harbinger of what is to come. The AI revolution is moving faster than anticipated, not because of geopolitical competition, but because of the power of shared knowledge. The next phase of AI will be defined not by who builds the biggest model, but by who can most effectively harness and refine what is already available to everyone. The U.S. outrage over DeepSeek is not about China’s AI dominance; it is about the erosion of proprietary AI control. The real takeaway is not geopolitical AI supremacy but the accelerating power of open research and open-source development over closed, corporate-owned models.
(The author is Research Director at the Centre for Integrated and Holistic Studies, a New Delhi-based non-partisan think-tank.)