American Closed Source vs Chinese Open Source: A False Dichotomy
It’s a call to patriotism. China versus America. “Who will you back?” This has become a common plea from the Silicon Valley elite over the last six months. I saw the move up close at the Harvard Kennedy School, where a visiting Eric Schmidt warned that AI may soon cross into autonomous self-improvement, argued that someone will need to “raise their hand” and impose limits, and then pivoted into the geopolitical register, contrasting American and Chinese trajectories and urging policy and funding choices aligned with “American values.” Others have made versions of the argument in other forums: Tarun Chhabra, head of national security policy at Anthropic, has urged an “American stack” and treated model governance as a geopolitical contest. Putting aside the awkwardness of nationalist messaging coming from the Bay Area’s long-time borderless “global citizens,” the incentives are not hard to see. If you can frame the open-versus-closed debate as a national security referendum, then restrictive rules become patriotism and “responsible control” becomes a synonym for dominance by a small circle of incumbent providers.
The posture makes sense once you consider two facts. One: industries that may live or die on capricious regulatory rulemaking must make their case to those with their hands on the levers of power. In 2026 America, those hands belong to professed patriotic Republicans. Two: Big Frontier Model is losing the tech battle, or at least losing the easy assumption that America’s lead is automatic and permanent. On their back foot, these firms must misframe the open-versus-closed debate as a fight between America and China: “America cannot afford to lose a battle to China, and by extension Anthropic, OpenAI, and Alphabet cannot afford to lose to their competition.”
Yet there is nothing inherently Chinese about open models and nothing inherently American about closed models. If anything, it should be the opposite. Open models are decentralized, inspectable, forkable, and difficult to monopolize. That aligns with an American instinct to diffuse power, prefer competition over permission, and distrust single points of control. Closed models concentrate capability behind a small number of gatekeepers, wrapped in secrecy, and sustained by privileged access to regulators. That logic is far closer to centralized control than to open competition. The real fault line is not America versus China. It is democratic diffusion versus unnatural scarcity, and good tech versus bad tech.
Regulatory Capture?
Safety arguments against open models can be institutionally self-serving. The likely political economy of strict open-model controls is that compliance becomes a fixed cost that only large incumbents can bear. New technology regulation can be shaped by the most powerful actors to protect or expand their advantage. The concern here is not that every safety proposal is capture, but that a legal regime designed around the idea that “only a few trusted providers may exist” is structurally aligned with incumbent interests.
Valid Safety Concern
That said, I wouldn’t pretend the case for safety is purely cynical. A useful example is widely accessible virology. The life sciences have lived for decades with the uncomfortable fact that biological knowledge is often dual use: the same methods that teach you how pathogens spread, mutate, or evade immune responses can also lower barriers for malicious replication or reckless experimentation. In domains where the object of study is intrinsically hazardous, knowledge dispersion can be dangerous. However, much of what is dangerous is already public and already incorporated into existing LLMs, and trying to retroactively censor it would be impossible.
The more practical policy is forward-looking. We can prevent additional damage by moving classification upstream: treating certain classes of data as hazard-enabling at scale, even if they have been, until now, classified as “public.”
Why are closed models bad tech?
Distillation is one mechanism by which “closed” models leak to, or are copied by, competitors. In distillation, a smaller student model is trained to imitate a larger teacher by querying it at scale and learning from its outputs. In practice, that means once you ship a frontier model behind an API, you have created a surface that others can use, legally or not, to train imitators; released systems are already being distilled against, and the industry has begun openly fighting over it. The “closed” advantage, then, is not a durable moat so much as a temporary lead, especially because open models are now only about three months behind the state of the art on average, and the gap is shrinking.
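To make the mechanism concrete, here is a minimal Python sketch of a standard soft-target distillation loop, with the teacher stubbed out as a frozen toy network standing in for API responses; every size, name, and hyperparameter below is an illustrative assumption, not anything a frontier lab actually runs.

    # Minimal sketch of API-style distillation (all sizes, names, and the
    # temperature are illustrative assumptions, not any lab's real pipeline).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    VOCAB, SEQ, DIM, T = 1000, 8, 64, 2.0  # toy vocabulary, prompt length, width, temperature

    # Stand-in for a closed frontier model reachable only through its outputs:
    # we treat it as a frozen black box and never touch its weights.
    teacher = nn.Sequential(nn.Embedding(VOCAB, DIM), nn.Flatten(), nn.Linear(SEQ * DIM, VOCAB))
    for p in teacher.parameters():
        p.requires_grad_(False)

    # Smaller "student" trained purely to imitate the teacher's output distribution.
    student = nn.Sequential(nn.Embedding(VOCAB, DIM // 2), nn.Flatten(), nn.Linear(SEQ * DIM // 2, VOCAB))
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)

    for step in range(200):
        prompts = torch.randint(0, VOCAB, (32, SEQ))      # a batch of "queries"
        with torch.no_grad():
            teacher_logits = teacher(prompts)             # what an API call would hand back
        student_logits = student(prompts)
        # Soft-target distillation loss: match the teacher's temperature-smoothed distribution.
        loss = F.kl_div(
            F.log_softmax(student_logits / T, dim=-1),
            F.softmax(teacher_logits / T, dim=-1),
            reduction="batchmean",
        ) * (T * T)
        opt.zero_grad()
        loss.backward()
        opt.step()

The point is structural: the student never needs the teacher’s weights, only the ability to query it at scale.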
That is why closed-model maximalism is bad tech: it asks the public to bankroll a capability edge that can often be copied down the stack. Distillation is a one-way process. Once a model’s weights, or even just its outputs, exist in the wild, the knowledge it was trained on does too. The result is that a great deal of public money invested in frontier systems may buy us, at best, half a generation of advantage before that advantage becomes baseline.
The AGI Finish Line
This massive public investment can be justified if we are, in fact, in a race to a specific finish line. Perhaps we can call that finish line AGI, and the first firm to reach it will reap the infinite benefits of having built GodGPT. Sam Altman and his peers would like you to believe there is a decisive summit and a single winner. I will not make the argument here against our ability to build GodGPT. I will merely ask the reader: if we cannot build it, what exactly has our public investment bought us? How much of our policy architecture depends on GodGPT being real and imminent? And do the frontier model builders have an interest in keeping us convinced that it is?
Where will economic value come from?
Assuming we do not build GodGPT, I see the value added from AI falling into two broad operational modes. First, LLMs, call them agents if you want, will replace or enhance existing labor and raise productivity inside the work we already do. Second, frontier LLMs will be used by humans to create new technology and, through that new tech, raise productivity. The open-versus-closed question looks very different in each mode.
Start with labor enhancement. Here, I struggle to see where a closed model creates durable value that an open model cannot. Again, the leading open models are only about three months behind the best closed models on average, and that gap has been narrowing over time. More importantly, they will never be less competent than they are today. LLM capability, as it stands today, is already powerful enough to transform knowledge work once the workforce is educated on how to leverage it effectively. So beyond branding, default status, and the inertia of being “what everyone uses,” it is hard to imagine how a three-month lead translates into meaningful economy-wide productivity returns compared to open competitors.
There is more reason for optimism, and more reason to take the frontier seriously, when it comes to creating new tech. Even without accepting a God-like AGI, we have already seen systems that look like they are generating new knowledge, or at least synthesizing massive amounts of dispersed knowledge in ways humans cannot efficiently match. The frontier labs will have an advantage here because the marginal discovery can be expensive, and the best-resourced models can search more of the space, faster. But we should be careful with the race metaphor. The knowledge search space is not a straight track with a single finish line. Think of it instead as a multi-dimensional blob expanding in every direction from what we already know. That changes the economics. Progress is not one AI outrunning another. It is billions of human and AI teams pushing outward in parallel. If a cheaper but slightly slower AI can be put in the hands of a billion knowledge seekers, it may create more new knowledge than a $200-a-month model in the hands of only a million. And if LLM capability progress has diminishing returns, any frontier lead that depends on scale alone becomes harder to defend over time.
Then there is the profit-model question. Frontier firms will face pressure to turn a profit and repay the massive investment they have taken on. They can do this through enterprise subscriptions, usage-based pricing, and business-facing products that make organizations more productive. This is what Anthropic says its plan is. The business value here comes from the delta between the knowledge frontier firms can generate and the knowledge open models can generate. That delta may turn out to be significant. It may also be small, and shrinking with time. The least we can say for now is that it is uncertain.
They can also profit through advertising. The largest pool of users is the pool that does not want to pay very much, and OpenAI has now publicly moved toward testing ads in ChatGPT for some users. Big Frontier Model seems reluctant to talk about this trajectory because it sounds like an admission that they are just like every other tech platform, and should be valued as such. It also puts them in direct competition with existing ad giants. For the last couple of years, what we perceived as a performance advantage of LLMs over traditional Google search may partly have been a mirage: a feeling of refreshment at having escaped advertisement-suffused search for the first time in decades. The ad model has another structural problem. There is no durable barrier against exit toward free, or significantly cheaper, open models once they are “good enough,” which they increasingly are. If the closed-model future is subscriptions plus advertising plus lock-in, then the public is effectively subsidizing the creation of a new, enshittified ad service with a thinning claim to unique value, and a user base that can walk away the moment the open alternatives cross the usability threshold.
The Difficulties of Moratorium Enforcement
The policy debate often assumes a stable choice between “closed models under responsible control” and “open models in the wild.” In practice, a model can start closed, then leak, then become open in effect. The LLaMA 1 leak is one example: Meta did not release the model as a general public artifact, it was meant for a controlled research release, yet the weights leaked online within days, spreading through channels like 4chan and torrents. These models are big, but not that big in practical terms. They don’t require a data center for storage. The significant IP is measured in the hundreds of gigabytes, small enough to copy, mirror, and pass around through ordinary internet infrastructure. Physically speaking, it could be carried in someone’s pocket. Effectively preventing use of an open LLM would require inspectability of all digital media, moratoria on encryption, and unprecedented visibility into network traffic.
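For a sense of the numbers, a rough back-of-envelope calculation makes the point; the parameter counts and precisions below are assumptions chosen for the arithmetic, not the published footprint of any particular release.

    # Rough storage footprint of model weights at common precisions.
    # Parameter counts and byte widths are illustrative assumptions.
    GIB = 1024 ** 3

    for params_billions in (7, 70, 400):                              # hypothetical model sizes
        for precision, bytes_per_param in (("fp16", 2), ("int4", 0.5)):
            size_gib = params_billions * 1e9 * bytes_per_param / GIB
            print(f"{params_billions}B params at {precision}: ~{size_gib:,.0f} GiB")

Even the largest of these fits on an ordinary external drive, which is the enforcement problem in miniature.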
The case for a moratorium becomes even less credible once enforcement relies on international cooperation. The Bletchley Declaration is a useful illustration: it recognizes that many AI risks are international and calls for cooperation, but it is fundamentally a political declaration rather than an enforceable regime. And the cooperation Bletchley asks for is extremely mild compared to what would be needed to enforce a moratorium against what is, in effect, a small amount of data and software. The plausible outcome is uneven restriction: some jurisdictions ban open releases, others become havens, and diffusion continues anyway.
Conclusion
The open-versus-closed fight is not America versus China, even if that framing is politically convenient. It is convenient precisely because it converts a messy argument about market structure, democratic control, and technological diffusion into a simple loyalty test. American Big Frontier Model has a vested interest in that narrative. If you can convince lawmakers that “closed” is patriotic, you can turn regulation into a moat and public money into a subsidy. The risk is that we keep throwing good money after bad, paying repeatedly for a thin, temporary lead while the underlying capabilities diffuse anyway. Models do not unlearn, and capability inevitably spreads. The wiser posture is to stop moralizing the architecture and lean into open models and the model-agnostic tech we can build on top of them.

