Open Weights Kill the Moat
American capital financed AI on the assumption it would be the next great monopoly. Open-weight models are commoditizing the capability that monopoly was supposed to protect. The collision between the two now defines the direction of the U.S. AI industry — and the country.
Essay · AI

The Moat or the Commons

By Shaun Warman · Monday, April 27, 2026 · 10 min read

TL;DR — Takeaways

- U.S. frontier labs trade at valuations that assume monopoly-grade rents in the post-apprenticeship phase. The financial structure cannot survive a commodity outcome.
- Open-weight models — DeepSeek, Qwen, Kimi, GLM — running on the LangChain, vLLM, llama.cpp, and Ollama stack are commoditizing capability faster than the closed labs can deepen the moat.
- When technology cannot manufacture scarcity, American capital reaches for regulatory enclosure, vertical integration, and bundled distribution to manufacture it instead. This is what U.S. capitalism does in this situation. It is doing it now.
- Three predictions for the U.S. direction: security-dressed regulatory enclosure of Chinese open weights, frontier labs absorbing their own customers as operators, and a split market where domestic users pay closed-lab pricing while the world routes around U.S. rails.
- The defensive move is also the offensive one. Build on the commons, run open weights now while the regulatory air is clean, and architect for jurisdictional flexibility before the migration becomes involuntary.

American AI was financed on a particular bet. The bet was that frontier models would be the next great monopoly business — winner-take-all, capex-justified-by-monopoly, the kind of structurally protected market that supports trillion-dollar valuations and the capital flows necessary to build them. Two and a half years into the cycle, the assumption is breaking. Not slowly. Not at the edges. Visibly, in the public benchmarks, the open-source repos, the Hugging Face download counts, and the inference price sheets.

The break is straightforward to describe. Open-weight models — most of them released by Chinese labs, served through a stack of mostly Western open-source infrastructure — are commoditizing the capability that the moat was supposed to protect. Capability that a U.S. closed lab could charge enterprise rates for in 2024 is now available, downloadable, deployable on rented hardware, at single-digit cents on the dollar in 2026. The gap between the open frontier and the closed frontier is six to twelve months. It is closing, not widening.

The collision between those two facts — that American capital paid for a moat, and that the technology no longer provides one — is the most important force in the AI industry today. Everything else, including the policy direction the U.S. government will take in the next eighteen months, is downstream of how that collision resolves.

The Capital Thesis

To understand what is at stake, follow the money. U.S. frontier labs and their hyperscaler partners have committed somewhere on the order of a trillion dollars to AI capex over the next four years — data centers, GPU clusters, power infrastructure, fiber, the entire physical stack that frontier inference requires. Those commitments are not made on the assumption of SaaS-grade margins. SaaS-grade margins do not service that kind of capital base. The commitments were made on the assumption that frontier capability would behave, at scale, like a regulated monopoly: high fixed costs, high marginal margins, durable rents, very few…
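The commodity dynamic the essay describes, in which the same open-weight capability is reachable through many interchangeable providers and buyers simply route to the cheapest allowed endpoint, can be sketched in a few lines. This is a minimal illustration, not anything from the article: the provider names, endpoints, and per-token prices below are hypothetical stand-ins, not a real price sheet.

```python
# Hypothetical price sheet: USD per 1M output tokens for the same
# open-weight model served by interchangeable providers.
# All names, endpoints, and prices are illustrative only.
PRICE_SHEET = {
    "us-closed-lab":  {"endpoint": "https://api.example-closed.com/v1",  "usd_per_mtok": 15.00},
    "us-open-host":   {"endpoint": "https://api.example-open-us.com/v1", "usd_per_mtok": 0.90},
    "intl-open-host": {"endpoint": "https://api.example-intl.com/v1",    "usd_per_mtok": 0.55},
    "self-hosted":    {"endpoint": "http://localhost:8000/v1",           "usd_per_mtok": 0.40},
}

def cheapest_provider(price_sheet, allowed=None):
    """Return (name, endpoint) of the lowest-cost provider, optionally
    restricted to an allowed set (modeling a jurisdictional constraint)."""
    candidates = {
        name: info for name, info in price_sheet.items()
        if allowed is None or name in allowed
    }
    name = min(candidates, key=lambda n: candidates[n]["usd_per_mtok"])
    return name, candidates[name]["endpoint"]

# Unconstrained routing finds the commodity floor price.
print(cheapest_provider(PRICE_SHEET))
# A regulatory fence that excludes non-U.S. rails raises that floor.
print(cheapest_provider(PRICE_SHEET, allowed={"us-closed-lab", "us-open-host"}))
```

Because these providers typically expose the same OpenAI-compatible API shape, swapping one for another is a configuration change rather than a rewrite, which is precisely what makes the capability behave like a commodity and makes "architecting for jurisdictional flexibility" a one-line decision.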
This excerpt is published under fair use for community discussion. Read the full article at Warman.