Where Is the LLM Moat?
Not every AI use case warrants the latest and greatest frontier LLM. Open-model performance is high and cost is low, and the AI labs are distilling each other’s models. So where is the LLM moat, and what does that imply for the long-term market caps of the LLM builders, hyperscalers, and chip manufacturers?
The latest versions of DeepSeek, Qwen, and other Chinese open models are quite good. Google’s Gemma 4 models are quite good. The U.S. AI labs are distilling each other’s models. I can’t really tell the difference between Opus 4.6 and Codex 5.5 (Opus 4.7 feels a bit goofy). So is there really a defensible LLM moat? I don’t see one so long as labs keep distilling each other’s models. Will U.S. companies really pass on open Chinese models if they cost 10% of what it costs to run Anthropic and OpenAI?
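Since distillation keeps coming up, a minimal sketch of the core mechanism helps explain why it erodes moats: the student model is trained to match the teacher’s temperature-softened output distribution, so a teacher’s capabilities leak through its outputs. The toy logits, function names, and temperature below are illustrative, not any lab’s actual training setup.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature > 1 softens the distribution, exposing the teacher's
    # relative preferences among non-top answers ("dark knowledge").
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) over temperature-softened distributions.
    # In practice this is combined with a hard-label loss; omitted here.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher = [3.0, 1.0, 0.2]
aligned_student = [3.0, 1.0, 0.2]     # matches the teacher's preferences
divergent_student = [0.2, 1.0, 3.0]   # reversed preferences

assert distillation_loss(teacher, aligned_student) < 1e-9
assert distillation_loss(teacher, divergent_student) > 0.1
```

The point of the sketch: anyone who can query a frontier model at scale can harvest these soft targets, which is why API access alone is a leaky moat.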
I don’t know how accurate the survey numbers below are, but they feel approximately right: roughly 60% of companies have used open models, yet open models account for only about 10% of API usage.
Open-weight and open-source LLMs appear underrepresented in enterprise production usage relative to their technical quality. Survey data suggests many companies use open models somewhere in their stack, but production LLM traffic and spend remain concentrated among closed-model vendors such as OpenAI, Anthropic, and Google. Menlo Ventures estimates that open-source/open-weight LLMs represent only 11% of enterprise LLM API usage, down from 19% the prior year.
That 11% figure should not be interpreted as “only 11% of companies use open models.” McKinsey, Mozilla, and the Patrick J. McGovern Foundation found that 63% of surveyed organizations use an open-source AI model, rising to 72% in technology companies. The gap suggests that many enterprises experiment with or selectively deploy open models, while their highest-volume production usage still flows through proprietary providers.
The likely constraint is not model quality alone. Several Chinese open-weight models now rank among the strongest open models. Public benchmarks and usage studies show substantial real-world adoption of DeepSeek, Qwen, Kimi, GLM, and other Chinese open models outside traditional enterprise procurement channels.
Enterprise adoption appears constrained by trust, governance, and procurement friction. Buyers favor vendors that provide managed APIs, security documentation, SLAs, compliance posture, indemnity, data controls, audit logs, support, and integration into existing platforms. McKinsey’s survey found that the leading barriers to open-source AI adoption include security and compliance, as well as uncertainty around long-term support. These concerns mirror the objections enterprises once raised to open-source software generally.
Geopolitical risk likely suppresses adoption of Chinese models specifically. Menlo Ventures reports that Chinese open-source models account for only about 1% of total enterprise LLM API usage, despite strong technical progress and startup/developer adoption. This suggests that legal and security teams may be treating model origin as a procurement risk, even when the weights can be self-hosted inside a company’s own cloud or VPC.
The market opportunity is clear: make high-performing open-weight models enterprise-grade. The strongest wedge is not simply cheaper hosting, but a security and governance layer around open models: private deployment, artifact scanning, model approval workflows, RBAC, SSO, audit logs, policy enforcement, eval reports, data-loss controls, monitoring, and compliance documentation. The strategic opening is especially strong for regulated enterprises that want the cost, control, and customization advantages of open models but cannot absorb the governance burden themselves. A credible vendor could separate model-origin risk from deployment risk and give enterprises a trusted path to use frontier-class open-weight models, including Chinese models where appropriate.
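To make the governance-layer idea concrete, here is a hypothetical sketch of its smallest piece: a model-approval gate that records every decision to an audit log. All names, fields, and the model identifier are invented for illustration; a real product would add RBAC, SSO, and policy enforcement on top.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ModelPolicy:
    # Models cleared by a security/legal review, plus an append-only audit trail.
    approved: set = field(default_factory=set)
    audit_log: list = field(default_factory=list)

    def approve(self, model: str, reviewer: str) -> None:
        self.approved.add(model)
        self._log("approve", model, reviewer)

    def request(self, model: str, user: str) -> bool:
        # Gate every inference request on prior approval; log either way.
        allowed = model in self.approved
        self._log("request", model, user, allowed=allowed)
        return allowed

    def _log(self, action: str, model: str, actor: str, **extra) -> None:
        self.audit_log.append({
            "ts": datetime.now(timezone.utc).isoformat(),
            "action": action, "model": model, "actor": actor, **extra,
        })

policy = ModelPolicy()
policy.approve("deepseek-self-hosted", reviewer="security-team")
assert policy.request("deepseek-self-hosted", user="analyst") is True
assert policy.request("unvetted-model", user="analyst") is False
```

The design choice worth noting: the gate keys on the deployment artifact, not the model’s origin, which is exactly the separation of model-origin risk from deployment risk described above.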