Capacity-Constrained LLM Builders
With Anthropic’s Mythos model and OpenAI’s GPT 5.5, we could be looking at the new normal for LLM releases by capacity-constrained LLM builders.
Mythos was released weeks ago to a preferred group of customers.
GPT 5.5, meanwhile, is being released in stages through the summer, and its cyber “module” (which carries the 5.4 label) is initially limited in scope via its TAC circle-of-trust program, which reads like Mythos’ marketing, which is to say it reads like nonsense.
My sense is that, given the capacity constraints of Anthropic in particular (and OpenAI as well), these new GPU-hungry models are too expensive to serve to the masses, and so they are being rolled out in phases so that model costs can come down as new GPU capacity is brought online.
To that end, if it is true that xAI has utilized only 11% of its NVIDIA chip cluster (granted, it loaned some capacity to Cursor and is in the process of acquiring Cursor), why not rent GPU capacity to Anthropic? Perhaps xAI feels that Anthropic is too direct an LLM competitor and would rather see it starve for capital and GPUs.
It is not certain that xAI will ever develop the best coding model, so I believe xAI should jump into the chip-rental market and compete with the likes of CoreWeave (CRWV) and others.
I am looking forward to trying Anthropic’s Mythos for coding. Opus 4.7 was definitely a step back from Opus 4.6 on the coding front; I’m still using Opus 4.6 and OpenAI’s Codex 5.5.
I want to use Mythos before I call bogus on the claim by Boris Cherny (creator of Claude Code) that coding is solved. It isn’t.
Meanwhile, Google is the one LLM company that isn’t starved for capital or chips. It can slowly chip away at improving Gemini’s coding and agentic capabilities as its purpose-built TPUs continue to improve.