I always hear the AI companies clamoring for gigawatts of “compute” so they can finally “grow” to meet the immense “demand”. But somehow people can just spin up clawbot and burn through millions of tokens just fine; I never hear about anyone being denied access to an LLM. The same goes for businesses: they’re being sold AI crap left and right, and there is never a bottleneck or a queue. In fact, there seems to be plenty of “compute” to go around, far more than is needed, really.
Has this ever been pointed out to the AI CEOs? Has it been discussed or explained?


Two factors play into this. 1) The hype has been around long enough for new data centers to be planned, built, and brought online, so the bottlenecks some providers had in the early days have already eased. And 2) they need capacity for training more than for serving the models.