The conventional wisdom, well captured recently by Ethan Mollick, is that LLMs are advancing exponentially. A few days ago, in a very popular blog post, Mollick claimed that “the current best estimates of the rate of improvement in Large Language models show capabilities doubling every 5 to 14 months”:
Hopefully we’re close to realizing that generative LLMs are essentially a fad: neat technology, but without enough horsepower to do the really interesting stuff.