The conventional wisdom, well captured recently by Ethan Mollick, is that LLMs are advancing exponentially. A few days ago, in a very popular blog post, Mollick claimed that “the current best estimates of the rate of improvement in Large Language models show capabilities doubling every 5 to 14 months.”
The only thing I’d want from them is fact-checking itself, which is impossible given how they work (they basically guess at everything). LLMs are close to being useful, but most of the time I find they’re not usable because they keep spouting wrong info.