I think most ML experts (at least the ones who weren't being paid out the wazoo to say otherwise) have been saying we're on the tail end of the LLM technology sigmoid curve. (Basically, if you treat an LLM as a stochastic index, the real measure of training-algorithm quality is query accuracy per training datum.)
Even with DeepSeek's methodology, you see smaller and smaller returns for each additional unit of training input.
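To make "smaller and smaller returns" concrete, here's a toy sketch that assumes accuracy follows a logistic curve in log10(training tokens); every constant (midpoint, steepness, ceiling) is an invented illustration, not a fit to any real model:

```python
# Toy illustration of diminishing returns, NOT a model of any real training run:
# assume query accuracy is a logistic (sigmoid) function of log10(training tokens),
# then look at the marginal accuracy gained per extra order of magnitude of data.
# All constants are made-up assumptions for illustration only.
import math

def accuracy(log10_tokens, midpoint=11.0, steepness=1.5, ceiling=0.95):
    """Hypothetical accuracy as a logistic function of log10(training tokens)."""
    return ceiling / (1.0 + math.exp(-steepness * (log10_tokens - midpoint)))

prev = None
for log10_tokens in range(9, 15):  # 1e9 .. 1e14 training tokens
    acc = accuracy(log10_tokens)
    gain = acc - prev if prev is not None else float("nan")
    print(f"1e{log10_tokens} tokens: accuracy={acc:.3f}  marginal gain={gain:+.3f}")
    prev = acc
```

Under that assumption, once you're past the midpoint of the curve, each extra 10x of data buys a shrinking accuracy bump, which is what "tail end of the sigmoid" means in practice.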
Maybe we need to start moving to instances where we won't be banned for saying that stuff.
Isn't the fediverse supposed to be resistant to censorship?