I'm not in the loop about what people expect of AI or what state-of-the-art models can do, but here's my review.
I've found ChatGPT useful for code snippets, rewording paragraphs, writing emails, and generating fun images. It saves time, though I still need to adjust the output. I also use an LLM for code suggestions, which I love.
I can't use it for things I don't already understand well. Whenever I try to diagnose issues on my Linux machine, I get dragged down tangents, I get confused, and after many messages it completely forgets what we were doing. I would need to already know how things work to be able to navigate this.
It doesn't do well with niche information. I haven't been able to get it to build me a functional EDH deck, and I can't get basic information about my not-so-niche field of research. It gets things wrong too often for me to trust it as a source of information.
Overall, I've found it very useful for the things it's good at, and I understand why it fails at the other tasks. But I assume someone better equipped than me could prompt-engineer, or adjust the model somehow, to make it useful for those tasks as well.
I've also heard of really impressive uses, like AlphaGo and AlphaFold, and I'm sure there are more recent examples.
But I honestly don't understand what people are envisioning these AIs are supposed to do. I'm probably just ignorant of the state of the art, but it seems absurd to me that an AI could tell you how to make government efficient just by throwing existing data at it, to use a topical example.
This is going into my Doran deck