Your statement that there is no way of fact checking is not 100% correct, as developers have found ways to ground LLMs, e.g., by prepending context pulled from "real time" sources of truth (e.g., search engines). This data is then incorporated into the prompt as context. Obviously this is kind of cheating and not baked into the LLM itself, but it can be pretty accurate for a lot of use cases.
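A minimal sketch of what that grounding looks like, assuming a hypothetical `search_web()` retriever and `ask_llm()` model call (both are stand-ins, not real APIs):

```python
# Sketch of prompt grounding: retrieved facts are prepended to the
# prompt so the model answers from them rather than from its frozen
# training data. search_web() and ask_llm() are hypothetical stubs.

def search_web(query: str) -> list[str]:
    # Stand-in for a real-time source of truth (e.g., a search API).
    return ["Berlin is the capital of Germany."]

def ask_llm(prompt: str) -> str:
    # Stand-in for an actual LLM completion call; here it just
    # echoes the start of the prompt it was given.
    return f"(model answer based on: {prompt[:60]}...)"

def grounded_answer(question: str) -> str:
    snippets = search_web(question)
    context = "\n".join(f"- {s}" for s in snippets)
    prompt = (
        "Answer using ONLY the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
    )
    return ask_llm(prompt)

print(grounded_answer("What is the capital of Germany?"))
```

The point is only the prompt assembly step: the "fact checking" happens outside the model, in whatever retrieval layer fills the context.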
iamkindasomeone
And some of us aren’t :) The meme speaks globally.
you think too American!
They still are… cars. We don’t need more cars on our streets. Yeah, they could help replace some old combustion cars, but they are still worse than public transport and bicycles.
Uncle Roger that you?
Yeah, but this distinction wasn't given in the original comment.
Not always true. The Baltic Sea is "Ostsee" (East Sea) and the North Sea is "Nordsee". The high seas are "Hohe See", etc. The Mediterranean is "Mittelmeer", though.
I don’t quite understand what you mean by extrapolating on information. LLMs have no model of what information or truth is. However, factual information can be passed into the context, the way Bing does it.