LLMs are specifically and exclusively designed to appeal to investors. Once you accept that as fact, the rest just falls into place.
A Boring Dystopia
Pictures, Videos, Articles showing just how boring it is to live in a dystopic society, or with signs of a dystopic society.
Yeah, Gen AI is a great demo with very limited real-world applications. It's like a mock-up website with pretty graphs and placeholder text: it conveys potential, but in that state it has very limited functionality for real people.
Yeah, people talk about them replacing employees. But imagine an employee who wrote reports using random made-up facts whenever they didn't know something, presented them as completely true, insisted they were true even when caught and shown direct evidence to the contrary, and occasionally hallucinated wildly and spouted gibberish for seemingly no reason at all. I don't think they'd last long.
They could be president of the United States
This is the best, most succinct comment on this I've ever read.
It told me Biden won the 2024 election. I thought I landed in an alternate timeline.
Can we go there? can you show me the way?
it told me how trump stole the election, and gave a step-by-step analysis of how they used AI and billionaire backing to do it: how they would have hacked the voting machines, astroturfed movements and groups, used bots to sway opinions, made robocalls to confuse voters, and of course spread a shitload of automated propaganda, among other tactics. the conversation is no longer present on my profile, and i didn't delete it myself.
hallucination or not, that's whack.
It's not all hallucination. This is why Musk wants to buy OpenAI. They want to control this so we can't fact-check and unravel their schemes. Anyone can do it.
I promise you this is why they want to control AI. They want to use it to devise schemes and not let us use it to expose them.
they already have control. sam altman donated 1 million to trump for deregulation and for what later became a part in the stargate contract. yes, elon wants it, or at least some power over it, if he can't replace them entirely with his own Grok version through xAI. however, the cost of doing business for OpenAI in the US is essentially handing power to the US government via military usage, surveillance, automated propaganda, and censorship.
it did help, in part, to sway the US election. it's also being used right now to create their policy plans, and it's doing an obviously horrible job.
AI is a powerful tool in the right hands, with the right prompts. but when you give it to a moron who elects other complicit morons, you get the shit show that has become the united states.
it's a shame, but human error and the environment it was created in were always going to be the downfall of AI.
it is a very flawed "oracle" that is divining an impossible future for the ruling class. and they are too stupid and selfish to think there is any future where they don't have full control.
they will lose, it's only a matter of time. and it's going to be a very violent awakening.
Here's hoping my friend. Cheers
Yes, this was a specific problem with Gemini. They obviously tried to overcorrect for hallucinations and gullibility, but that ended up making the model certain of its hallucinations.
Hallucination rate for their latest model is 0.7%
https://github.com/vectara/hallucination-leaderboard
Should be <0.1% within a year
Hallucinations when summarizing are significantly lower than when generating code (since the original document would be in context)
It's no more a conman than the average person. The problem is that people consider it an oracle of truth and get shocked when they discover it can be just as deceitful as the next person.
All it takes is to run the same question by different AI models, get conflicting answers, and see the difference, to understand that at least one of the answers is wrong.
But alas...
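The cross-checking idea above is easy to automate. Here is a minimal sketch: ask several models the same question and flag any disagreement. The model answers are hard-coded stand-ins (the model names and responses are hypothetical); in practice each answer would come from a different provider's API.

```python
def cross_check(answers_by_model: dict[str, str]) -> set[str]:
    """Return the set of distinct answers, normalized for trivial differences.

    If the set contains more than one element, the models disagree,
    so at least one of them must be wrong.
    """
    return {answer.strip().lower() for answer in answers_by_model.values()}


# Hypothetical responses from three models to the same factual question:
answers = {
    "model_a": "Paris",
    "model_b": "paris",   # same answer, different casing -- normalized away
    "model_c": "Lyon",    # disagreement: at least one model is wrong
}

distinct = cross_check(answers)
print(len(distinct) > 1)  # prints True: the models disagree
```

Of course, agreement doesn't prove correctness (models can share the same wrong answer), but disagreement is cheap, automatic proof that at least one answer is a hallucination.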
The problem is that people consider it an oracle of truth
Because that’s how it is presented by the con men getting rich off this con.
Just don't use it. Duh.