this post was submitted on 03 Jun 2025
287 points (100.0% liked)

Technology

[–] [email protected] 25 points 1 week ago (1 children)

They’ll have jobs right up to the introduction of the next gen AI.

[–] [email protected] 18 points 1 week ago (2 children)

That might be a while. AI cannibalizing itself is a real problem right now and it's only going to get worse.

[–] [email protected] 17 points 1 week ago (2 children)

Pretty much. AI models (LLMs specifically) are just fancy statistical models, which means that when they ingest data with no reasoning behind it (think of the many AI hallucinations our brains manage to catch and filter out), it corrupts the entire training process. The problem is that AI can no longer distinguish AI-generated text from human text, so it just ingests more and more "garbage", which leads to worse results. There's a reason progress in AI models has almost completely stalled compared to when this craze first started: the companies have an increasingly hard time actually improving the models because there is more and more garbage in the training data.
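The degradation described above can be demonstrated with a toy simulation (not an LLM; a 1-D Gaussian stands in for the "model", a common illustration of model collapse). Each generation is fit only to samples drawn from the previous generation's fit, and the distribution's diversity steadily drains away:

```python
import random
import statistics

random.seed(0)

def fit(data):
    """Maximum-likelihood fit of a 1-D normal: mean and (population) std dev."""
    return statistics.fmean(data), statistics.pstdev(data)

N, GENERATIONS = 20, 100

# generation 0: "human-written" data, drawn from a standard normal
mu, sigma = fit([random.gauss(0, 1) for _ in range(N)])

# every later generation trains only on output sampled from the previous model
history = [sigma]
for _ in range(GENERATIONS):
    mu, sigma = fit([random.gauss(mu, sigma) for _ in range(N)])
    history.append(sigma)

print(f"std dev: generation 0 = {history[0]:.3f}, "
      f"generation {GENERATIONS} = {history[-1]:.3f}")
```

Because the finite-sample variance estimate shrinks slightly in expectation each round, the spread collapses toward zero over many generations; the same feedback loop, in vastly more complex form, is what "AI eating its own output" refers to.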

[–] [email protected] 15 points 1 week ago* (last edited 1 week ago)

There's actually a lot of human intervention in the mix. Data labelers for source data, also domain experts who will rectify answers after a first layer of training, some layers of prompts to improve common answers. Without those domain experts, the LLM would never have the nice looking answers we are getting. I think the human intervention is going to increase to counter the AI pollution in the data sources. But it may not be economically viable anymore eventually.

This is a nice deep dive of the different steps to make today's LLMs: https://youtube.com/watch?v=7xTGNNLPyMI

[–] [email protected] 8 points 1 week ago (1 children)

The obvious follow-up is: how can I help hasten the decline?

[–] [email protected] 9 points 1 week ago* (last edited 1 week ago) (2 children)

Make an account on Twitter and Reddit and use ChatGPT to generate content. AI models will scrape the data and use it for training: basically an Ouroboros, also known as model collapse.

[–] [email protected] 2 points 1 week ago

No need to bother; Reddit is already full of entire threads of GPT posts. The megacorps are killing their own product for us.

[–] [email protected] 1 points 1 week ago

Please use Gemini and Bing too. Mix it up a bit.

[–] [email protected] 8 points 1 week ago (1 children)

Let's hope the current AI chokes on the crap it produces and then eats.

[–] [email protected] 6 points 1 week ago (1 children)

Don't forget, there are also people deliberately poisoning AI. Truly doing God's work.

[–] [email protected] 7 points 1 week ago

Yup. Speaking from a fairly high-karma account on Stack Overflow: I gave up on it when they decided to sell my answers and questions for AI training. First I wanted to delete my account, but my data would have stayed. So I started editing my answers to say "fuck AI" (in a nutshell). I got suspended for a couple of months "to think about what I did". So I dug deep into my conscience and came up with a better plan: I went through my answers (and questions) and poisoned them little by little, day by day, with small errors. After that I haven't visited that crap network anymore. Before all this I was there all the time and had lots of karma (or whatever it was called there); couldn't care less after the AI crap. I honestly hope I made the AI, which was and probably still is trained on data its users never consented to having sold, a little bit shittier.