this post was submitted on 21 Mar 2025
1443 points (100.0% liked)

Technology

[–] [email protected] 2 points 1 week ago (5 children)

People complain about AI possibly being unreliable, then actively root for things that are designed to make them unreliable.

[–] [email protected] 16 points 1 week ago (1 children)

I find this amusing. I had a conversation with an older relative who asked about AI because I am "the computer guy" he knows. I explained, basically, how I understand LLMs to operate: they are pattern matching to guess what the next token should be based on statistical probability. I explained that they sometimes hallucinate or go off on wild tangents because of this, and that they can be really good at aping and regurgitating things, but there is no understanding, just respinning fragments to try to generate a response that pleases the asker.

He observed, "Oh, we are creating computer religions, just without the practical aspects of having to operate in the mundane world that have to exist before a real religion can get started. That's good; religions that have become untethered from day-to-day practical life have never caused problems for anyone."

Which I found scarily insightful.
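The next-token guessing described above can be sketched in a few lines. This is a toy bigram counter, nothing like a real LLM's scale, but it shows the same core idea: the "model" is just statistics about which token tends to follow which.

```python
from collections import Counter, defaultdict

# Toy illustration (not a real LLM): count which token follows
# which, then guess the statistically likely next token.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token(prev):
    # Pick the most frequent follower; a real model samples from a
    # probability distribution over its whole vocabulary instead.
    return follows[prev].most_common(1)[0][0]

print(next_token("the"))  # "cat", since "cat" follows "the" most often here
```

There is no comprehension anywhere in this loop, only frequencies, which is exactly the point being made above.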

[–] [email protected] 4 points 1 week ago (1 children)

Oh good.

Now I can add digital jihad by hallucinating AI to the list of my existential terrors.

Thank your relative for me.

[–] [email protected] 2 points 1 week ago (1 children)

Not if we go Butlerian Jihad on them first.

[–] [email protected] 2 points 1 week ago

lol, I was gonna say a reverse Butlerian Jihad, but I didn't think many people would get the reference :p

[–] [email protected] 8 points 1 week ago (1 children)

Here's the key distinction:

This only makes AI models unreliable if they ignore "don't scrape my site" requests. If they respect the requests of the sites whose data they're profiting from, then there's no issue.

People want AI models not to be unreliable, but they also want them to operate with integrity in the first place and not profit from the work of people who explicitly opted their work out of training.

[–] [email protected] 4 points 1 week ago* (last edited 1 week ago) (1 children)

I'm a person.

I don't want AI, period.

We can't even handle humans going psycho. The last thing I want is an AI losing its shit from being overworked producing goblin tentacle porn and going full Skynet Judgement Day.

Got enough on my plate dealing with a semi-sentient Olestra stain trying to recreate the Third Reich as it is.

[–] [email protected] 6 points 1 week ago (1 children)

We can't even handle humans going psycho. The last thing I want is an AI losing its shit from being overworked producing goblin tentacle porn and going full Skynet Judgement Day.

That is simply not how "AI" models today are structured; that idea is entirely a fabrication based on science-fiction media.

An LLM is a series of matrix multiplication problems that the tokens from a query are run through. It has no capability to be overworked, no way to know whether it has been used before (outside of its context window, which is just previously stored tokens added to the math problem), no ability to change itself, and no arbitrary access to system resources.
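A toy sketch of that statelessness (hypothetical shapes and names, nothing like a production model): the "model" is just fixed matrices, and the only "memory" is the token list the caller keeps appending to and re-sending.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, DIM = 50, 8

# Fixed weights: the model itself never changes between calls.
embed = rng.normal(size=(VOCAB, DIM))
unembed = rng.normal(size=(DIM, VOCAB))

def forward(context):
    """Pure function of its input tokens: same context in, same
    answer out. No counter, no fatigue, no system access."""
    h = embed[context].mean(axis=0)   # crude pooling of the context
    scores = h @ unembed              # one matrix multiplication
    return int(np.argmax(scores))     # "most likely" next token id

# The "context window" is just tokens the caller stores and re-sends.
context = [3, 17, 42]
context.append(forward(context))
context.append(forward(context))
```

Run the same context through twice and you get the identical result; there is no hidden state left over from the previous call.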

[–] [email protected] 1 points 1 week ago (1 children)
[–] [email protected] 2 points 1 week ago (1 children)
  1. Say something blatantly uninformed on an online forum
  2. Get corrected on it
  3. Make reference to how someone is perceived at parties, an entirely different atmosphere from an online forum, and think you made a point

Good job.

[–] [email protected] 1 points 1 week ago* (last edited 1 week ago)
  1. See someone make a comment about an AI going rogue after being forced to produce too much goblin tentacle porn.
  2. Get way too serious over the factual capabilities of a goblin-tentacle-porn-generating AI.
  3. Act holier-than-thou over it while being completely oblivious to comedic hyperbole.

Good job.

What's next? Calling me a fool for thinking Olestra stains are capable of sentience, because that's not how Olestra works?

[–] [email protected] 5 points 1 week ago* (last edited 1 week ago)

Maybe it will learn discretion and what sarcasm is, instead of being a front-loaded Google search of 90% ads and 10% forums. It has no way of knowing if what it's copy-pasting is full of shit.

[–] [email protected] 4 points 1 week ago

This will only worsen the quality of models from bad actors who don't follow the rules. You want to sell a good-quality AI model trained on real content instead of other models' misleading output? Just follow the rules ;)

Doesn't sound too bad to me.

[–] [email protected] 3 points 1 week ago

I mean, this is just designed to thwart AI bots that refuse to follow the robots.txt rules of people who specifically blocked them.
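For reference, honoring robots.txt is trivial; Python's standard library even ships a parser. The user-agent name and rules below are illustrative, and a real crawler would fetch the file from the site rather than parse an inline string.

```python
from urllib.robotparser import RobotFileParser

# Parse an illustrative robots.txt; a real crawler would call
# rp.set_url("https://example.com/robots.txt") and rp.read().
rp = RobotFileParser()
rp.parse("""
User-agent: ExampleAIBot
Disallow: /
""".splitlines())

# A well-behaved scraper checks before every request.
print(rp.can_fetch("ExampleAIBot", "https://example.com/article"))  # False
print(rp.can_fetch("SomeOtherBot", "https://example.com/article"))  # True
```

The tarpits being discussed only ever catch bots that skip this check, since compliant crawlers never request the disallowed paths in the first place.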