this post was submitted on 10 Jun 2025
44 points (100.0% liked)

Technology


I'm curious about the strong negative feelings towards AI and LLMs. While I don't defend them, I see their usefulness, especially in coding. Is the backlash due to media narratives about AI replacing software engineers? Or is it the theft of training material without attribution? I want to understand why this topic evokes such emotion and why discussions often focus on negativity rather than control, safety, or advancements.

(page 2) 22 comments
[–] [email protected] 3 points 1 week ago (1 children)

It's easy to deny it's built on stolen content and difficult to prove otherwise. AI companies know this, and they've been caught stealing shitty drawings from children and buying user data that should've been private.

[–] [email protected] 3 points 1 week ago

Dunning-Kruger effect.

Lots of people now think they can be developers because they made a shitty, half-working game using vibe coding.

Would you trust a surgeon who relies on ChatGPT? So why should you trust an LLM to develop programs? You know that airplanes, nuclear power plants, and a LOT of critical infrastructure rely on software, right?

[–] [email protected] 3 points 1 week ago

It's not particularly accurate, and then there are the privacy concerns.

[–] [email protected] 2 points 1 week ago* (last edited 1 week ago)

Karma farming, as with everything on any social network, centralized or decentralized. I'm not exactly enthusiastic about AI, but I can tell it has its use cases (with caution). AI itself is not the problem. Most likely, the corporations behind it are; their practices are not always transparent.

[–] [email protected] 2 points 1 week ago

Since several people have already answered your questions, I'll just clarify a few points.

Not all countries consider AI training on copyrighted material theft. For example, Japan has allowed AI to be trained on copyrighted material since 2019, which is strange because that country is known for its strict laws in that regard.

Also, saying that AI can't or won't harm society sells, even though I don't deny the consequences of this technology. But that reassurance only works as long as AI doesn't get better, because then it could prove counterproductive.

[–] [email protected] 2 points 1 week ago* (last edited 1 week ago)

"AI" is a pseudo-scientific grift.

Perhaps more importantly, the underlying technologies (like any technology) are already co-opted by the state, capitalism, imperialism, etc. for the purposes of violence, surveillance, control, etc.

Sure, it's cool for a chatbot to summarize stackexchange but it's much less cool to track and murder people while committing genocide. In either case there is no "intelligence" apart from the humans involved. "AI" is primarily a tool for terrible people to do terrible things while putting the responsibility on some ethereal, unaccountable "intelligence" (aka a computer).

[–] [email protected] 2 points 1 week ago

If you don’t hate AI, you’re not informed enough.

It has the potential to disrupt pretty much everything in a negative way. Especially when regulations always lag behind. AI will be abused by corporations in the worst way possible, while also being bad for the planet.

And the people who are most excited about it, tend to be the biggest shitheads. Basically, no informed person should want AI anywhere near them unless they directly control it.

[–] [email protected] 2 points 1 week ago (2 children)

My skepticism is because it’s kind of trash for general use. I see great promise in specialized A.I. Stuff like Deepfold or astronomy situations where the telescope data is coming in hot and it would take years for humans to go through it all.

But I don’t think it should be in everything. Google shouldn’t be sticking LLM summaries at the top. It hallucinates so I need to check the veracity anyway. In medicine, it can help double-check but it can’t be the doctor. It’s just not there yet and might never get there. Progress has kind of stalled.

So, I don’t “hate” any technology. I hate when people misapply it. To me, it’s (at best) beta software and should not be in production anywhere important. If you want to use it for summarizing Scooby Doo episodes, fine. But it shouldn’t be part of anything we rely on yet.

[–] [email protected] 1 points 1 week ago* (last edited 1 week ago)

Because so far we only see the negative impacts on human society, IMO. The latest news hasn't helped at all, not to mention how the USA is moving toward AI. Every positive use of AI ends up in the workplace, which then most likely leads to layoffs. I'm starting to think that Finch in POI was right all along.

edit: They sell us an unfinished product, which we then build on in the wrong way.
