this post was submitted on 07 Jul 2025
906 points (100.0% liked)

[–] [email protected] 33 points 1 week ago* (last edited 1 week ago) (5 children)

I'd just like to point out that, from the perspective of somebody who has watched AI develop for the past 10 years, completing 30% of automated tasks successfully is pretty good! Ten years ago they could not do this at all. Setting aside all the other issues with AI, I think we are all irritated with the AI hype people for saying things like they can be right 100% of the time -- Amazon's new CEO actually said they would be able to achieve 100% accuracy this year, lmao. But being able to do 30% of tasks successfully is already useful.

[–] [email protected] 4 points 6 days ago (1 children)

I think this comment made me finally understand the AI hate circlejerk on lemmy. If you have no clue how LLMs work and you have no idea where "AI" is coming from, it just looks like another crappy product that was thrown on the market half-ready. I guess you can only appreciate the absolutely incredible development of LLMs (and AI in general) that happened during the last ~5 years if you can actually see it in the first place.

[–] [email protected] 4 points 6 days ago

The notion that AI is half-ready is a really apt observation, actually. It's ready for select applications only, but it's being advertised as if it were idiot-proof and ready for general use.

[–] [email protected] 2 points 6 days ago (1 children)

Thing is, they might achieve 99% accuracy given the speed of progress. Lots of brainpower is getting poured into LLMs. Honestly, it is so scary. It could end up replacing me...

[–] [email protected] 1 points 6 days ago

yeah, this is why I'm #fuck-ai to be honest.

[–] [email protected] 31 points 1 week ago (7 children)

It doesn't matter if you need a human to review. AI has no way of distinguishing between success and failure. Either way a human will have to review 100% of those tasks.

[–] [email protected] 13 points 1 week ago (1 children)

Right, so this is really only useful in cases where either it's vastly easier to verify an answer than posit one, or if a conventional program can verify the result of the AI's output.
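
Roughly what I mean, as a toy sketch (the `ask_llm` call here is just a made-up stand-in, not any real API): finding factors of a big number is hard, but checking a proposed answer is a one-liner.

```python
# Toy example of "cheap to verify, expensive to produce": a hypothetical
# model call proposes factors, and a plain deterministic function checks them.

def ask_llm(prompt: str) -> list[int]:
    # Stand-in for a real model call; returns a canned answer so this runs.
    return [999_983, 2_147_483_647]

def verify_factors(n: int, factors: list[int]) -> bool:
    """Accept only if every factor is > 1 and they multiply back to n."""
    product = 1
    for f in factors:
        if f < 2:
            return False
        product *= f
    return product == n

n = 999_983 * 2_147_483_647
candidate = ask_llm(f"List integer factors (each > 1) of {n}.")
print("accepted" if verify_factors(n, candidate) else "rejected, needs a human")
```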

[–] [email protected] 5 points 1 week ago (9 children)

It’s usually vastly easier to verify an answer than posit one, if you have the patience to do so.

I'm envisioning a world where multiple AI engines create and check each others' work... the first thing they need to make work to support that scenario is probably fusion power.

[–] [email protected] 7 points 1 week ago (1 children)

I have been using AI to write (little, near trivial) programs. It's blindingly obvious that it could be feeding this code to a compiler and catching its mistakes before giving them to me, but it doesn't... yet.
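
Something like this loop, roughly (the `generate_code` function below is a made-up stand-in for whatever model you'd actually call, with canned answers so the sketch runs end to end):

```python
# Sketch of the loop described above: generate code, try to compile it,
# and feed any error back to the model before a human ever sees the result.

def generate_code(prompt: str) -> str:
    # Hypothetical stand-in for an LLM call; first attempt is deliberately broken.
    generate_code.calls = getattr(generate_code, "calls", 0) + 1
    if generate_code.calls == 1:
        return "def add(a, b)\n    return a + b\n"   # missing colon
    return "def add(a, b):\n    return a + b\n"

def generate_checked(prompt: str, max_attempts: int = 3) -> str | None:
    feedback = ""
    for _ in range(max_attempts):
        source = generate_code(prompt + feedback)
        try:
            compile(source, "<generated>", "exec")   # syntax check only
            return source
        except SyntaxError as err:
            feedback = f"\nThe last attempt failed to compile: {err}. Fix it."
    return None  # give up and hand it to the human as-is

print(generate_checked("Write a Python function add(a, b)."))
```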

[–] [email protected] 2 points 6 days ago

Agents do that loop pretty well now, and Claude now uses your IDE's LSP to help it code and catch errors in flow. I think Windsurf and Cursor do that too.

The tooling has improved a ton in the last 3 months.

[–] [email protected] 14 points 1 week ago (1 children)

> being able to do 30% of tasks successfully is already useful.

If you have a good testing program, it can be.

If you use AI to write the test cases...? I wouldn't fly on that airplane.
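
To make the distinction concrete, a tiny sketch (the "AI-generated" function here is obviously faked): the test cases are written by a human up front, and the generated code only ships if it passes all of them.

```python
# Human-written test cases define "correct" independently of the model;
# AI-generated code is only accepted if it passes every one of them.

def ai_generated_sort(items):
    # Pretend this body came out of a model.
    return sorted(items)

HUMAN_TESTS = [
    ([], []),
    ([3, 1, 2], [1, 2, 3]),
    ([1, 1, 0], [0, 1, 1]),
    ([-5, 10, 0], [-5, 0, 10]),
]

def accept(candidate) -> bool:
    return all(candidate(list(inp)) == expected for inp, expected in HUMAN_TESTS)

print("ship it" if accept(ai_generated_sort) else "back to a human for review")
```

If the test cases themselves came out of the same model, this check tells you almost nothing.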

[–] [email protected] 16 points 1 week ago (1 children)

I'm not claiming that the use of AI is ethical. If you want to fight back you have to take it seriously though.

[–] [email protected] 8 points 1 week ago (1 children)

It can't do 30% of tasks correctly. It can do tasks correctly as much as 30% of the time, and since it's LLM shit you know those numbers have been massaged more than any human in history has ever been.

[–] [email protected] 7 points 1 week ago (37 children)

I meant the latter, not "it can do 30% of tasks correctly 100% of the time."
