this post was submitted on 09 Feb 2025
64 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

top 18 comments
[–] jaschop@awful.systems 24 points 1 month ago* (last edited 1 month ago) (1 children)

While browsing the references of the paper, I found such a perfect evisceration of GenAI.

We have confused what we can write down with what we usefully know and compounded the error by supposing that because computers can help us write down more they can obviously help us know more.

The Marks Are on the Knowledge Worker, Alison Kidd

That's from 1994, folks. They were talking about the wonder of relational databases.

[–] bitofhope@awful.systems 7 points 1 month ago

Big Data was never exactly my fave, but I still liked them better than Genai after he went solo. Some people just never learn that it's never about the size, but how you use it.

Whoops, I dropped my monster Hadoop that I use for my magnum datalake.

[–] remotelove@lemmy.ca 21 points 1 month ago (2 children)

Qualitatively, GenAI shifts the nature of critical thinking toward information verification, response integration, and task stewardship.

Information verification is super important, probably just as important as raw critical thinking. However, when a person is stuck only validating shit output from GenAI, I could see that as a negative.

[–] dgerard@awful.systems 16 points 1 month ago

and as the paper details, they don't do that either

[–] 3dmvr@lemm.ee 5 points 1 month ago* (last edited 1 month ago)

I get forced to do more critical thinking than I want, faster than I normally would, since I'm getting the fast responses. It's tiring. If I do something myself I go at my own pace; with AI there's always more stuff to check and do and think about, so I burn out.

[–] HK65@sopuli.xyz 2 points 1 month ago (1 children)

It's not stupid people, just people rightfully not giving a fuck IMO.

[–] dgerard@awful.systems 12 points 1 month ago

read the paper, they're very stupid also

[–] kitnaht@lemmy.world 1 point 1 month ago (4 children)

It was already known that “users with access to GenAI tools produce a less diverse set of outcomes for the same task.”

Why is this portrayed as a bad thing? Correct answers are correct answers. The only thing LLMs typically are bad at, are things that are seldom discussed or have some ambiguity behind them. So long as users understand the limitations of AI and understand when and where to trust them, why is less diversity in their output a bad thing?

We regularly seek uniformity in output in order to better handle it in downstream tasks. I don't see this as a bad thing at all.

[–] Amoeba_Girl@awful.systems 15 points 1 month ago

The only thing LLMs typically are bad at, are things that are seldom discussed or have some ambiguity behind them.

yeah no wonder you're a racist cunt

[–] dgerard@awful.systems 14 points 1 month ago* (last edited 1 month ago) (1 children)

why don't you look at the paper then and find out

Correct answers are correct answers.

you should be so lucky

So long as users understand the limitations of AI

this isn't those people

[–] kitnaht@lemmy.world 1 point 1 month ago* (last edited 1 month ago) (2 children)

I have read the paper. How about not immediately jumping to a condescending, patronizing tone?

Also, you didn't answer the question. It simply says "users with access to GenAI tools". You've added your own qualifications separate from the question at hand.

[–] self@awful.systems 15 points 1 month ago (1 children)

how about you go fuck yourself

The only thing LLMs typically are bad at

is everything. including summarizing research since it’s pretty fucking obvious you didn’t read shit. now fuck off

[–] self@awful.systems 14 points 1 month ago (2 children)

also, holy fuck their post history is essentially nothing but unsubtle dogwhistles and pro-AI garbage

[–] dgerard@awful.systems 12 points 1 month ago

lol the nazi sees nothing wrong with LLMs of course

[–] froztbyte@awful.systems 8 points 1 month ago

......I did it again. I looked.

oof.

[–] swlabr@awful.systems 15 points 1 month ago

The condescension and patronisation is well deserved. Your question is answered in the fucking title of the paper.

The Impact of Generative AI on Critical Thinking

If you’d ever engaged in critical thinking, then maybe we could have avoided this exercise.

[–] froztbyte@awful.systems 10 points 1 month ago

others have said the bits that matter already, but for my part: what in the fuck kind of post is this

[–] V0ldek@awful.systems 8 points 1 month ago

Correct answers are correct answers. The only thing LLMs typically are bad at, are things that are seldom discussed or have some ambiguity behind them.

Lol what, how many questions you ask in your life are entirely unambiguous and devoid of nuance? That sounds like a you issue.