this post was submitted on 18 May 2025
229 points (100.0% liked)

Ask Lemmy


A Fediverse community for open-ended, thought-provoking questions



Lots of people on Lemmy really dislike AI’s current implementations and use cases.

I’m trying to understand what people would want to be happening right now.

Destroy gen AI? Implement laws? Hoping all companies use it for altruistic purposes to help all of mankind?

Thanks for the discourse. Please keep it civil, but happy to be your punching bag.

you are viewing a single comment's thread
[–] [email protected] 87 points 2 months ago (6 children)

I want people to figure out how to think for themselves and create for themselves without leaning on a glorified Markov chain. That's what I want.
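For anyone unfamiliar with the jab: a Markov chain text generator picks the next word purely from what words have followed the current one before, with no understanding at all. This toy sketch is only an illustration of that "predict the next token from context" idea, not how modern LLMs actually work (they use learned neural predictors, not lookup tables):

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each run of `order` words to the words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        key = tuple(words[i:i + order])
        chain[key].append(words[i + order])
    return chain

def generate(chain, start, n=10, seed=0):
    """Emit up to n more words by repeatedly sampling an observed successor."""
    rng = random.Random(seed)
    out = list(start)
    for _ in range(n):
        key = tuple(out[-len(start):])
        successors = chain.get(key)
        if not successors:  # dead end: nothing ever followed this context
            break
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran off the mat"
chain = build_chain(corpus)
print(generate(chain, ("the",), n=8))
```

The output is locally plausible but globally meaningless, which is exactly the criticism being made.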

[–] [email protected] 31 points 2 months ago (2 children)

AI people always want to ignore the environmental damage as well...

Like all that electricity and water are just super abundant things humans have plenty of.

Every time some idiot asks AI instead of googling it themselves, the planet gets a little more fucked

[–] [email protected] 7 points 2 months ago (2 children)

Are you not aware that Google also runs on giant data centers that eat enormous amounts of power too?

[–] [email protected] 7 points 2 months ago* (last edited 2 months ago) (1 children)

~~Multiple things can be bad at the same time, they don't all need to be listed every time any one bad thing is mentioned.~~

[–] [email protected] 5 points 2 months ago (1 children)

I wasn't listing other bad things, this is not a whataboutism, this was a specific criticism of telling people not to use one thing because it uses a ton of power/water when the thing they're telling people to use instead also uses a ton of power/water.

[–] [email protected] 4 points 2 months ago (1 children)

Yeah, you're right. I think I misread your/their comment initially or something. Sorry about that.

And ai is in search engines now too, so even if asking chatfuckinggpt uses more water than google searching something used to, google now has its own additional fresh water resource depletor to insert unwanted ai into whatever you look up.

We're fucked.

[–] [email protected] 3 points 2 months ago

Fair enough.

Yeah, the integration of AI with chat will just make it eat even more power, of course.

[–] [email protected] 2 points 2 months ago (1 children)

This is like saying a giant truck is the same as a Civic for a 2-hour commute ...

[–] [email protected] 4 points 2 months ago (1 children)

Per: https://www.rwdigital.ca/blog/how-much-energy-do-google-search-and-chatgpt-use/

Google search currently uses 1.05 GWh/day. ChatGPT currently uses 621.4 MWh/day.

The per-query cost for Google is about 10% of what it is for GPT, but Google gets used quite a lot more. So for one user, 'just use Google' is fine, but since we're making prescriptions for all of society here, we should think at scale: there are ~300 million cars in the US, and even if they were all Honda Civics they would still burn a shitload of gas and create a shitload of fossil-fuel emissions. All I'm saying is that if the goal is to reduce emissions, we should look at the big picture, which will let you understand that taking the bus will do you a lot better than trading in your F-150 for a Civic.
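A quick back-of-envelope check of that ~10% figure. The daily energy totals are the ones quoted above; the daily query volumes are rough outside estimates I'm assuming for illustration, not numbers from the linked article:

```python
# Daily energy totals quoted in the thread (from rwdigital.ca).
GOOGLE_WH_PER_DAY = 1.05e9    # 1.05 GWh/day
CHATGPT_WH_PER_DAY = 621.4e6  # 621.4 MWh/day

# Assumed daily query volumes -- rough estimates, and they vary widely by source.
GOOGLE_QUERIES_PER_DAY = 8.5e9   # commonly cited search volume
CHATGPT_QUERIES_PER_DAY = 5.0e8  # rough guess at prompt volume

google_wh = GOOGLE_WH_PER_DAY / GOOGLE_QUERIES_PER_DAY
chatgpt_wh = CHATGPT_WH_PER_DAY / CHATGPT_QUERIES_PER_DAY

print(f"Google:  {google_wh:.2f} Wh/query")
print(f"ChatGPT: {chatgpt_wh:.2f} Wh/query")
print(f"Ratio:   {google_wh / chatgpt_wh:.0%}")
```

Under these assumed volumes a single search lands near 0.1 Wh and a single GPT prompt near 1.2 Wh, i.e. roughly the 10:1 per-query ratio claimed, but the totals-versus-per-query distinction is exactly what the two commenters are talking past each other about.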

[–] [email protected] 4 points 2 months ago* (last edited 2 months ago)

Google search currently uses 1.05 GWh/day. ChatGPT currently uses 621.4 MWh/day.

....

And oranges are orange

It doesn't matter what the totals are when people are talking about one or the other for a single use.

Fewer people commute to work on private jets than on buses; are you gonna say jets are fine and buses are the issue?

Because that's where your logic ends up

[–] [email protected] 3 points 1 month ago* (last edited 1 month ago)

This is my #1 issue with it. My work is super pushing AI. The other day I was trying to show a colleague how to do something in Teams, and as I'm trying to explain it to them (and they're ignoring where I'm telling them to click), they were like "you know, this would be a great use of AI to figure it out!".

I said no and asked them to give me their fucking mouse.

People are really out there fucking with extremely powerful wasteful AI for something as stupid as that.

[–] [email protected] 9 points 2 months ago (2 children)

People haven’t ”thought for themselves” since the printing press was invented. You gotta be more specific than that.

[–] [email protected] 26 points 2 months ago (1 children)

Ah, yes, the 14th century. That renowned period of independent critical thought and mainstream creativity. All downhill from there, I tell you.

[–] [email protected] 5 points 2 months ago (2 children)

Independent thought? All relevant thought is highly dependent on other people and their thoughts.

That’s exactly why I bring this up. Having systems that teach people to think in a similar way enables us to build complex stuff and have a modern society.

That’s why it’s really weird to hear this ”people should think for themselves” criticism of AI. It’s a similar justification to antivaxxers saying you ”should do your own research”.

Surely there are better reasons to oppose AI?

[–] [email protected] 10 points 2 months ago (1 children)

I agree on the sentiment, it was just a weird turn of phrase.

Social media has done a lot to temper my techno-optimism about free distribution of information, but I'm still not ready to flag the printing press as the decay of free-thinking.

[–] [email protected] 1 points 2 months ago (1 children)

Things are weirder than they seem on the surface.

A math professor colleague of mine calls extremely restrictive use of language ”rigor”, for example.

[–] [email protected] 6 points 2 months ago* (last edited 2 months ago) (1 children)

The point isn't that it's restrictive, the point is that words have precise technical meanings that are the same across authors, speakers, and time. It's rigorous because of that precision and consistency, not just because it's restrictive. It's necessary to be rigorous with use of language in scientific fields where clear communication is difficult but important to get right due to the complexity of the ideas at play.

[–] [email protected] 1 points 2 months ago

Yeah sure buddy.

Have you tried to shoehorn real life stuff into mathematical notation? It is restrictive. You have pre-defined strict boxes that don’t have blurry lines. Free form thoughts are a lot more flexible than that.

Consistency is restrictive. I don’t know why you take issue with that.

[–] [email protected] 10 points 2 months ago (1 children)

The usage of "independent thought" has never been "independent of all outside influence", it has simply meant going through the process of reasoning--thinking through a chain of logic--instead of accepting and regurgitating the conclusions of others without any of one's own reasoning. It's a similar lay meaning as being an independent adult. We all rely on others in some way, but an independent adult can usually accomplish activities of daily living through their own actions.

[–] [email protected] 1 points 2 months ago (1 children)

Yeah but that’s not what we are expecting people to do.

In our extremely complicated world, most thinking relies on trusting sources. You can’t independently study and derive most things.

Otherwise everybody should do their own research about vaccines. But the reasonable thing is to trust a lot of other, more knowledgeable people.

[–] [email protected] 2 points 1 month ago* (last edited 1 month ago) (1 children)

My comment doesn't suggest people have to run their own research study or develop their own treatise on every topic. It suggests people have to make a conscious choice, preferably with reasonable judgment, about which sources to trust, and to develop a lay understanding of the argument or conclusion they're repeating. Otherwise you end up with people on the left and right reflexively saying "communism bad" or "capitalism bad" because their social media environment repeats it a lot, but they'd be hard pressed to give even a loosely representative definition of either.

[–] [email protected] 1 points 1 month ago (1 children)

This has very little to do with the criticism given by the first commenter. And you can use AI and do this, they are not in any way exclusive.

[–] [email protected] 1 points 1 month ago (1 children)

This has very little to do with the criticism given by the first commenter.

How do? What would your alternative assertion be on the topic?

[–] [email protected] 1 points 1 month ago (1 children)

think for themselves and create for themselves without leaning on a glorified Markov chain

If you think your comment and this are the same thing, then I don't know what to say.

[–] [email protected] 1 points 1 month ago (1 children)

Well you didn't respond to my questions and you're vaguely referencing our other comments instead. It's not effective communication and leads me to think you didn't understand my comments. You seem to be into math, so I'll put it this way,

Be explicit, show your work: premises-->arguments-->conclusion

[–] [email protected] 1 points 1 month ago* (last edited 1 month ago) (1 children)

Well I first replied to that first comment. Then people started making completely different claims and the point got lost in the sauce.

Edit: why should I take the time to formulate my thoughts well if you have demonstrated that you don’t give even the slightest hint of good faith to understand what I’m saying?

[–] [email protected] 1 points 1 month ago (1 children)

Ah, I haven't looked at others' responses. I can see how responding to many different people gets messy.

But to answer your question, because I took the time to formulate my thoughts for you, and I responded directly to things you said in your comments. I also asked you directly "How so? What's your alternative assertion." Which was a good faith attempt to better understand what you meant.

[–] [email protected] 1 points 1 month ago (1 children)

Well, I do consider this post, as a rephrasing of

thinking through a chain of logic instead of accepting and regurgitating the conclusions of others without any of one’s own reasoning

not made in good faith. You don't engage with the point I'm making at all. Instead, you pivot from understanding the logic to making sure the sources are trustworthy. Which is a fair standard for critical thought or whatever, but definitely not what the original contention of the first commenter was. Which was heavily upvoted (= a popular opinion?), and which I originally replied to.

Also, hearing "How so? What’s your alternative assertion?" after ten comments' worth of people going out of their way to misunderstand my point, presumably because they dislike AI, is not motivating.

[–] [email protected] 1 points 1 month ago* (last edited 1 month ago) (1 children)

OP: I want people to think for themselves.

My understanding of your point: People have never done that because no thought is truly independent. Modern complexity relies on thought that builds upon others.

My point: Sure, but that's also a narrow and ungenerous interpretation of the term "independent thought" as per OP's usage. It's closer to critical thought than siloed thought developed from the ground up.

[–] [email protected] 1 points 1 month ago

Modern thought not only relies on thought built upon other people, it relies on trusting textbooks, data aggregators like weather apps, google search results, bus route apps, wikipedia, forum posts, etc. etc.

I don’t think it’s ungenerous at all to question whether LLMs are really any different in this regard. You take in information from an imperfect automated source, just as we’ve done for a really long time, depending on the definition.

The ”no thought is truly independent” framing is also a bit of a strawman. The point was: the more complex technology you have, the more the same ideas spread and thought is harmonized (which is good in some ways; standardization makes things easier).

[–] [email protected] 6 points 2 months ago

Speak for yourself.

[–] [email protected] 6 points 2 months ago

Maybe if the actual costs—especially including environmental costs from its energy use—were priced into each query, we’d start thinking for ourselves again. It’s not worth it for most of the things it’s used for at the moment.

[–] [email protected] 4 points 2 months ago (1 children)

So your argument against AI is that it's making us dumb? Just like people have claimed about every technology since the invention of writing? The essence of the human experience is change, we invent new tools and then those tools change how we interact with the world, that's how it's always been, but there have always been people saying the internet is making us dumb, or the TV, or books, or whatever.

[–] [email protected] 13 points 2 months ago (1 children)

Get back to me after you have a few dozen conversations with people who openly say "Well I asked ChatGPT and it said..." without providing any actual input of their own.

[–] [email protected] 2 points 2 months ago (1 children)

Oh, you mean like people have been saying about books for 500+ years?

[–] [email protected] 11 points 2 months ago* (last edited 2 months ago) (1 children)

Not remotely the same thing. Books almost always have context on what they are, like having an author listed, and hopefully citations if it's about real things. You can figure out more about it. LLMs create confident sounding outputs that are just predictions of what an output should look like based on the input. It didn't reason and doesn't tell you how it generated its response.

The problem is LLMs are sold to people as Artificial Intelligence, so it sounds like it's smart. In actuality, it doesn't think at all. It just generates confident results. It's literally companies selling con(fidence) men as a product, and people fully trust these con men.

[–] [email protected] 1 points 2 months ago (2 children)

Yeah, nobody has ever written a book that's full of bullshit, bad arguments, and obvious lies before, right?

Obviously anyone who uses any technology needs to be aware of the limitations and pitfalls, but to imagine that this is some entirely new kind of uniquely-harmful thing is to fail to understand the history of technology and society's responses to it.

[–] [email protected] 6 points 1 month ago (1 children)

Yeah, nobody has ever written a book that’s full of bullshit, bad arguments, and obvious lies before, right?

Lies are still better than ChatGPT. ChatGPT isn't even capable of lying. It doesn't know anything. It outputs statistically probable text.

[–] [email protected] 1 points 1 month ago (1 children)

How exactly? Bad information is bad information, regardless of the source.

[–] [email protected] 5 points 1 month ago (1 children)

People understand the concept of liars and bad-faith actors. People don't seem to understand that facts don't factor into a chatbot's output at all; cf. all the replies defending them in this post.

[–] [email protected] 1 points 1 month ago

So that seems like more of a lack-of-understanding problem, not an 'LLMs are bad' problem as it's being portrayed in the larger thread.

[–] [email protected] 1 points 1 month ago (1 children)

You can look up the author and figure out if they're a reliable source of information. Most authors either write bullshit or don't, at least on a particular subject. LLMs are unreliable: sometimes they return bullshit and sometimes they don't. You never know, but it'll sound just as confident either way. Also, people are led to believe they're actually thinking about their response, and they aren't. They aren't considering whether it's real or not, only whether it's a statistically probable output.

[–] [email protected] 1 points 1 month ago* (last edited 1 month ago) (1 children)

You should check your sources when you're googling or using ChatGPT too (most models I've seen now cite sources you can check when they're reporting factual stuff); that's not unique to those things. Yeah, LLMs might be more likely to give bad info, but people are unreliable too: they're biased and flawed, often have an agenda, and are frequently, confidently wrong. Guess who writes books? Mostly people. So until we're ready to apply that standard to all sources of information, it seems unreasonable to arbitrarily hold LLMs to some higher standard just because they're new.

[–] [email protected] 1 points 1 month ago (1 children)

most models I've seen now cite sources you can check when they're reporting factual stuff

Maybe online models can, but a local model has no access to the internet, so it can't. It's likely generating a predictable-looking response that cites a source, but it could totally make that source up. Hopefully people would double-check that the source actually exists and says what it's claimed to say, but we both know most won't. Citing a source is just a way to make the output look intelligent while it still generates bullshit.

Yeah LLMs might be more likely to give bad info, but people are unreliable too, they're biased and flawed and often have an agenda, and they are frequently, confidently wrong.

You're saying this like they're equal. People put thought into it. LLMs do not. Yes, con men exist. However, not everyone is a con man. You can follow authors who are known to be accurate. You can do the same with LLMs. The problem is consistency. A con man will always be a con man. With an LLM you have no way to know if it's bullshitting this time or not, so you should always assume it's bullshit. In which case, what's the point? However, most people assume it's always honest, because that's what the marketing leads you to believe

[–] [email protected] 1 points 1 month ago

And the people who don't know that you should check LLMs for hallucinations/errors (despite the fact that the press has been screaming that for a year) are definitely self-hosting their own, right? I've done it, it's not hard, but it's certainly not trivial either, and most of these folks would just go 'lol what's a docker?' and stop there. So we're advocating guard-rails for people in a use-case they would never find themselves in.

You’re saying this like they’re equal.

Not as if they're equal, but as if they're both unreliable and should be checked against multiple sources, which is what I've been advocating for since the beginning of this conversation.

The problem is consistency. A con man will always be a con man. With an LLM you have no way to know if it’s bullshitting this time or not

But you don't know a con man is a con man until you've read his book and put some of his ideas in practice and discovered that they're bullshit, same as with an LLM. See also: check against multiple sources.

[–] [email protected] 2 points 2 months ago

I totally understand your point of view. AI seems like the nail in the coffin for digital dominance over humans. It will debilitate people by today’s standards.

Can we compare gen AI tools to any other tools that currently eliminate some level of labor for us, e.g. drag-and-drop programming tools?

Where do we draw the line? Can people then think and create in different ways using different tools?

Some GPTs are already integrating historical conversations. We're past Markov chains.

[–] [email protected] 1 points 1 month ago

I agree with this sentiment, but I don't see it actually convincing anyone of the dangers of AI. It reminds me a lot of how teachers said that calculators won't always be available and we need to learn how to do mental math. That didn't convince anyone then, either.