[–] [email protected] 33 points 2 months ago (2 children)

Is environmental impact at the top of anyone's list of reasons they don't like ChatGPT? It's not on mine, nor on anyone's I have talked to.

The two most common reasons I hear are 1) no trust in the companies hosting the tools to protect consumers, and 2) rampant theft of IP to train LLMs.

The author moves away from a strict environmental focus despite claims to the contrary in their intro:

This post is not about the broader climate impacts of AI beyond chatbots, or about whether AI is bad for other reasons

[...]

Other Objections: This is all a gimmick anyway. Why not just use Google? ChatGPT doesn't give better information

... yet doesn't address the most common criticisms.

Worse, the author accuses anyone who pauses to consider the negatives of ChatGPT of being absurdly illogical.

Being around a lot of adults freaking out over 3 Wh feels like I’m in a dream reality. It has the logic of a bad dream. Everyone is suddenly fixating on this absurd concept or rule that you can’t get a grasp of, and scolding you for not seeing the same thing. Posting long blog posts is my attempt to get out of the weird dream reality this discourse has created.

IDK what logical fallacy this is, but claiming people are "freaking out over 3 Wh" is very disingenuous.

Rating as basic content: 2/10, poor and disingenuous argument

Rating as example of AI writing: 5/10, I've certainly seen worse AI slop

[–] [email protected] 4 points 2 months ago (2 children)

The two most common reasons I hear are 1) no trust in the companies hosting the tools to protect consumers, and 2) rampant theft of IP to train LLMs.

My reason is that you can't trust the answers regardless. Hallucinations are a rampant problem. Even if we managed to cut it down to 1 in 100 queries hallucinating, you still can't trust ANYTHING. We've seen well-trained, narrowly targeted AIs that don't directly take user input (so they can't easily be manipulated) in Google search results recommending that people put glue on their pizza to make the cheese stick better... or that geologists recommend eating a rock a day.

If a custom-tailored AI can't cut it... the general ones are not going to be all that valuable without significant external validation/moderation.
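
To put a rough number on that, here's a back-of-the-envelope sketch assuming a purely illustrative, independent 1% hallucination rate per query (not a measured figure):

```python
# Back-of-the-envelope only: even a "rare" 1% hallucination rate compounds fast.
# The 1% rate and the independence assumption are purely illustrative.
p_hallucination = 0.01

for n_queries in (10, 50, 100, 500):
    p_at_least_one = 1 - (1 - p_hallucination) ** n_queries
    print(f"{n_queries:>4} queries -> {p_at_least_one:.0%} chance at least one answer is hallucinated")
```

By around 100 queries you'd already expect at least one hallucination more often than not, which is why "only 1 in 100" doesn't restore trust.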

[–] [email protected] 3 points 2 months ago (1 children)

There is also the argument that a downpour of AI generated slop is making the Internet in general less usable, hurting everyone (except the slop makers) by making true or genuine information harder to find and verify.

[–] [email protected] 1 points 2 months ago

What exactly is the argument?

[–] [email protected] 1 points 2 months ago (1 children)

Basically no. What you're calling tailored AI is actually low-cost AI. You'll be hard-pressed, on the other hand, to get ChatGPT o3 to hallucinate at all.

[–] [email protected] 3 points 2 months ago (1 children)

No, not basically no.

https://mashable.com/article/openai-o3-o4-mini-hallucinate-higher-previous-models

By OpenAI's own testing, its newest reasoning models, o3 and o4-mini, hallucinate significantly higher than o1.

Stop spreading misinformation. The company itself acknowledges that it hallucinates more than previous models.

[–] [email protected] 1 points 2 months ago

I stand corrected, thank you for sharing

I was commenting based on anecdotal experience and I didn't know there was a test specifically for this

I do notice that o3 is more overconfident and tends to find a source online from some forum and treat it as gospel

Which, while not correct, I would not treat as hallucination

[–] [email protected] 4 points 2 months ago (2 children)

Thank you for your considered and articulate comment

What do you think about the significant difference in attitude between comments here and in (quite serious) programming communities like https://lobste.rs/s/bxixuu/cheat_sheet_for_why_using_chatgpt_is_not

Are we in different echo chambers? Is ChatGPT a uniquely powerful tool for programmers? Is social media a fundamentally Luddite mechanism?

[–] [email protected] 7 points 2 months ago* (last edited 2 months ago) (1 children)

I'm curious if you can articulate the difference between being critical of how a particular technology is owned and managed versus being a Luddite?

[–] [email protected] 1 points 2 months ago

I think I'm on board with arguing against how LLMs are being owned and managed, so I don't really have much to say

[–] [email protected] 5 points 2 months ago (1 children)

I would say GitHub Copilot (which uses a GPT model) uses more Wh than ChatGPT, because it gets hit with far more queries on average; the "AI" autocomplete triggers almost every time you stop typing, or on random occasions.

[–] [email protected] 1 points 2 months ago (1 children)

I don't think this answers the question

[–] [email protected] 1 points 2 months ago (1 children)

I don’t think this answers the question

They're specifically showing you that, in the use case you asked about, the assumptions have to change. Your question is poorly framed for the very case you're asking about.

So no, it doesn't answer the question... but your question carries a bunch of caveats that have to be accounted for, and you're just straight-up missing them.

[–] [email protected] 1 points 2 months ago (1 children)

No, that is not how reasoned debate works; you have to articulate your argument, or else you're just sloppily babbling talking points

[–] [email protected] 1 points 2 months ago (1 children)

If the premise of your argument is fundamentally flawed, then you're not having a reasoned debate. You're just a zealot.

[–] [email protected] 1 points 2 months ago (1 children)

Please articulate why the premise of my argument is fundamentally flawed

[–] [email protected] 1 points 2 months ago (1 children)

I would say GitHub Copilot (which uses a GPT model) uses more Wh than ChatGPT, because it gets hit with far more queries on average; the "AI" autocomplete triggers almost every time you stop typing, or on random occasions.

They did... You just refuse to acknowledge it. It's no longer a discussion of simply 3 Wh when GitHub Copilot is making queries every time you pause typing. It could easily equate to hundreds or even thousands of queries a day (if not rate limited). That fully changes the scope of the argument.
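
As a rough sketch of how that changes the scale (the completion counts are hypothetical, and pricing each autocomplete call at the full 3 Wh chat-query figure is a deliberate upper bound, not a measurement):

```python
# Back-of-the-envelope only: per-query framing vs. pause-triggered autocomplete.
# Completion counts are hypothetical; charging each completion the 3 Wh figure
# the blog post defends is an upper bound, since autocomplete calls are smaller.
wh_per_query_claim = 3.0
daily_completion_counts = (100, 500, 2000)

for n in daily_completion_counts:
    print(f"{n:>4} completions/day -> up to {n * wh_per_query_claim:,.0f} Wh/day under that framing")
```

Even if real completions cost a fraction of that, the relevant unit becomes queries per day, not a single 3 Wh prompt.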

[–] [email protected] 1 points 2 months ago (1 children)

GitHub Copilot is not ChatGPT

[–] [email protected] 2 points 2 months ago (1 children)

Yet again... You fundamentally have the wrong answer...

https://en.wikipedia.org/wiki/GitHub_Copilot

GitHub Copilot is a code completion and automatic programming tool developed by GitHub and OpenAI

https://github.com/features/copilot

GitHub Copilot was literally developed WITH OpenAI, the creators of ChatGPT... and you can run o1, o3, and o4 directly in there.

https://docs.github.com/en/copilot/using-github-copilot/ai-models/changing-the-ai-model-for-copilot-code-completion

By default, Copilot code completion uses the GPT-4o Copilot, a fine-tuned GPT-4o mini based large language model (LLM).

It defaults to 4o mini.

[–] [email protected] 1 points 2 months ago

Thank you

None of this was true of Copilot for years, but I stand corrected as to the current state of affairs