this post was submitted on 22 Aug 2024
877 points (100.0% liked)

Programmer Humor

Welcome to Programmer Humor!

This is a place where you can post jokes, memes, humor, etc. related to programming!

For sharing awful code there's also Programming Horror.

[–] [email protected] 6 points 8 months ago

I tried it with my abliterated local model, thinking that maybe its alteration would help, and it gave the same answer. I asked if it was sure, and it then corrected itself (maybe re-examining the word in a different way?). I then asked how many Rs are in "strawberries", thinking it would either treat it as a new word and give the same incorrect answer, or, since the earlier exchange was still in context, say something about it also having 3 Rs. Nope. It said 4 Rs! I then said "really?", and it corrected itself once again.

LLMs are very useful as long as you know how to maximize their power and don't assume whatever they spit out is absolutely right. I've had great luck using mine to help with programming (basically as a Google that formats things far better than if I looked the stuff up myself), but I've also found some of the simplest errors sitting in the middle of a lot of helpful output. It's at an assistant level, and you need to remember that an assistant helps you; they don't do the work for you.
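For what it's worth, the ground truth in this thread is trivial to verify deterministically; here's a minimal Python sketch (the word list is just the two words from the comment above):

```python
# Deterministic letter counting -- the task the models keep fumbling.
for word in ("strawberry", "strawberries"):
    count = word.lower().count("r")
    print(f"{word!r} contains {count} Rs")

# Output:
# 'strawberry' contains 3 Rs
# 'strawberries' contains 3 Rs
```

Both words contain exactly 3 Rs, so answers of 2 and 4 miss in both directions.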

[–] [email protected] 6 points 8 months ago (11 children)

The people here don't get LLMs and it shows. This is neither surprising nor a bad thing imo.

[–] [email protected] 3 points 8 months ago* (last edited 8 months ago)

People who make fun of LLMs most often do get LLMs; they are trying to point out how these models tend to spew out factually incorrect information. That is a good thing, since many, many people out there do not, in fact, "get" LLMs (most are not even acquainted with the acronym, referring to the catch-all term "AI" instead), and there is no better way to warn people about the inaccuracy of LLM output, however realistic it might sound, than to point it out with examples of ridiculously wrong answers to simple questions.

Edit: minor rewording to clarify

[–] [email protected] 6 points 8 months ago

To be fair, I knew a lot of people who struggled with word problems in math class.

[–] [email protected] 5 points 8 months ago (2 children)

I hate AI, but here it's a bit understandable why Copilot says that. If you asked the same thing of another person, they might well respond 2, assuming you're trying to spell the word and struggling over whether the last part has one R or two.

I know it's a common thing to ask in French when we struggle to spell our overly complicated language, so it doesn't shock me.

[–] [email protected] 5 points 8 months ago

I stand with ChatGPT on this. Whoever created these double letters is the idiot here.

[–] [email protected] 3 points 8 months ago (1 children)

You’ve discovered an artifact!! Yaaaay

If you ask GPT to do this in a more math-question-y way, it'll break it down and do it correctly. You just gotta narrow top_p and temperature down a bit.
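Not the commenter's exact recipe, but here's a minimal sketch of what that might look like with the OpenAI Python client; the model name and the step-by-step prompt framing are assumptions on my part:

```python
# Minimal sketch: rephrase the question as a step-by-step counting
# problem and narrow the sampling parameters, per the comment above.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any chat model slots in here
    messages=[{
        "role": "user",
        "content": (
            "Spell out the word 'strawberry' letter by letter, "
            "then count how many of those letters are 'r'."
        ),
    }],
    temperature=0.2,  # narrowed down, as suggested above
    top_p=0.1,
)

print(response.choices[0].message.content)
```

Lower temperature and top_p make the sampling greedier, so the model is less likely to wander off the step-by-step decomposition once it starts one.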

[–] [email protected] 3 points 8 months ago (2 children)

ChatGPT just told me there is one R in "elephant".

[–] [email protected] 3 points 8 months ago* (last edited 8 months ago)

First mentioned by Linus Tech Tips.

I had fun arguing with ChatGPT about this.
