[–] [email protected] 122 points 1 day ago* (last edited 1 day ago) (50 children)

Experienced software developer here. "AI" is useful to me in some contexts. Specifically, when I want to scaffold out a completely new application (so I'm not worried about clobbering existing code) and don't want to do it by hand, it saves me time.

And... that's about it. It sucks at code review, and will break shit in your repo if you let it.

[–] [email protected] 28 points 1 day ago (5 children)

Not a developer per se (mostly virtualization, architecture, and hardware), but AI can get me to 80-90% of a script in no time. The last 10-20% takes a while, but that part was going to take a while regardless, so the time savings on the first chunk is awesome. It does send me down a really bad path at times, though. Being experienced enough to recognize that is very helpful, in that I just start over.

In my opinion, AI shouldn’t replace coders, but it can definitely enhance them if used properly. It’s a tool, like anything else. I can put a screw in with a hammer, but I probably shouldn’t.

[–] [email protected] 11 points 1 day ago (1 children)

Like I said, I do find it useful at times. But not only shouldn't it replace coders, it fundamentally can't. At least, not without a fundamental rearchitecting of how these models work.

The reason it goes down a "really bad path" is that it's basically glorified autocomplete. It doesn't know anything.

On top of that, spoken and written language are very imprecise, and there's no way for an LLM to derive what you really wanted from context clues such as your tone of voice.

Take the phrase "fruit flies like a banana." Am I saying that a piece of fruit might fly in a manner akin to how another piece of fruit, a banana, flies if thrown? Or am I saying that the insect called the fruit fly might like to consume a banana?

It's a humorous line, but my point is serious: We unintentionally speak in ambiguous ways like that all the time. And while we've got brains that can interpret unspoken signals to parse intended meaning from a word or phrase, LLMs don't.
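
To make "glorified autocomplete" concrete, here's a toy sketch: a bigram model that continues text using nothing but counts of which word has followed which in its training data. It has no representation of fruit, flight, or insects, so the ambiguity above never even registers. (Purely illustrative; real LLMs are enormously more capable, but they're still predicting the next token rather than understanding.)

```ts
// Toy "autocomplete": a bigram model that picks the next word purely from
// co-occurrence counts in its training text. It has no idea what any of the
// words mean -- it only knows what tends to come next.
const training = "fruit flies like a banana time flies like an arrow".split(" ");

// Count which word follows which.
const followers = new Map<string, Map<string, number>>();
for (let i = 0; i < training.length - 1; i++) {
  const counts = followers.get(training[i]) ?? new Map<string, number>();
  counts.set(training[i + 1], (counts.get(training[i + 1]) ?? 0) + 1);
  followers.set(training[i], counts);
}

// "Complete" a prompt by repeatedly choosing the most frequent next word.
function complete(word: string, steps = 3): string {
  const out = [word];
  for (let i = 0; i < steps; i++) {
    const counts = followers.get(out[out.length - 1]);
    if (!counts) break;
    const best = [...counts.entries()].sort((a, b) => b[1] - a[1])[0][0];
    out.push(best);
  }
  return out.join(" ");
}

console.log(complete("flies")); // "flies like a banana"
```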

[–] [email protected] 1 points 18 hours ago (1 children)

The reason it goes down a “really bad path” is that it’s basically glorified autocomplete. It doesn’t know anything.

Not quite true. GitHub Copilot in VS, for example, can be given access to your entire repo/project, and it then "knows" how things tie together, so it has more context for its suggestions and generated code.

[–] [email protected] 4 points 10 hours ago* (last edited 10 hours ago) (1 children)

That's still not actually knowing anything. It's just temporarily adding more context to its model.
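
A conceptual sketch of what "giving it access to the repo" amounts to (not Copilot's actual pipeline, just the general shape of this kind of context stuffing): relevant file contents get pasted into the prompt as plain text, and anything that doesn't fit in the context window is simply gone. The function name and paths below are hypothetical.

```ts
// Conceptual sketch only -- not Copilot's real implementation. "Repo awareness"
// here is just file text prepended to the prompt; nothing is learned or
// remembered beyond this single request.
import { readFileSync } from "node:fs";

function buildPrompt(question: string, relevantFiles: string[]): string {
  const context = relevantFiles
    .map((path) => `// File: ${path}\n${readFileSync(path, "utf8")}`)
    .join("\n\n");
  // If context + question exceed the context window, less relevant files
  // simply get dropped -- which is why the "knowledge" is temporary.
  return `${context}\n\nQuestion: ${question}`;
}

// Hypothetical usage:
// const prompt = buildPrompt("How do these modules tie together?", ["src/app.ts", "src/db.ts"]);
```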

And it's always very temporary. I have a yarn project I'm working on right now, and I used Copilot in VS Code in agent mode to scaffold it as an experiment. One of the refinements I had to make to the prompt file used to build it was adding reminders throughout for things it wouldn't need reminding of if it actually "knew" the repo:

  • I had to constantly remind it that it's a yarn project; otherwise it would inevitably start trying to use npm as it progressed through the prompt.
  • For some reason, when it's in agent mode and makes a mistake, it wants to delete the files it has fucked up, which always requires human intervention. So I peppered the prompt with reminders not to do that, but to blank the file out and start over instead.
  • The frontend of the project uses TailwindCSS. It kept trying to downgrade the configuration to an earlier version instead of using the current one, so I wrote the entire configuration by hand and inserted it into the prompt file (a rough sketch of that kind of config follows this list). If I let it try to build the configuration itself, it would inevitably fuck it up and then say something completely false, like, "The version of TailwindCSS we're using is still in beta, let me try downgrading to the previous version."
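
For reference, a minimal sketch of the kind of hand-written config being described, assuming a Tailwind CSS v3-style tailwind.config.ts; the content paths are placeholders, not the actual project's:

```ts
// tailwind.config.ts -- hypothetical minimal config, pinned by hand so the
// agent isn't tempted to regenerate (and "downgrade") it.
import type { Config } from "tailwindcss";

export default {
  content: ["./src/**/*.{ts,tsx,html}"], // placeholder paths
  theme: {
    extend: {},
  },
  plugins: [],
} satisfies Config;
```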

I'm not saying it wasn't helpful. It probably cut 20% off the time it would have taken me to scaffold out the app myself, which is significant. But it certainly couldn't keep track of the context provided by the repo, even though it was creating that context itself.

Working with Copilot is like working with a very talented and fast junior developer whose methamphetamine addiction has been getting the better of them lately, and who has early-onset dementia or a brain injury that destroyed their short-term memory.

[–] [email protected] 1 points 6 hours ago (1 children)

Adding context is “knowing more” for a computer program.

Maybe it’s different in VS Code vs regular VS, because I never get issues like the ones you’re describing in VS. I haven’t really used it in VS Code.

[–] [email protected] 1 points 5 hours ago

Are you using agent mode?
