this post was submitted on 11 Jul 2025
358 points (100.0% liked)

[–] [email protected] 9 points 16 hours ago

"Using something that you're not experienced with and haven't yet worked out how to best integrate into your workflow slows some people down"

Wow, what an insight! More at 8!

As I said on this article when it was posted to another instance:

AI is a tool to use. Like with all tools, there are right ways and wrong ways and inefficient ways and all other ways to use them. You can’t say that they slow people down as a whole just because some people get slowed down.

[–] [email protected] 26 points 20 hours ago (3 children)

"Explain this to me, AI." Reads back exactly what's on the screen, including the comments, somehow with more words but less information. Ok....

Ok, this is tricky. "AI, can you do this refactoring so I don't have to keep track of everything?" No... that's all wrong... Yeah, I know it's complicated, that's why I wanted it refactored. No, you can't do that... fuck, now I can either toss all your changes and do it myself or spend the next 3 hours rewriting it.

Yeah, I struggle to see how anyone finds this garbage useful.

[–] [email protected] 11 points 16 hours ago (1 children)

You shouldn't think of "AI" as intelligent and ask it to do something tricky. The boring stuff that's mostly just typing is what you get the LLMs to do: "Make a DTO for this table." "Interface for this JSON."
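
To make the second prompt concrete, here's the kind of one-shot boilerplate that request tends to produce. This is a sketch with a hypothetical payload and field names, not anything from the thread:

```ts
// Hypothetical JSON payload:
// { "id": 42, "email": "[email protected]", "createdAt": "2025-07-11T00:00:00Z", "roles": ["admin"] }

interface UserDto {
  id: number;
  email: string;
  createdAt: string; // ISO-8601 timestamp as a string; parse to a Date later if needed
  roles: string[];
}

// A minimal runtime guard to go with it, since JSON from the wire isn't trusted:
function isUserDto(value: unknown): value is UserDto {
  const v = value as UserDto;
  return (
    typeof v === "object" &&
    v !== null &&
    typeof v.id === "number" &&
    typeof v.email === "string" &&
    typeof v.createdAt === "string" &&
    Array.isArray(v.roles) &&
    v.roles.every((r) => typeof r === "string")
  );
}
```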

I just have a bunch of conversations going where I can paste stuff into and it will generate basic code. Then it's just connecting things up, but that's the fun part anyway.

[–] [email protected] 4 points 3 hours ago

Most IDEs have done the boring stuff with templates and code generation for like a decade, so that's not so helpful to me either, but if it works for you.

[–] [email protected] 6 points 17 hours ago (1 children)

I've asked questions, had conversations for company, and generated images for role-playing with AI.

I've been happy with it, so far.

[–] [email protected] 2 points 3 hours ago (1 children)

That's kind of outside the software development discussion, but glad you're enjoying it.

[–] [email protected] 1 points 9 minutes ago

As a developer:

  • I can jot down a bunch of notes and have AI turn them into a reasonable presentation, documentation, or proposal.
  • Zoom has an AI agent which is pretty good at summarizing a meeting. It usually just needs minor corrections, and you can send it out much faster than taking notes.
  • For coding I mostly use AI like autocomplete. Sometimes it's able to autocomplete entire code blocks (see the sketch after this list).
  • For something new I might have AI generate a class or something, and use it as a first draft that I then make work.
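
As a sketch of that block-level autocomplete: you type a signature and a comment, and the assistant proposes the whole body. The helper below is hypothetical, purely to illustrate the shape of the interaction:

```ts
/** Group an array of items by a key derived from each item. */
function groupBy<T, K extends string>(items: T[], keyOf: (item: T) => K): Record<K, T[]> {
  // Everything below is the kind of block the model completes in one go
  // after you've written the signature and doc comment above.
  const groups = {} as Record<K, T[]>;
  for (const item of items) {
    const key = keyOf(item);
    (groups[key] ??= []).push(item);
  }
  return groups;
}
```
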
[–] [email protected] 4 points 16 hours ago

Sounds like you just need to find a better way to use AI in your workflows.

GitHub Copilot in Visual Studio, for example, is fantastic: it offers suggestions, including entire functions, that often do exactly what you wanted, because it has the context of all of your code (if you give it that, of course).

[–] [email protected] 28 points 1 day ago

No shit. AI will hallucinate shit, I'll hit tab by accident and spend time undoing that, or it'll hijack tab on new lines inconsistently.

[–] [email protected] 115 points 1 day ago* (last edited 1 day ago) (6 children)

Experienced software developer, here. "AI" is useful to me in some contexts. Specifically when I want to scaffold out a completely new application (so I'm not worried about clobbering existing code) and I don't want to do it by hand, it saves me time.

And... that's about it. It sucks at code review, and will break shit in your repo if you let it.

[–] [email protected] 2 points 16 hours ago* (last edited 16 hours ago)

I've found it to be great at writing unit tests too.
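
For what it's worth, the tests it produces tend to be exactly this kind of simple, repetitive boilerplate. A sketch in Vitest style, against a hypothetical slugify helper (neither is from the thread):

```ts
import { describe, it, expect } from "vitest";
import { slugify } from "./slugify"; // hypothetical helper under test

describe("slugify", () => {
  it("lowercases and hyphenates words", () => {
    expect(slugify("Hello World")).toBe("hello-world");
  });

  it("strips punctuation", () => {
    expect(slugify("Hello, World!")).toBe("hello-world");
  });

  it("returns an empty string for empty input", () => {
    expect(slugify("")).toBe("");
  });
});
```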

I use GitHub Copilot in VS and it's fantastic. It just throws up suggestions for code completions and entire functions etc., and is easily ignored if you just want to do it yourself, but in my experience it's very good.

Like you said, using it to get the meat and bones of an application from scratch is fantastic. I've used it to make some awesome little command line programs for some of my less technical co-workers to use for frequent tasks, and then even got it to make a nice GUI over the top. It takes like 10% of the time it would have taken me to do it myself; you just need to know how to use it, like with any other tool.

[–] [email protected] 26 points 1 day ago (1 children)

Not a developer per se (mostly virtualization, architecture, and hardware), but AI can get me to 80-90% of a script in no time. The last 10% takes a while, but that was going to take a while regardless, so the time savings on that first 90% is awesome. It does send me down a really bad path at times, though; being experienced enough to recognize that is very helpful, since I can just start over.

In my opinion AI shouldn’t replace coders but it can definitely enhance them if used properly. It’s a tool like everything. I can put a screw in with a hammer but I probably shouldn’t.

[–] [email protected] 10 points 1 day ago (1 children)

Like I said, I do find it useful at times. But not only shouldn't it replace coders, it fundamentally can't. At least, not without a fundamental rearchitecting of how they work.

The reason it goes down a "really bad path" is that it's basically glorified autocomplete. It doesn't know anything.

On top of that, spoken and written language are very imprecise, and there's no way for an LLM to derive what you really wanted from context clues such as your tone of voice.

Take the phrase "fruit flies like a banana." Am I saying that a piece of fruit might fly in a manner akin to how another piece of fruit, a banana, flies if thrown? Or am I saying that the insect called the fruit fly might like to consume a banana?

It's a humorous line, but my point is serious: We unintentionally speak in ambiguous ways like that all the time. And while we've got brains that can interpret unspoken signals to parse intended meaning from a word or phrase, LLMs don't.

[–] [email protected] 1 points 16 hours ago (1 children)

The reason it goes down a “really bad path” is that it’s basically glorified autocomplete. It doesn’t know anything.

Not quite true: GitHub Copilot in VS, for example, can be given access to your entire repo/project/etc., and it then "knows" how things tie together, so it can get more context for its suggestions and generated code.

[–] [email protected] 4 points 8 hours ago* (last edited 8 hours ago) (1 children)

That's still not actually knowing anything. It's just temporarily adding more context to its model.

And it's always very temporary. I have a yarn project I'm working on right now, and I used Copilot in VS Code in agent mode to scaffold it as an experiment. One of the refinements I included in the prompt file to build it is reminders throughout for things it wouldn't need reminding of if it actually "knew" the repo.

  • I had to constantly remind it that it's a yarn project, otherwise it would inevitably start trying to use NPM as it progressed through the prompt.
  • For some reason, when it's in agent mode and it makes a mistake, it wants to delete files it has fucked up, which always requires human intervention, so I peppered the prompt with reminders not to do that, but to blank the file out and start over in it.
  • The frontend of the project uses TailwindCSS. It could not remember not to keep trying to downgrade its configuration to an earlier version instead of using the current one, so I wrote the entire configuration for it by hand and inserted it into the prompt file. If I let it try to build the configuration itself, it would inevitably fuck it up and then say something completely false, like, "The version of TailwindCSS we're using is still in beta, let me try downgrading to the previous version."
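
For reference, pinning a config in a prompt that way might look like the sketch below. This is a minimal stand-in assuming Tailwind v3's TypeScript config format; the actual file from that project isn't in the thread:

```ts
// tailwind.config.ts -- minimal v3-style config, pasted verbatim into the
// prompt file so the agent can't "helpfully" downgrade or rewrite it.
import type { Config } from "tailwindcss";

const config: Config = {
  content: ["./index.html", "./src/**/*.{ts,tsx}"],
  theme: {
    extend: {},
  },
  plugins: [],
};

export default config;
```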

I'm not saying it wasn't helpful. It probably cut 20% off the time it would have taken me to scaffold out the app myself, which is significant. But it certainly couldn't keep track of the context provided by the repo, even though it was creating that context itself.

Working with Copilot is like working with a very talented and fast junior developer whose methamphetamine addiction has been getting the better of it lately, and who has early onset dementia or a brain injury that destroyed their short-term memory.

[–] [email protected] 1 points 4 hours ago (1 children)

Adding context is “knowing more” for a computer program.

Maybe it’s different in VS code vs regular VS, because I never get issues like what you’re describing in VS. Haven’t really used it in VS Code.

[–] [email protected] 1 points 3 hours ago

Are you using agent mode?

[–] [email protected] 4 points 21 hours ago

Everyone on Lemmy is a software developer.

[–] [email protected] 20 points 1 day ago (3 children)

I have limited AI experience, but so far that's what it means to me as well: helpful in very limited circumstances.

Mostly, I find it useful for "speaking new languages": if I try to use AI to "help" with the stuff I have been doing daily for the past 20 years? Yeah, it's just slowing me down.

load more comments (3 replies)
[–] [email protected] 14 points 1 day ago (5 children)

Exactly what you would expect from a junior engineer.

Let them run unsupervised and you have a mess to clean up. Guide them with context and you’ve got a second set of capable hands.

Something something craftsmen don’t blame their tools

[–] [email protected] 60 points 1 day ago (23 children)

AI tools are way less useful than a junior engineer, and they aren't an investment that turns into a senior engineer either.

[–] [email protected] 1 points 16 hours ago* (last edited 16 hours ago)

They're tools that can help a junior engineer and a senior engineer with their job.

Given a database, AI can probably write a data access layer in whatever language you want quicker than a junior developer could.
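
As a rough illustration of the boilerplate that involves, here's a sketch of a generated data access layer for a hypothetical users table, using better-sqlite3 for brevity (the stack and schema are assumptions, not from the thread):

```ts
import Database from "better-sqlite3";

// Hypothetical schema: users(id INTEGER PRIMARY KEY, email TEXT, created_at TEXT)
interface User {
  id: number;
  email: string;
  created_at: string;
}

const db = new Database("app.db");

// The per-table, repetitive data-access boilerplate an LLM can crank out quickly:
export const userDao = {
  findById(id: number): User | undefined {
    return db
      .prepare("SELECT id, email, created_at FROM users WHERE id = ?")
      .get(id) as User | undefined;
  },
  list(limit = 50): User[] {
    return db
      .prepare("SELECT id, email, created_at FROM users ORDER BY id LIMIT ?")
      .all(limit) as User[];
  },
  insert(email: string): number {
    const result = db
      .prepare("INSERT INTO users (email, created_at) VALUES (?, datetime('now'))")
      .run(email);
    return Number(result.lastInsertRowid);
  },
};
```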

load more comments (22 replies)
[–] [email protected] 17 points 1 day ago (4 children)

The difference being junior engineers eventually grow up into senior engineers.

load more comments (4 replies)
load more comments (3 replies)
load more comments (1 replies)
[–] [email protected] 79 points 1 day ago (1 children)
[–] [email protected] 3 points 21 hours ago

I agree with the depicted actual developers, but this is still funny

[–] [email protected] 47 points 1 day ago

Funny how the article concludes that AI tools are still good anyway, actually.

This AI hype is a sickness

[–] [email protected] 25 points 1 day ago (1 children)

Writing code is the easiest part of my job. Why are you taking that away?

[–] [email protected] 1 points 6 hours ago

For some of us that’s more useful. I’m currently playing a DevSecOps role and one of the defining characteristics is I need to know all the tools. On Friday, I was writing some Java modules, then some groovy glue, then spent the after writing a Python utility. While im reasonably good about jumping among languages and tools, those context switches are expensive. I definitely want ai help with that.

That being said, AI is just a step up from search or autocomplete; it's not magical. I've had the most luck with it generating unit tests, since they tend to be simple and repetitive. (This is also a major place for juniors to screw up: AI doesn't know whether the slop it's pumping out is useful. You do need to guide it and understand it, and you really need to cull the dreck.)

[–] [email protected] 25 points 1 day ago (10 children)

I study AI, and have developed plenty of software. LLMs are great for using unfamiliar libraries (with the docs open to validate), getting outlines of projects, and bouncing ideas for strategies. They aren't detail oriented enough to write full applications or complicated scripts. In general, I like to think of an LLM as a junior developer to my senior developer. I will give it small, atomized tasks, and I'll give its output a once over to check it with an eye to the details of implementation. It's nice to get the boilerplate out of the way quickly.

Don't get me wrong, LLMs are a huge advancement and unbelievably awesome for what they are. I think that they are one of the most important AI breakthroughs in the past five to ten years. But the AI hype train is misusing them, not understanding their capabilities and limitations, and casting its own wishes and desires onto a pile of linear algebra. Too often a tool (which is one of many) is being conflated with the one and only solution, a silver bullet, and it's not.

This leads to my biggest fear for the AI field of Computer Science: reality won't live up to the hype. When this inevitably happens, companies, CEOs, and normal people will sour on the entire field (which is already happening to some extent among workers). Even good uses of LLMs and other AI/ML use cases will be stopped, and real academic research will dry up.

[–] [email protected] 31 points 1 day ago (2 children)

My fear for the software industry is that we'll end up replacing junior devs with AI assistance, and then in a decade or two, we'll see a lack of mid-level and senior devs, because they never had a chance to enter the industry.

[–] [email protected] 16 points 1 day ago (7 children)

That's happening right now. I have a few friends who are looking for entry-level jobs and they find none.

It really sucks.

That said, the future lack of developers is a corporate problem, not a problem for developers. For us it just means that we'll earn a lot more in a few years.

load more comments (7 replies)
[–] [email protected] 2 points 19 hours ago

100% agreed. It should not be used as a replacement but rather as an augmentation to get the real benefits.

load more comments (9 replies)
[–] [email protected] 22 points 1 day ago (1 children)

Code reviews take up a lot of time, and if I know a lot of code in a review is AI generated I feel like I'm obliged to go through it with greater rigour, making it take up more time. LLM code is unaware of fundamental things such as quirks due to tech debt and existing conventions. It's not great.

[–] [email protected] 1 points 5 hours ago* (last edited 5 hours ago) (1 children)

Code reviews seem like a good opportunity for an LLM; it's the kind of thing they should be good at. I've actually spent the last half hour googling for tools.

I've spent literally a month in reviews for this junior guy on one stupid feature, and so much of it has been so basic. It's a combination of him committing AI slop without understanding or vetting it, and being too junior to consider maintainability or usability. It would have saved so much of my time if AI could have done some of those review cycles without me.

[–] [email protected] 1 points 3 hours ago (1 children)

This has been solved for over a decade. Include a linter and static analysis stage in the build pipeline. No code review until the checkbox goes green (or the developer has a specific argument for why a particular finding is a false positive).

[–] [email protected] 1 points 20 minutes ago

Not really.

A linter in the build pipeline is generally not useful, because most people won't give its results time or priority. You usually can't fail the build for lint issues, so all it does is fill logs. I usually configure a linter and prettifier in a pre-commit hook instead, to shift that left; people are more willing to fix their code in small pieces as they try to commit.
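
That shift-left setup is typically just a couple of lines of config. A minimal sketch with lint-staged (assuming an ESLint/Prettier toolchain, which the comment doesn't actually specify), wired to a husky pre-commit hook that runs npx lint-staged:

```js
// lint-staged.config.mjs -- run linters only on staged files, keeping the hook fast
export default {
  "*.{js,ts,tsx}": ["eslint --fix", "prettier --write"],
};
```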

But this is also why SonarQube is a key tool. The scanners are lint-like, and you can even import some lint output, but the important part is that it tries to prioritize findings, score them, and enforce a quality gate based on them. I usually can't fail a build for lint errors, but SonarQube can if there are too many, if they're too high priority, or if they're security related.
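
For the failing-the-build part, that's the scanner's quality-gate wait flag. A minimal sonar-project.properties sketch (the project key and source path are placeholders):

```properties
# sonar-project.properties -- placeholder values
sonar.projectKey=my-app
sonar.sources=src
# Wait for the server's quality gate verdict and fail the analysis
# step (and thus the build) if the gate fails:
sonar.qualitygate.wait=true
```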

But this is not the same as a code review. If an AI can use the code base as context, it should be able to add checks for consistency and maintainability with the rest of the code. For example, I had a junior developer blindly follow the AI and use a different mocking framework than the rest of the code, for no reason other than it may have been more common in the training data. A code-review AI should be able to notice that. Maybe that's too advanced for current AI, but the same guy blindly followed AI to add classes that already existed. They were just different enough that SonarQube didn't flag them as duplicate code, but an AI ought to be able to summarize the functionality and realize they were the same.

Or I wonder if AI could do code organization? Junior guys spew classes and methods everywhere without any effort to organize like with like so that someone can maintain it all. Or how about style? I hope to never revisit the style wars, but when you're modifying code you really need to follow the style and naming of what's already there. Maybe an AI code review can pick up on that.

[–] [email protected] 5 points 1 day ago

Great! Less productivity = more jobs, more job security.

[–] [email protected] 14 points 1 day ago (1 children)

I’ve used cursor quite a bit recently in large part because it’s an organization wide push at my employer, so I’ve taken the opportunity to experiment.

My best analogy is that it’s like micro managing a hyper productive junior developer that somehow already “knows” how to do stuff in most languages and frameworks, but also completely lacks common sense, a concept of good practices, or a big picture view of what’s being accomplished. Which means a ton of course correction. I even had it spit out code attempting to hardcode credentials.

I can accomplish some things “faster” with it, but mostly in comparison to my professional reality: I rarely have the contiguous chunks of time I’d need to dedicate to properly ingest and do something entirely new to me. I save a significant amount of the onboarding, but lose a bunch of time navigating to a reasonable solution. Critically that navigation is more “interrupt” tolerant, and I get a lot of interrupts.

That said, this year’s crop of interns at work seem to be thin wrappers on top of LLMs and I worry about the future of critical thinking for society at large.

[–] [email protected] 8 points 1 day ago* (last edited 1 day ago)

That said, this year’s crop of interns at work seem to be thin wrappers on top of LLMs and I worry about the future of critical thinking for society at large.

This is the most frustrating problem I have. With a few exceptions, LLM use seems to be inversely proportional to skill level, and having someone tell me "chatgpt said ___" when asking me for help (because clearly chatgpt is not doing it for their problem) makes me want to just hang up.
