this post was submitted on 13 Apr 2025
263 points (100.0% liked)

Technology

[–] [email protected] 75 points 1 month ago (4 children)

But the only way to learn debugging is to have experience coding. So if we let AI do the coding, then all the entry-level coding jobs go away and no one learns to debug.

This isn't just a coding thing; it applies to all kinds of professions. AI will kill the entry level, which will prevent new people from gaining experience, which will have downstream effects throughout entire industries.

[–] [email protected] 32 points 1 month ago (1 children)

This already started happening before LLMs. Have you heard the joke that we used to teach our parents how to use printers and PCs with a mouse and keyboard, and now we have to do the same with our children? It's really not a joke. We are the last generation that has seen it all evolve before our eyes; we know the fundamentals of each layer of abstraction the current technology is built upon. Learning all of this was a natural process for us, and now we suddenly expect "fresh people" to grasp 50 or so years of progress in 5 or so years?

Interesting times ahead of us.

[–] [email protected] 15 points 1 month ago (1 children)

Have you used any AI for programming? There is zero chance entry-level jobs will be replaced. AI only works well if what it needs to do is well defined, and as a dev that is almost never the case. Also companies understand that to create a senior dev they need a junior dev they can train. And corporations do not trust Google, OpenAI, Meta, etc. with their intellectual property. My company made it a fireable offense if they catch you uploading IP to an AI.

[–] [email protected] 18 points 1 month ago

Also companies understand that to create a senior dev they need a junior dev they can train.

We live in a world where every company wants people who can hit the ground running and requires 5 years of experience for an entry-level job in a language that's only been out for three. On-the-job training died long ago.

[–] [email protected] 7 points 1 month ago

The junior devs at my job are way better at debugging than AI, lol. Granted, they are top-talent hires, because no one else can break in these days.

[–] [email protected] 1 points 1 month ago

In my experience, LLMs are good for code snippets and input on best practices.

I use it as a tool to speed up my work, but I don't see it replacing even entry-level jobs any time soon.

[–] [email protected] 37 points 1 month ago (1 children)

So, AI gets to create problems, and actually capable people get to deal with the consequences. Yeah, that sounds about right.

[–] [email protected] 27 points 1 month ago (1 children)

And it'll be used to suppress wages, because "you're not making new stuff, just fixing some problems in existing code." The fact that you have to rewrite most of it is conveniently not counted.

That's at least what was tried with movie writers.

[–] [email protected] 18 points 1 month ago (1 children)

Most programmers agree debugging can be harder than writing code, so basically the easy part is automated, while the more challenging and interesting parts, architecture and debugging, remain for programmers. Still, it's possible they'll try to sell it to programmers as less work.

[–] [email protected] 13 points 1 month ago* (last edited 1 month ago) (1 children)

the more challenging and interesting parts, architecture and debugging, remain for programmers

And it's made harder for them, because it turns out the "easy" part is not that easy to do correctly, and when it isn't, it makes maintaining the thing miserable.
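A minimal illustration (my own hypothetical snippet, not from the article): generated code often looks right while hiding exactly the kind of defect that makes maintenance miserable, like Python's shared-default-argument trap.

```python
# Plausible-looking code with a subtle bug: the default list is created
# once at function definition, so every call that omits `log` shares it.
def append_event(event, log=[]):
    log.append(event)
    return log

print(append_event("a"))  # ['a']
print(append_event("b"))  # ['a', 'b']  <- state leaked between calls

# The fix: use None as a sentinel and build a fresh list per call.
def append_event_fixed(event, log=None):
    if log is None:
        log = []
    log.append(event)
    return log
```

Nothing crashes and the happy path works; the defect only surfaces later, which is exactly the maintenance misery in question.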

[–] [email protected] 8 points 1 month ago

Additionally, as others have said in the thread, programmers learn the skills required for debugging at least partially from writing code. So there goes a big part of the learning curve, turning into a bell curve.

[–] [email protected] 33 points 1 month ago* (last edited 1 month ago)

I’m actually quite enjoying watching the LLM evangelists fall into the trough of despair after their initial inflated expectations of what they thought stochastic text generation would achieve for the business. After a while you get used to the waves of magic bullet solutions that promise to revolutionise the industry but introduce as many new problems as they solve.

[–] [email protected] 23 points 1 month ago (1 children)

So we “fixed” the easiest part of software development (writing code) and now humans have to clean up the AI slop.

I’ll bet this lovely new career field comes with a pay cut.

[–] [email protected] 11 points 1 month ago

I would charge more. Fixing my own code is easier than fixing someone else's code.

I think I might go insane if that was my career.

[–] [email protected] 22 points 1 month ago (1 children)

But trust me, bro, AGI is just around the corner. In the meantime, have this groundbreaking new feature: https://decrypt.co/314380/chatgpt-total-recall-openai-memory-upgrade /s

[–] [email protected] 16 points 1 month ago

LLMs are so fundamentally different from AGI that it's a wonder people believe that balderdash.

[–] [email protected] 19 points 1 month ago (1 children)

I'm full luddite on this. And fuck all of us.

[–] [email protected] 2 points 1 month ago

"Give me some good warning message css" was a pretty nice use case. It's a nice tool that's near the importance of Google search.

But you have to know when its answers are good and when they're useless or harmful. That requires a developer.

[–] [email protected] 13 points 1 month ago* (last edited 1 month ago)

"AI" is good for pattern matching, generating boiler plate / template code and text, and generating images. Maybe also translation. That's about it. And it's of course often flawed/inaccurate so it needs human oversight. Everything else is like a sales scam. A very profitable one.

[–] [email protected] 11 points 1 month ago (1 children)
[–] [email protected] 2 points 1 month ago (1 children)

It's always the people who don't have a clue.

It's also always the people who think they'll get some benefit out of AI taking over, when they're absolutely part of the group that'll be replaced by it.

[–] [email protected] 1 points 1 month ago

It's a cargo cult. They don't understand, but they like what it promises, so they blindly worship. Sceptics become unbelievers, visionaries become prophets, and collateral damage becomes sacrifice.

They may use different terms, but if some jobs become obsolete, that's just the price of a better future to them. And when the day of Revelation comes, they'll surely be among the faithful, delivered from the shackles of human labour to enjoy the paradise built on this technology. Any day now...

[–] [email protected] 10 points 1 month ago (1 children)

Can AI fix itself so that it gets better at a task? I don't see how that could be possible; it would just fall into a feedback loop where it gets stranger and stranger.
Personally, I will always lie to AI when asked for feedback.
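A toy sketch of why that feedback loop degrades rather than improves, under the simplifying assumption that the "model" is just a Gaussian refitted to its own samples each generation (the setup and numbers are mine, purely illustrative):

```python
import random
import statistics

random.seed(0)
mean, stdev = 0.0, 1.0  # generation 0: fitted to real data
for gen in range(1, 16):
    # Train the next generation only on the previous generation's output.
    samples = [random.gauss(mean, stdev) for _ in range(20)]
    mean = statistics.fmean(samples)
    stdev = statistics.stdev(samples)
    print(f"gen {gen:2d}: mean={mean:+.3f} stdev={stdev:.3f}")
# Each generation compounds the previous one's sampling error, so the
# parameters wander away from the original data with nothing to pull
# them back, and rare values thin out.
```

With no fresh real-world data in the loop, there is nothing to correct the drift, which is the "stranger and stranger" effect.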

[–] [email protected] 10 points 1 month ago (1 children)

It's worse: people can't even fix AI so that it gets better at a task.

[–] [email protected] 4 points 1 month ago

That's been one of the things that has really stumped a team that wanted to go all in on an AI offering. They go to customer evaluations, and there's really just nothing they can do about the problems reported. They can try retraining and hoping for the best, but that likely won't work and could also make other things worse.

[–] [email protected] 6 points 1 month ago

Are those researchers human, or is this just an AI that's too lazy to do the work?

[–] [email protected] 5 points 1 month ago

Ars Technica would die of an aneurysm if it stopped posting about generative AI for even 30 seconds.

Since they're the authority on tech, and all they write about is shitty generative AI from 2017, that must mean shitty generative AI from 2017 is the only tech worth writing about.

[–] [email protected] 4 points 1 month ago

the tool can't replace the person or whatever

[–] [email protected] 3 points 1 month ago

Color me shocked.

[–] [email protected] 3 points 1 month ago* (last edited 1 month ago)

Well, now they're just subverting expectations left and right, aren't they!

[–] [email protected] 1 points 4 weeks ago

Laughs in COBOL.