this post was submitted on 30 Jun 2025
240 points (100.0% liked)
TechTakes
2030 readers
179 users here now
Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
founded 2 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
view the rest of the comments
So for most actual practical software development, writing tests is in fact an entire job in and of itself and its a tricky one because covering even a fraction of the use cases and complexity the software will actually face when deployed is really hard. So simply letting the LLMs brute force trial-and-error their code through a bunch of tests won't actually get you good working code.
AlphaEvolve kind of did this, but it was testing very specific, well defined, well constrained algorithms that could have very specific evaluation written for them and it was using an evolutionary algorithm to guide the trial and error process. They don't say exactly in their paper, but that probably meant generating code hundreds or thousands or even tens of thousands of times to generate relatively short sections of code.
I've noticed a trend where people assume other fields have problems LLMs can handle, but the actually competent experts in that field know why LLMs fail at key pieces.
I think they meant that coders can run their code and find out if it works or not but lawyers would have to stand in front of a judge or other legally powerful entity and discover the hard way that the LLM outputted statements were essentially gobbledygook.
But code that doesn’t crash isn’t necessarily code that works. And even for code made by humans, we sometimes do find out the hard way, and it can sometimes impact an arbitrarily large number of people.
I am fully aware of this. However, in my experience, it is sometimes the IT departments themselves that push these chatbots onto others in the most aggressive way. I don't know whether they found them to be useful for their own purposes (and therefore assume this must apply to everyone else as well) or whether they are just pushing LLMs because this is what management expects them to do.
From experience in an IT-department, I would say mainly a combination of management pressure and need to make security problems manageable by choosing AI tools to push on users before too many users start using third party tools.
Yes, they will create security problems anyway, but maybe, just maybe, users won't copy paste sensitive business documents into third party web pages?
I can see that. It becomes kind of a protection racket: Pay our subscription fees, or data breaches are going to befall you, and you will only have yourself (and your chatbot-addicted employees) to blame.