this post was submitted on 19 Mar 2025
864 points (100.0% liked)

[–] [email protected] 287 points 1 month ago (2 children)
[–] [email protected] 85 points 1 month ago* (last edited 1 month ago) (5 children)

I like my project manager, they find me work, ask how I'm doing and talk straight.

It's when the CEO/CTO/CFO speaks that my eyes glaze over, my mouth sags, and I nod at prompted intervals while my brain retreats into itself, frantically tossing words and phrases into the meaning grinder and cranking the wheel, only for nothing to come out time and time again.

[–] [email protected] 30 points 1 month ago* (last edited 1 month ago) (2 children)

C-levels are corporate politicians, media-trained to say only things that are completely unrevealing and devoid of any substance.

This is by design, so that sensitive information stays centrally controlled, leaks are difficult, and sudden changes in direction cause as little whiplash to ICs as possible.

I have the same reaction as you, but the system is working as intended. Better to just shut it out as you described and use the time to think about that issue you're having on a personal project or what toy to buy for your cat's birthday.

[–] [email protected] 13 points 1 month ago* (last edited 1 month ago)

Right, that sweet spot between too little stimulation, where your brain just wants to sleep or run away, and just enough that you can't zone out (or sleep).

[–] [email protected] 112 points 1 month ago (5 children)

Optimizing AI performance by “scaling” is lazy and wasteful.

Reminds me of back in the early 2000s when someone would say don’t worry about performance, GHz will always go up.

[–] [email protected] 29 points 1 month ago (1 children)

Thing is, same as with GHz, you have to push it as far as you can until the gains get too small. You do that, then you move on to the next optimization, which is what AI has done and is now doing: optimizing test-time compute, token quality, and other areas.
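
To make "optimizing test-time compute" concrete: the simplest version is spending extra inference compute to sample several answers and keep the best one. A minimal Python sketch, assuming placeholder generate and score functions rather than any particular vendor's API:

```python
import random

def generate(prompt: str, temperature: float = 0.8) -> str:
    """Stand-in for a language model call (assumed placeholder, not a real API)."""
    return f"candidate answer to {prompt!r} (t={temperature}, seed={random.random():.3f})"

def score(prompt: str, answer: str) -> float:
    """Stand-in for a verifier/reward model; here just a random heuristic."""
    return random.random()

def best_of_n(prompt: str, n: int = 8) -> str:
    # More test-time compute: sample n candidates and keep the best-scoring one.
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda answer: score(prompt, answer))

print(best_of_n("Why did the build fail?", n=4))
```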

[–] [email protected] 97 points 1 month ago (3 children)

They're throwing billions upon billions into a technology that is, at best, a novelty with extremely limited use cases. My god, even drones fared better in the long run.

[–] [email protected] 77 points 1 month ago (2 children)

I mean it's pretty clear they're desperate to cut human workers out of the picture so they don't have to pay employees that need things like emotional support, food, and sleep.

They want a workslave that never demands better conditions, that's it. That's the play. Period.

[–] [email protected] 31 points 1 month ago* (last edited 1 month ago) (2 children)

If this is their way of making AI, brute-forcing the technology without innovation, the infrastructure will probably cost these companies more to maintain than just hiring people would. These AI companies are already not making much money relative to what they cost to run, and unless they charge companies millions of dollars just to use their services, they will never make a profit. And since companies are trying to use AI to replace the millions they spend on employees, it seems kind of pointless if they aren't willing to prioritize efficiency.

It's basically the same argument they have with people. They don't wanna treat people like actual humans because it costs too much, yet letting them live happy lives makes them more efficient workers. Likewise, they don't want to spend money to make AI more efficient, yet increasing efficiency would make the models less expensive to run. It's the never-ending cycle of cutting corners only to eventually make less money than you would have if you'd done things the right way.

[–] [email protected] 31 points 1 month ago* (last edited 1 month ago)

Absolutely. It's maddening that I've had to go from "maybe we should make society better somewhat" in my twenties to "if we're gonna do capitalism, can we do it how it actually works instead of doing it stupid?" in my forties.

[–] [email protected] 12 points 1 month ago (1 children)

The oligarchs running these companies have suffered a psychotic break. What exactly the cause is I don't know, but the game they're playing is a lot less about profits now. They care about control and power over people.

I theorize it has to do with desperation over what they see as an inevitable collapse of the United States, and they are hedging their bets on holding onto the reins of power for as long as possible until they can fuck off to their respective bunkers while the rest of humanity eats itself.

Then, when things settle, they can peek their heads out of their hidey-holes and start their new utopian civilization or whatever.

Whatever's going on, profits are not the focus right now. They are grasping at ways to control the masses...and failing pretty miserably I might add...though something tells me that scarcely matters to them.

[–] [email protected] 15 points 1 month ago (3 children)

And the tragedy of the whole situation is that they can't win, because if every worker is replaced by an algorithm or a robot, then who's going to buy your products? Nobody has money because nobody has a job. And so the economy will shift to producing war machines that fight each other for territory to build more war machine factories, until you can't expand anymore for one reason or another. Then the entire system will collapse like the Roman Empire and we start from scratch.

[–] [email protected] 22 points 1 month ago (2 children)

Nah, generative AI is remarkably useful for software development. I've written dozens of product updates with tools like Claude Code and Cursor; dismissing it as a novelty is reductive and straight-up incorrect.

[–] [email protected] 49 points 1 month ago (9 children)
[–] [email protected] 12 points 1 month ago (2 children)

They're all pretty fired up at the update velocity tbh 🤷

[–] [email protected] 27 points 1 month ago* (last edited 1 month ago) (2 children)

Yeah, nothing pleases us more than constant, buggy updates.

[–] [email protected] 13 points 1 month ago (6 children)

Don't be an ass and realize that ai is a great tool for a lot of people. Why is that so hard to comprehend?

[–] [email protected] 29 points 1 month ago* (last edited 1 month ago) (3 children)

It's not hard to comprehend. It's that we literally have jackasses like Sam Altman arguing that if they can't commit copyright violations at an industrial scale and pace, their business model falls apart. Yet we're still nailing regular people for piracy on an individual scale. As always, individuals pay the price and are treated like criminals, but as long as you commit crime big enough and fast enough, on an industrial scale, we shake our heads, go "wow," and treat you like a fucking hero.

If the benefits of this technology were evenly distributed the argument might have a leg to stand on, but it is never evenly distributed. It is always used as a way to pay professionals less for work that is "just okay."

When a business buys the tools to use generative AI and shitcans employees to afford them, it has effectively used those employees' labor against them to replace them with something lesser. Their labor was exploited to replace them. The people who actually deserve the bonus of generative AI are losing their jobs, or being expected to be ten times more productive, instead of being allowed to cool their heels because they worked hard enough to have this doohickey work for them. No, it's always "line must go up, rich must get richer, fuck the laborers."

I'll stop being an ass about it when businesses stop burning out employees who already work hard, or straight-up firing them and replacing them with this bullshit, when their labor is what allowed the business to afford this bullshit in the first place. No manager or CEO can do all this labor on their own, but they get the fruits of all the labor their employees do as though they had done it all themselves, and it is fucked up.

I don't have a problem with technology that makes our lives easier. I don't have a problem with copyright violations (copyright as it exists is broken. It still needs to exist, just not in its current form).

What I have a problem with is businesses using this as an excuse to work their employees like slaves, or to replace the employees who allowed them to afford these tools with the tools themselves.

When everyone who worked hard to afford this stuff gets a paid vacation for helping to afford the tools and then comes back to an easier workload because the tools help that much, I'll stop being a fucking ass about it.

Like I said elsewhere, the bottom line is business owners want a slave that doesn't need things like sleep, food, emotional support, and never pushes back against being abused. I'm tired of people pretending like it's not what businesses want. I'm tired of people pretending this does anything except make already overworked employees bust even more ass.

[–] [email protected] 15 points 1 month ago (1 children)

As someone starting a small business, it has helped tremendously. I use a lot of image generation.

If that didn't exist, I'd either have to use crappy-looking clip art or pay a designer, which I literally can't afford.

Now my projects actually look good. They make my first projects look like a high schooler did them at the last minute.

There are many other uses, but I rely on it daily. My business can exist without it, but the quality of my product is significantly better and the cost to create it is much lower.

[–] [email protected] 29 points 1 month ago (8 children)

Your product is other people's work thrown in a blender.

Congrats.

[–] [email protected] 76 points 1 month ago* (last edited 1 month ago) (4 children)

It's ironic how conservative the spending actually is.

Awesome ML papers and ideas come out every week: low-power training/inference optimizations, fundamental changes in the math like BitNet, new attention mechanisms, cool tools to make models more controllable, steerable, and grounded. This is all getting funded, right?

No.

Universities and such are seeding and putting out all this research, but the big model trainers holding the purse strings/GPU clusters are not using it. They just keep releasing very similar, mostly bog-standard transformer models over and over again, bar a tiny expense for a little experiment here and there. In other words, it's fully corporate: tiny, guaranteed incremental improvements without changing much, and no sharing with each other. It's hilariously inefficient, and it relies on lies and jawboning from people like Sam Altman.

DeepSeek is what happens when a company is smart but resource-constrained: an order of magnitude more efficient, and even their architecture was very conservative.
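
For anyone wondering what "fundamental changes in the math like BitNet" refers to: BitNet-style models constrain weights to {-1, 0, +1}, so matrix multiplies become mostly additions and subtractions. A rough numpy sketch of the absmean-style ternary quantization idea, as I understand it (a simplified illustration, not the paper's actual training code):

```python
import numpy as np

def ternary_quantize(w: np.ndarray, eps: float = 1e-8):
    """Quantize a weight matrix to {-1, 0, +1} with one per-tensor scale (BitNet-b1.58-style absmean, simplified)."""
    scale = np.abs(w).mean() + eps                 # absmean scaling factor
    w_q = np.clip(np.round(w / scale), -1, 1)      # ternary weights
    return w_q.astype(np.int8), scale

def ternary_matmul(x: np.ndarray, w_q: np.ndarray, scale: float) -> np.ndarray:
    # With {-1, 0, +1} weights the matmul is mostly adds/subtracts;
    # the single float scale is applied once at the end.
    return (x @ w_q) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(16, 8)).astype(np.float32)
x = rng.normal(size=(4, 16)).astype(np.float32)
w_q, scale = ternary_quantize(w)
print(np.abs(x @ w - ternary_matmul(x, w_q, scale)).mean())  # rough approximation error
```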

[–] [email protected] 75 points 1 month ago (6 children)

The actual survey result:

Asked whether "scaling up" current AI approaches could lead to achieving artificial general intelligence (AGI), or a general purpose AI that matches or surpasses human cognition, an overwhelming 76 percent of respondents said it was "unlikely" or "very unlikely" to succeed. 

So they're not saying the entire industry is a dead end, or even that the newest phase is. They're just saying they don't think the current technology will produce AGI when scaled up. I think most people agree, including the investors pouring billions into this. They aren't betting it will turn into AGI; they're betting that they have some application for the current AI. Are some of those applications dead ends? Most definitely. Are some of them revolutionary? Maybe.

This would be like asking a researcher in the '90s whether, if we scaled up the bandwidth and computing power of the average internet user, we would see a vastly connected media-sharing network; they'd probably say no. It took more than a decade of software, cultural, and societal development to discover the applications for the internet.

[–] [email protected] 20 points 1 month ago (1 children)

It's becoming clear from the data that more error correction needs exponentially more data. I suspect that pretty soon we will realize that what's been built is a glorified homework cheater and a better search engine.

[–] [email protected] 33 points 1 month ago

what's been built is a glorified homework cheater and an ~~better~~ unreliable search engine.

[–] [email protected] 14 points 1 month ago (4 children)

I agree that it's editorialized compared to the very neutral way the survey puts it. That said, I think you also have to take into account how AI has been marketed by the industry.

They have been claiming AGI is right around the corner pretty much since chatGPT first came to market. It's often implied (e.g. you'll be able to replace workers with this) or they are more vague on timeline (e.g. OpenAI saying they believe their research will eventually lead to AGI).

With that context I think it's fair to editorialize to this being a dead-end, because even with billions of dollars being poured into this, they won't be able to deliver AGI on the timeline they are promising.

[–] [email protected] 60 points 1 month ago* (last edited 1 month ago) (3 children)

Technology in most cases progresses on a logarithmic scale when innovation isn't prioritized. We've basically reached the plateau of what LLMs can currently do without a breakthrough. They could absorb all the information on the internet and still not come close to what they claim it is. These days we're in the "bells and whistles" phase, where they add unnecessary bullshit to make it seem new, like adding 5 cameras to a phone or touchscreens to cars: things that make something seem fancy by slapping on buzzwords and features nobody needs, without actually changing anything except the price.

[–] [email protected] 43 points 1 month ago (17 children)

Me and my 5,000 closest friends don't like that the website and their 1,300 partners all need my data.

[–] [email protected] 35 points 1 month ago (9 children)

I liked generative AI more when it was just a funny novelty and not being advertised to everyone under the false pretenses of being smart and useful. Its architecture is incompatible with actual intelligence, and anyone who thinks otherwise is just fooling themselves. (It does make an alright autocomplete though).

[–] [email protected] 13 points 1 month ago

The peak of AI for me was generating images of Muppet versions of the Breaking Bad cast; it's been downhill since.

[–] [email protected] 23 points 1 month ago (4 children)

Meanwhile a huge chunk of the software industry is now heavily using this "dead end" technology 👀

I work in a pretty massive tech company (think the type that frequently acquires smaller ones and absorbs them).

Everyone I know here is using it. A lot.

However, my company also has tonnes of dedicated sessions and paid time to instruct its employees on how to use it well, how to get good value out of it, and what pitfalls it can have.

So yeah turns out if you teach your employees how to use a tool, they start using it.

I'd say LLMs have made me about 3x as efficient or so at my job.

[–] [email protected] 43 points 1 month ago* (last edited 1 month ago) (2 children)

Your labor before they had LLMs helped pay for the LLMs. If you're 3x more efficient and not also getting 3x more time off for the labor you previously put in so your bosses could afford the LLMs, you got ripped off, my dude.

If you're working the same amount and not getting more time to cool your heels, maybe, just maybe, your own labor was exploited and used against you. Hyping how much harder you can work just makes you sound like a bitch.

Real "tread on me harder, daddy!" vibes all throughout this thread. Meanwhile your CEO is buying another yacht.

[–] [email protected] 23 points 1 month ago* (last edited 1 month ago) (6 children)

I am indeed getting more time off for PD

We delivered a project 2 weeks ahead of schedule, so we were given raises, I got a promotion, and we were given 2 weeks to just do some chill PD at our own discretion as a reward. All paid on the clock.

Some companies are indeed pretty cool about it.

I was asked to give some demos and do some chats with folks to spread info on how we had such success, and they were pretty fond of my methodology.

At its core delivering faster does translate to getting bigger bonuses and kickbacks at my company, so yeah there's actual financial incentive for me to perform way better.

You're also ignoring the stress thing. If I can work 3x better, I can also deliver in almost the same time but spend all that freed-up time focusing on quality: polishing the product, documentation, double-checking my work, testing, etc.

Instead of scraping past the deadline by the skin of our teeth, we hit the deadline with a week or two to spare and spent a buncha extra time going over everything with a fine-tooth comb twice to make sure we didn't miss anything.

And instead of mad rushing 8 hours straight, it's just generally more casual. I can take it slower and do the same work but just in a less stressed out way. So I'm literally just physically working less hard, I feel happier, and overall my mood is way better, and I have way more energy.

[–] [email protected] 13 points 1 month ago

I will say that I am genuinely glad to hear your business is giving you breaks instead of breaking your backs.

[–] [email protected] 18 points 1 month ago* (last edited 1 month ago) (5 children)

It's not that LLMs aren't useful as they are. The problem is that they won't stay as they are today, because they are too expensive. There are two ways for this to go (or an eventual combination of both):

  • Investors believe LLMs are going to get better, so they keep pouring money into "AI" companies, allowing them to operate at a loss for longer. That's tied to the promise of an actual "intelligence" emerging out of a statistical model.

  • Investments stop pouring in, the bubble bursts, and companies need to make money out of LLMs in their current state. To do that, they need to massively cut costs and monetize. I believe that's called enshittification.

[–] [email protected] 17 points 1 month ago (4 children)

IMO our current versions of AI are too generalized; we add so much information to make them good at everything that it all mixes together into a single grey, hallucinating slop, and the AI ends up being good at nothing.

We need to find ways to specialize AI, and to give each AI a more consistent and concrete personality, in order to move forward.

[–] [email protected] 18 points 1 month ago (3 children)

IMO, to make an AI that is truly good at everything, we need multiple AIs, each designed to do something different, all working together (like the human brain does), instead of making every single AI a personality-less sludge: a jack of all trades, master of none.
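
A toy sketch of that "several specialists behind a router" idea in Python. The specialist functions and keyword routing here are made up for illustration; a real system would use a learned router or mixture-of-experts gating rather than keyword matching:

```python
def math_specialist(query: str) -> str:
    return f"[math model] working through: {query}"

def code_specialist(query: str) -> str:
    return f"[code model] drafting a fix for: {query}"

def general_specialist(query: str) -> str:
    return f"[general model] answering: {query}"

# Crude keyword router standing in for a learned gating network.
SPECIALISTS = {
    ("integral", "solve", "equation"): math_specialist,
    ("bug", "function", "compile"): code_specialist,
}

def route(query: str) -> str:
    q = query.lower()
    for keywords, specialist in SPECIALISTS.items():
        if any(keyword in q for keyword in keywords):
            return specialist(query)
    return general_specialist(query)

print(route("Why won't this function compile?"))
print(route("Solve this equation for x"))
```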

[–] [email protected] 16 points 1 month ago

There are some nice things I have done with AI tools, but I do have to wonder if the amount of money poured into it justifies the result.

[–] [email protected] 15 points 1 month ago* (last edited 1 month ago) (2 children)

The problem is that those companies are monopolies and can raise prices indefinitely to pursue this shitty dream, because they've got governments in their pockets. Governments are dependent on cloud / Microsoft software, literally every country on this planet, except maybe China, North Korea, and Russia. They could raise prices 10 times over the next 10 years and not give a fuck, spend a trillion on AI, say "we're nearly there" over and over again, and literally nobody can stop them right now.

[–] [email protected] 13 points 1 month ago

Pump and dump. That’s how the rich get richer.

[–] [email protected] 12 points 1 month ago

It's because customers don't want it or care for it; it's only the corporations themselves that are obsessed with it.
