this post was submitted on 10 Oct 2023
166 points (93.2% liked)

‘Overhyped’ generative AI will get a ‘cold shower’ in 2024, analysts predict

Analyst firm CCS Insight predicts generative AI will get a "cold shower" in 2024 as concerns over growing costs replace the "hype" surrounding the technology.

top 29 comments
[–] [email protected] 45 points 2 years ago* (last edited 2 years ago) (1 children)

Honest title: lazy analyst pretends to be smart by recycling an overused Gartner graph

Reference: https://en.wikipedia.org/wiki/Gartner_hype_cycle

[–] [email protected] 18 points 2 years ago

an overhyped thing won't be as hyped in the near future?

who would've thunk

[–] [email protected] 44 points 2 years ago (2 children)

We're getting customers who want to use LLMs to query databases and such. I fully expect that to work well 95% of the time, but not always, while looking like it always works correctly. You can tell customers a hundred times that it's not 100% reliable, and they'll still forget.

So, at some point, that LLM will randomly run a completely nonsensical query, returning data that's so wildly wrong that the customers notice. And precisely that is the moment when they'll realize: holy crap, this thing isn't always reliable?! It's been giving us inaccurate information in 5% of the queries?! Why did no one inform us?!?!?!

And then we'll tell them that we did inform them and no, it cannot be fixed. Then the project will get cancelled and everyone lived happily ever after.

Or something. Can't wait to see it.
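This is the failure mode the comment is describing: LLM-generated SQL that usually works but occasionally doesn't, with nothing in the interface to tell the two cases apart. Below is a minimal sketch in Python of the kind of guardrail such a system needs, assuming a SQLite backend; the function name, regex check, and row limit are illustrative, not anything from the commenter's actual product.

```python
import re
import sqlite3

def run_llm_query(conn: sqlite3.Connection, generated_sql: str, row_limit: int = 1000):
    """Execute LLM-generated SQL only if it looks like a single read-only SELECT."""
    sql = generated_sql.strip().rstrip(";")
    # Refuse anything that isn't a plain SELECT, or that smuggles in a second statement.
    if not re.match(r"(?is)^\s*select\b", sql) or ";" in sql:
        raise ValueError(f"Refusing to run non-SELECT or multi-statement SQL: {sql!r}")
    # Cap the result size so a runaway query can't flood the caller.
    cursor = conn.execute(f"SELECT * FROM ({sql}) LIMIT {row_limit}")
    return cursor.fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?)", [("EU", 10.0), ("US", 25.0)])
    # Pretend this string came back from the model.
    print(run_llm_query(conn, "SELECT region, SUM(amount) FROM sales GROUP BY region"))
```

Even with a guard like this, a syntactically valid SELECT can still answer the wrong question, which is the 5% case the comment worries about; the guard only stops the query from doing damage, not from being wrong.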

[–] [email protected] 18 points 2 years ago (2 children)

Would you trust a fresh-out-of-college intern to do it? That's been my metric for deciding when to rely on LLMs.

[–] [email protected] 9 points 2 years ago

Yup, this is the way to think about LLMs: an infinite supply of eager interns willing to try anything and never trusting themselves to say "I don't know".

[–] [email protected] 5 points 2 years ago (1 children)
[–] [email protected] -1 points 2 years ago (1 children)

For a while now I've been speculating that people raving about these things are just bad at their jobs; I've never been able to get anything useful out of an LLM.

[–] [email protected] 1 points 2 years ago

If you have a job that involves diagnosing things, or a wide array of different problems that change day to day, it's extremely useful. If you do the same thing over and over again, it may not be as useful.

[–] [email protected] 6 points 2 years ago* (last edited 2 years ago)

You’re right, but it’s worse than that. I have been in the game for decades. One bum formula and the whole platform loses credibility. There isn’t a customer on the planet who’ll see it as just a 5% problem.

[–] [email protected] 36 points 2 years ago (5 children)

Seeing people say they’re saving lots of time with LLMs makes me wonder how much menial busywork other people do compared to me. I find so few things in my day where using these tools wouldn’t just make me a babysitter for a dumb machine.

[–] [email protected] 9 points 2 years ago

It’s great for programming and writing formal messages. I never know where to get started on messages so I give the AI a summary of what I’m trying to say. That gives me a very wordy base to edit to my liking.

[–] [email protected] 7 points 2 years ago (1 children)

It's great for writing LaTeX.

latexify

Input: sum i=0 to n ( x_i dot (nabla f(x)) x e_r) = 0

Output:
\[
\sum_{i=0}^{n} \left( x_i \cdot (\nabla f(x)) \times e_r \right) = 0
\]

Also great at positioning images and fixing weird layout issues.
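For anyone curious what that workflow looks like in practice, here is a rough sketch, assuming the OpenAI Python client; the model name, prompt, and latexify wrapper are placeholders rather than anything the commenter actually uses, and any chat-style LLM API would work the same way.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def latexify(pseudo_math: str) -> str:
    """Ask the model to turn informal math into a LaTeX display equation."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Convert the user's informal math into LaTeX. "
                        "Reply with only the \\[ ... \\] block, no commentary."},
            {"role": "user", "content": pseudo_math},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(latexify("sum i=0 to n ( x_i dot (nabla f(x)) x e_r) = 0"))
```

On the input above this should produce the \[ ... \] block shown, though as the rest of the thread notes, the output still needs a human check.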

[–] [email protected] -1 points 2 years ago

You don't need an LLM for converting pseudocode to LaTeX. LLMs surely help with programming (in my experience), but I feel like your example isn't really doing them justice :p

[–] [email protected] 6 points 2 years ago (1 children)

Depends on what you do. I personally use LLMs to write preliminary code and do cheap worldbuilding for D&D. Saves me a ton of time. My brother uses it at a medium-sized business to write performance evaluations... it's actually funny to see how his queries are set up: basically the employee's name, job title, and three descriptors. He can do in 20 minutes what used to take him all day.

[–] [email protected] 5 points 2 years ago (1 children)

What your brother is doing is a pretty good example of why this stuff needs to be regulated better. People's performance evaluations are not the kind of thing that these tools are equipped to do properly.

[–] [email protected] 1 points 2 years ago

The manager could have metrics that they feed the AI so it can turn them into something wordy. Some people really like to read a whole page of text about how they are doing.

[–] [email protected] 4 points 2 years ago

I use them to make worksheets for middle school students and to quickly write lesson plans from ideas.

[–] [email protected] 3 points 2 years ago

I use AI all the time in my work. With one of my tools I can type in a script and have a fully-acted, fully-voiced virtual instructor added to the training we create. Saves us massively in both time and money and increases engagement.

This is how AI will truly sweep through the market: small improvements, incrementally developed upon, just like every other technology. White-collar workers will be impacted first, blue-collar workers second, as the technology continues to develop.

My friend is an AI researcher as part of his overarching role as an analyst for a massive insurance company, and they're developing their own internal LLM. The things AI can do will be absolutely market-shattering over time.

Anyone suggesting AI is just a fad/blip is about as naive as someone saying that about the internet in 1994, in my view.

[–] [email protected] 9 points 2 years ago

2024 headline: "Analyst replaced by generative AI"

[–] [email protected] 6 points 2 years ago (3 children)

In the meantime, I'm using ChatGPT at work every day now, and I'm able to work much faster because of it.

To me it's a next-generation search engine. For tech queries it's correct a lot of the time.

[–] [email protected] 9 points 2 years ago (1 children)

Once it stops giving me non-existent PowerShell commands, I'll give it another go, but for now it has wasted enough of my time.

[–] [email protected] 8 points 2 years ago (1 children)

Or non-existent switches for Linux CLI commands.

[–] [email protected] 5 points 2 years ago

The worst part is how eager it is to give you a non-existent switch or CLI option. Like, if it gives you some multi-line solution, all you have to do is say something like "are you sure there's not an option where I can do this in one line?" and it'll go, "Oh yeah, you're totally right, you can just use this non-existent thing that totally won't work! Sorry about the confusion!"

[–] [email protected] 6 points 2 years ago (1 children)

Unfortunately that hasn't been my experience, but I'm only using it to find answers for things a couple of DDG queries won't solve, because traditional search engines are so much faster.

[–] [email protected] 3 points 2 years ago

Yeah I think it depends so much on context. For my tech queries it's usually spot on.

[–] [email protected] 5 points 2 years ago

I'm finding it useful for detecting and correcting really simple mistakes, syntax errors, and stuff like that.

But I'm finding it mostly useless for anything more complicated.