this post was submitted on 03 May 2024
869 points (100.0% liked)

  • Rabbit R1 AI box is actually an Android app in a limited $200 box, running on AOSP without Google Play.
  • Rabbit Inc. is unhappy about details of its tech stack being public, threatening action against unauthorized emulators.
  • AOSP is a logical choice for mobile hardware as it provides essential functionalities without the need for Google Play.
[–] [email protected] 116 points 11 months ago (9 children)

Why are AI boxes popping up everywhere? They are useless. How many times do we need to repeat that LLMs are trained to give convincing answers, not correct ones? I've gained nothing from asking one of these glorified e-waste gadgets something and then pulling out my phone to verify the answer.

[–] [email protected] 57 points 11 months ago (1 children)

What I don't get is why anyone would want to buy a new gadget for some AI features. Just develop a nice app and let people run it on their phones.

[–] [email protected] 27 points 11 months ago* (last edited 11 months ago)

That's exactly why, though: they can monetize hardware. They can't monetize something a free app does.

[–] [email protected] 22 points 11 months ago (1 children)

The answer is "marketing"

They have pushed AI so hard in the last couple of years they have convinced many that we are 1 year away from Terminator travelling back in time to prevent the apocalypse

[–] [email protected] 6 points 11 months ago
  • Incredible levels of hype
  • Tons of power consumption
  • Questionable utility
  • Small but very vocal fanbase

s/Crypto/AI/

[–] [email protected] 13 points 11 months ago* (last edited 11 months ago) (7 children)

I just used ChatGPT to write a 500-line Python application that syncs IP addresses from asset management tools to our vulnerability management stack. This took about 4 hours using AutoGen Studio. The code just passed QA and is moving into production next week.

https://github.com/blainemartin/R7_Shodan_Cloudflare_IP_Sync_Tool
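
Under the hood, a tool like that is mostly API plumbing. Here's a minimal sketch of the general shape, with hypothetical endpoints, auth, and field names rather than the linked repo's actual code:

```python
import os

import requests

# Hypothetical endpoints and response shapes -- not the linked repo's actual APIs.
ASSET_API = "https://asset-manager.example.com/api/v1/assets"
VULN_API = "https://vuln-manager.example.com/api/v1/scan-targets"
HEADERS = {"Authorization": f"Bearer {os.environ['SYNC_TOKEN']}"}  # token from env


def fetch_asset_ips() -> set[str]:
    """Pull the current set of asset IPs from the asset management tool."""
    resp = requests.get(ASSET_API, headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return {asset["ip_address"] for asset in resp.json()["assets"]}


def fetch_target_ips() -> set[str]:
    """Pull the IPs the vulnerability scanner already knows about."""
    resp = requests.get(VULN_API, headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return {target["ip"] for target in resp.json()["targets"]}


def sync() -> None:
    """Add any asset IPs the scanner is missing."""
    missing = fetch_asset_ips() - fetch_target_ips()
    for ip in sorted(missing):
        requests.post(VULN_API, headers=HEADERS, json={"ip": ip}, timeout=30).raise_for_status()
    print(f"added {len(missing)} IPs")


if __name__ == "__main__":
    sync()
```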

Tell me again how LLMs are useless?

[–] [email protected] 22 points 11 months ago (1 children)

To be honest… that doesn’t sound like a heavy lift at all.

[–] [email protected] 10 points 11 months ago (1 children)

The dream of tech bosses everywhere: pay an intermediate dev for average senior-level output.

[–] [email protected] 13 points 11 months ago* (last edited 11 months ago)

Intermediate? Nah, junior. They're cheaper, after all.

But senior devs do a lot more than output code. Sometimes - like Bill Atkinson's famous -2000-line change to QuickDraw - their jobs involve a lot of complex logic and very little actual code output.

[–] [email protected] 14 points 11 months ago

It's a shortcut for experience, but you lose a lot of the tools you get with experience. If I were early in my career, I'd be very hesitant to rely on it, as it's a fragile ecosystem right now that might disappear, in the same way that you want to avoid tying your skills to a single company's product. In my workflow it slows me down, because the answers I get are often average or wrong; it's never "I'd never have thought of doing it that way!" levels of amazing.

[–] [email protected] 14 points 11 months ago (2 children)

You used the right tool for the job, and it saved you hours of work. General AI is still a long way off, and people expecting the current models to behave like one are foolish.

Are they useless? For writing code, no. For most other tasks, yes, or worse, since they will be confidently wrong about what you ask them.

[–] [email protected] 11 points 11 months ago

I think the reason they're useful for writing code is that there's a third party - the parser or compiler - that checks their work. I've used LLMs to write code as well, and while they didn't always give me something that worked, I was easily able to catch the errors.
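
In Python, that third-party check is one call to the standard-library parser. A minimal sketch:

```python
import ast

# A snippet as an LLM might return it, with a typical slip: a missing closing paren.
generated = '''
def greet(name):
    print(f"hello, {name}"
'''

try:
    ast.parse(generated)  # the "third party": Python's own parser
    print("parses fine")
except SyntaxError as err:
    print(f"rejected before it ever runs: {err}")
```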

[–] [email protected] 3 points 11 months ago (10 children)

Are they useless?

Only if you believe most Lemmy commenters. They're convinced you can only use them to write shitty, broken code and nothing else.

[–] [email protected] 8 points 11 months ago (1 children)

This is my experience with LLMs: I have gotten them to write code that can at best be used as a scaffold. I personally do not find much use for them, as you functionally have to proofread everything they do. All it does is change the workload from a creative process to a review process.

[–] [email protected] 5 points 11 months ago (1 children)

I don't agree. Just a couple of days ago I went to write a function to do something sort of confusing to think about. From the name of the function, Copilot suggested the entire contents of the function, and it worked fine. I consider this removing a bit of drudgery from my day, as this function was a small part of the problem I needed to solve. It actually allowed me to stay more focused on the bigger picture, which I consider the creative part. If I were a painter and my brush suddenly did certain techniques better, I'd feel more able to be creative, not less.

[–] [email protected] 5 points 11 months ago (14 children)

There's no sense trying to explain to people like this. Their eyes glaze over when they hear AutoGen, agents, CrewAI, RAG, Opus… To them, generative AI is nothing more than the free version of ChatGPT from a year ago. They haven't kept up with the advancements, so they argue from a point in the distant past. The future will be hitting them upside the head soon enough, and they will be the ones complaining that nobody told them what was coming.

[–] [email protected] 4 points 11 months ago

Who's going to tell them that "QA" just ran the code through the same AI model and it came back "Looks Good"?

:-)

[–] [email protected] 4 points 11 months ago (4 children)

The code is bad and I would not approve it. I don't know how you think it's a good example of what LLMs can do.

[–] [email protected] 11 points 11 months ago (3 children)

It's not black or white.

Of course AI hallucinates, but not everything an LLM produces is garbage.

Don't expect a "living" Wikipedia or Google, but it sure can help with things like coding or translating.

[–] [email protected] 9 points 11 months ago

I don't necessarily disagree. You can certainly use LLMs and achieve something in less time than without them. Many people here are talking about coding, and while I've had no success with them, they can work with more popular languages. The thing is, these people use LLMs as a tool in their process. They verify the results (or the compiler does it for them). That's not what this product is. It's a standalone device you talk to, and it's supposed to replace pulling out your phone to answer a question.

[–] [email protected] 8 points 11 months ago (3 children)

The most convincing answer is the correct one. The correlation of AI answers with correct answers is fairly high; numerous tests show that. The models have also significantly improved (especially the paid versions) since their introduction just two years ago.
Of course that doesn't mean they can be trusted as much as Wikipedia, but they're probably a better source than Facebook.

[–] [email protected] 21 points 11 months ago (12 children)

"Fairly high" is still useless (and doesn't actually quantify anything, depending on context both 1% and 99% could be 'fairly high'). As long as these models just hallucinate things, I need to double-check. Which is what I would have done without one of these things anyway.

[–] [email protected] 11 points 11 months ago (3 children)

An LLM has never generated a correct answer to any of my queries.

[–] [email protected] 16 points 11 months ago (1 children)

That seems unlikely, unless "any" means two.

[–] [email protected] 6 points 11 months ago (4 children)

Perhaps the problem is that I never bothered to ask anything trivial enough, but you'd think that two rhyming words starting with "L" would be simple.

[–] [email protected] 5 points 11 months ago

I've asked GPT-4 to write specific Python programs, and more often than not it does a good job. And if the program is incorrect, I can tell it about the error and it will often manage to fix it for me.
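
That tell-it-about-the-error loop can even be automated. A minimal sketch, assuming the openai Python client (v1 API) and a reply that is plain code with no markdown fences:

```python
import subprocess
import sys

from openai import OpenAI  # assumes the openai package and OPENAI_API_KEY are set up

client = OpenAI()
messages = [{
    "role": "user",
    "content": "Write a Python script that prints the first 10 primes. Reply with code only, no markdown.",
}]

for attempt in range(3):
    reply = client.chat.completions.create(model="gpt-4", messages=messages)
    code = reply.choices[0].message.content
    # Run the generated program and capture any traceback.
    run = subprocess.run([sys.executable, "-c", code], capture_output=True, text=True)
    if run.returncode == 0:
        print(run.stdout)
        break
    # Feed the error straight back, just like telling it about the error by hand.
    messages += [
        {"role": "assistant", "content": code},
        {"role": "user", "content": f"That raised an error:\n{run.stderr}\nPlease fix it."},
    ]
```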

[–] [email protected] 7 points 11 months ago (2 children)

I think it's a delayed development reaction to Amazon Alexa from 4 years ago. Alexa came out, and suddenly voice assistants were everywhere. Someone wanted to cash in on the hype, but consumer product development takes a really long time.

So the product is finally finished (mobile Alexa), and they label it AI to hype it, as well as to make it work without the hard work of parsing Wikipedia for good answers.

[–] [email protected] 9 points 11 months ago (1 children)

Alexa is a fundamentally different architecture from the LLMs of today. There is no way that anyone with even a basic understanding of modern computing would say something like this.

[–] [email protected] 4 points 11 months ago* (last edited 11 months ago)

Alexa is a fundamentally different architecture from the LLMs of today.

Which is why I explicitly said they used AI (an LLM) instead of the harder-to-implement but more accurate Alexa method.

Maybe actually read the entire post before being an ass.

[–] [email protected] 5 points 11 months ago

Alexa and Google Home came out nearly a decade ago.

[–] [email protected] 5 points 11 months ago (1 children)

I have now heard of my first "AI box". I'm on Lemmy most days. Not sure how it's an epidemic...

[–] [email protected] 10 points 11 months ago

I haven't seen many of them here, but I use other media too. E.g., not long ago there was a lot of coverage of the "Humane AI Pin", which was utter garbage and even more expensive.

[–] [email protected] 4 points 11 months ago

I just started diving into the space yesterday, coming at it from the local-models side. And I can say that there are definitely problems with garbage spewing, but some of these models are getting really, really good at very specific things.

A biomedical model I saw was lauded for its consistency in pulling relevant data from medical notes for the sake of patient care instructions, important risk factors, fall risk level, etc.

So although I agree they're still giving well-phrased garbage for big general cases (and GPT-4 seems to be much more "savvy"), the specific use cases are getting much better, and I'm stoked to see how that continues.