this post was submitted on 26 Feb 2024
65 points (100.0% liked)

Technology

all 34 comments
[–] [email protected] 25 points 1 year ago* (last edited 1 year ago)

I liked this article, and I think a lot of the commenters here are missing that the general public is treating LLMs as AGI. I spend a whole 5-10 minutes on why that is whenever I present about LLMs.

"The I in LLM stands for Intelligence" is a joke I read (and include in my presentation to hammer the point home). Laymen have no idea what AI or LLMs are; they expect them to work like human intelligence, since that's the only model they know, and are surprised to learn they don't.

Edit: Forgot what I came to the comments to post, before I read everyone else's complaints about this, lol.

A small correction: the Air Canada example wasn't an LLM, it was just an old "dumb" chatbot that was likely sharing outdated policies.

[–] [email protected] 21 points 1 year ago (3 children)

gasp you mean, industry is lying to investors about a new technology to get more investment and creating a false narrative for the public to undermine criticism? Who could have seen this coming!

[–] [email protected] 3 points 1 year ago* (last edited 1 year ago) (1 children)

I'm a scientist entering the industry and I can't agree more. Too many lies. There are a handful of companies that do deliver, but generally speaking, many businesses seem to bet on the naivety of investors. Some even do it unintentionally, because the bar is simply too low.

But here's the thing: I've noticed that there are naive people who will follow weak narratives no matter what. These people won't question your arguments. And if you run a business, they are apparently your most solid support.

Edit: it's perhaps also true that this majority of investors forces companies to lie, by investing almost solely in LLMs.

[–] [email protected] 1 points 1 year ago (1 children)

I think there's just too much faith from the current crop of investors in tech start-ups. Many got burned in the past by passing on things after not getting good answers to "ok, but how does this make money?" or "can this actually do what you're claiming?"

And larger, more established companies like Google and Amazon are happy to feed the hype for a lot of these trends, particularly when all the new start-ups are going to be buying stuff from them. Even if the start-ups fail because they can't make money or don't do what they claim, the big companies still made money selling them server space, computing time, or huge amounts of data. I think investors who hold stakes in the big companies lean into the hype for this reason too.

Everyone has a pretty good incentive to lean into the hype, so they do.

[–] [email protected] 2 points 1 year ago

Until about 3 months ago, I felt the ChatGPT revolution was still going on. Every 10-year plan my colleagues in AI research had was actually completed in a few weeks(!) by a completely different research team on the opposite side of the planet. The hype was so high that every expert had the same plans, resulting in surreal competitions.

After that, the LLM businesses entered the B2B space, with all their potential customers asking ChatGPT to search for information in their piles of documents. That was the next big thing.

We haven't heard back from the pile of garbage so far...

[–] [email protected] 2 points 1 year ago

hustles to find pearls to clutch

[–] [email protected] 17 points 1 year ago (4 children)

I use quotation marks there because what is often referred to as AI today is not whatsoever what the term once described.

The field of AI has been around for decades and covers a wide range of technologies, many of them much "simpler" than the current crop of generative AI. What is often referred to as AI today is absolutely what the term once described, and still does describe.

What people seem to be conflating is the general term "AI" and the more specific "AGI", or Artificial General Intelligence. AGI is the stuff you see on Star Trek. Nobody is claiming that current LLMs are AGI, though they may be a significant step along the way to that.

I may be sounding nitpicky here, but this is the fundamental issue the article is complaining about. People are not well educated about what AI actually is and what it's good at. It's good at a huge range of things, it's genuinely revolutionary, but it's not good at everything. It's not the fault of AI when people fail to grasp that, any more than it's the fault of a car when someone gets in and is annoyed that it won't take them to the Moon.

[–] [email protected] 8 points 1 year ago

People are not well educated about what AI actually is and what it’s good at.

And half the reason they're not educated about it is that AI companies are actively and intentionally misinforming them. AI companies sell people these products using words like "thinking", "assessing", "reasoning", and "learning", none of which accurately describe current AI, though they would describe AGI.

[–] [email protected] 2 points 1 year ago

Oop, wish I'd read this comment before mine. 100% right

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago) (1 children)

AGI is the stuff you see on Star Trek.

Clarification: AGI describes Data, Moriarty, and Peanut Hamper, but it doesn't describe the Enterprise's computer, which has speech recognition but is less intelligent than an LLM.

[–] [email protected] 2 points 1 year ago (1 children)

I didn't say that everything in Star Trek was AGI, just that you can find examples there.

[–] [email protected] 2 points 1 year ago

I shall amend my comment to say clarification instead of correction.

[–] [email protected] 14 points 1 year ago (2 children)

"...AI" concerns me. I use quotation marks there because what is often referred to as AI today is not whatsoever what the term once described.

Lost me right there. Not only was and is this AI, but the term gets narrower over time, not broader. If you want to go by "what the term once described," you have to include computer vision, text to speech, optical character recognition, behavior trees for video game enemies, etc etc etc.

When I see people complain about calling LLMs "AI," I think the only definition that would satisfy them is "things computers can do that we aren't used to yet."
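To make the point above concrete: behavior trees for game enemies are one of those decades-old techniques that have always been called AI. Here is a minimal, self-contained sketch of the idea (all names and structure here are illustrative, not from any real game engine):

```python
# A minimal behavior tree for a video game enemy -- a "simple",
# decades-old technique that has always fallen under the term "AI".
# All names here are hypothetical, for illustration only.

def sequence(*children):
    # Composite node: succeeds only if every child succeeds, in order.
    def run(state):
        return all(child(state) for child in children)
    return run

def selector(*children):
    # Composite node: tries children in order, succeeds on the first success.
    def run(state):
        return any(child(state) for child in children)
    return run

def can_see_player(state):
    # Condition leaf: succeeds when the player is visible.
    return state["player_visible"]

def attack(state):
    # Action leaf: record the chosen action and report success.
    state["action"] = "attack"
    return True

def patrol(state):
    # Fallback action leaf.
    state["action"] = "patrol"
    return True

# Attack if the player is visible, otherwise fall back to patrolling.
enemy_ai = selector(sequence(can_see_player, attack), patrol)

state = {"player_visible": False}
enemy_ai(state)
print(state["action"])  # patrol
```

A tree like this has no learning and no statistics at all, yet it squarely fits the historical (and current) definition of AI, which is exactly the commenter's point.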

[–] [email protected] 2 points 1 year ago

Yeah, bruh is using the Halo definition of AI. Probably played too many video games instead of actually paying attention to the history of computing.

[–] [email protected] 11 points 1 year ago

I've seen so many bots on Lemmy summarizing the contents of websites, and I've blocked all of them because of this. They're not reliable, and yet I still caught myself reading those summaries. I don't even want to know how many summaries that appear in post bodies were just generated by an LLM.

[–] [email protected] 8 points 1 year ago

I think it's less of an issue of LLMs being drunk and more that ostensibly sober people put them behind the wheel totally aware of how drunk they are while telling everyone that they're stone cold sober.

[–] [email protected] 4 points 1 year ago (1 children)

This is the reason I balk at personifying these things with human terms. It sounds cool, but it's both inaccurate and misleading, especially in the hands of the media and the general public.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (1 children)

The dilemma is that ChatGPT can write better reports than most graduate students in my country, because what the problematic vast majority of students do is remember, not analyze.

Specifically, in this context, students are not trained to analyze what they're asked (the input query, in ChatGPT terms). When I ask a unique question in their assignment, they can't even form a response. They just write generic text that doesn't try to answer my question.

They seem to copy and paste what's in their brains. And when it comes to copying and pasting, i.e. mimicking what people do, ChatGPT is the champion in some sense. Hell, OpenAI even tuned it to generate a balanced stance, which is also something students can't do.

Finally, 90% of the population actually performs worse than these graduate students.

[–] [email protected] 2 points 1 year ago

It is sad, but most people seem to go to school for certification, not learning. I used to grade when I was in grad school... the lazy, sloppy work was nuts. And working for a company now... the terrible writing some people produce, even with advanced degrees.

[–] [email protected] 2 points 1 year ago

I, for one, think LLMs are more intelligent than an ant. The writer of this piece is using the movie definition of AI instead of the real-world definition of AI.