[–] [email protected] 69 points 2 weeks ago (1 children)

CEOs are SO INTELLIGENT! I would NEVER have Thought to invest BILLIONS OF DOLLARS on Chatbots and Summarizers which ALREADY existed!

[–] [email protected] 6 points 2 weeks ago (3 children)

Trying not to be too douchey here, but ironically, your message is actually a very good example of where this technology could be beneficial.

IT is ACTUALLY not EASY to read a MESSAGE when THE CASE randomly SWITCHES back AND forth.

[–] [email protected] 3 points 1 week ago

im going to ask chatgpt to give you a swirlie

[–] [email protected] 3 points 1 week ago

It's not difficult lol. It's not particularly pleasant, sure, but it's perfectly comprehensible

[–] [email protected] 55 points 2 weeks ago
[–] [email protected] 25 points 2 weeks ago (1 children)
[–] [email protected] 16 points 2 weeks ago (1 children)
[–] [email protected] 7 points 2 weeks ago

😭♥️♥️

[–] [email protected] 23 points 2 weeks ago

"Caden, it looks like Airlynn just said you're a hopeless loser, and she's been banging your personal trainer Chad. Is there anything else I can help you with?"

[–] [email protected] 17 points 2 weeks ago

They're inserting themselves between us and our contacts. They'll have the power to "summarize", which is also the power to subtly reinterpret meaning. The AI will be given a broad goal, and it will chip away at that goal bit by bit, across millions of summaries, to mold public opinion.

[–] [email protected] 10 points 2 weeks ago

Tinder needs this function right now!

[–] [email protected] 9 points 2 weeks ago (2 children)

I don't use WhatsApp, but this immediately made me think of my dad who doesn't use any punctuation and frequently skips and misspells words. His messages are often very difficult to interpret, through no fault of his own (dyslexia).

Having an LLM do this for me would help both him and me.

He won't feel self-conscious when I send a "What you talkin' about, Willis?" message, and I won't have to waste a ridiculous amount of time trying to figure out what he was trying to say.

[–] [email protected] 40 points 2 weeks ago (1 children)

If he's not communicating in an explicit and clear way, the AI can't help you magically gain context. It will happily make up bullshit that sounds plausible, though.

[–] [email protected] 3 points 2 weeks ago (2 children)

A poorly designed tool will do that, yes. An effective tool would do the same thing a person could do, except much quicker and with greater success.

An LLM could be trained on the way a specific person communicates over time, and it could be designed to do a forensic breakdown of misspelt words, e.g. checking whether a mistyped letter sits next to the intended one on the keyboard, or identifying words that are spelled differently but phonetically similar.
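To make that concrete, here is a minimal sketch of what such a forensic breakdown could look like; the adjacency map, the crude phonetic signature, and the scoring heuristic are all illustrative assumptions, not any real product's method:

```python
# Hypothetical sketch: guess the intended word behind a typo by
# combining QWERTY-key adjacency with a crude phonetic signature.

# Neighbouring keys on a QWERTY layout (letters only).
QWERTY_NEIGHBOURS = {
    "q": "wa", "w": "qes", "e": "wrd", "r": "etf", "t": "ryg",
    "y": "tuh", "u": "yij", "i": "uok", "o": "ipl", "p": "o",
    "a": "qsz", "s": "awdx", "d": "sefc", "f": "drgv", "g": "fthb",
    "h": "gyjn", "j": "hukm", "k": "jil", "l": "ko",
    "z": "ax", "x": "zsc", "c": "xdv", "v": "cfb", "b": "vgn",
    "n": "bhm", "m": "nj",
}

def adjacency_score(typo: str, candidate: str) -> float:
    """Fraction of positions where the typed letter matches the
    candidate's letter or sits on a neighbouring key (fat-finger slip)."""
    if len(typo) != len(candidate):
        return 0.0
    hits = sum(
        t == c or c in QWERTY_NEIGHBOURS.get(t, "")
        for t, c in zip(typo, candidate)
    )
    return hits / len(candidate)

def phonetic_key(word: str) -> str:
    """Very rough phonetic signature: keep the first letter, then drop
    vowels and collapse repeats (a stand-in for Soundex/Metaphone)."""
    word = word.lower()
    out = [word[0]]
    for ch in word[1:]:
        if ch in "aeiou" or ch == out[-1]:
            continue
        out.append(ch)
    return "".join(out)

def best_guess(typo: str, vocabulary: list[str]) -> str:
    """Prefer candidates that both look like a keyboard slip and
    sound like the typo."""
    def score(cand: str) -> float:
        phonetic = 1.0 if phonetic_key(cand) == phonetic_key(typo) else 0.0
        return adjacency_score(typo, cand) + phonetic
    return max(vocabulary, key=score)

print(best_guess("tonorrow", ["tomorrow", "tornado", "borrow"]))  # tomorrow
print(best_guess("grest", ["great", "crest", "guest"]))           # great
```

An LLM trained on one person's history would presumably learn these patterns implicitly rather than through hand-written rules like this, but the signal it would pick up on is the same.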

[–] [email protected] 13 points 2 weeks ago (1 children)

the same thing a person could do

asking for clarification seems like a reasonable thing to do in a conversation.

A tool is not about to do that because it would feel weird and creepy for it to just take over the conversation.

[–] [email protected] 1 points 2 weeks ago* (last edited 2 weeks ago)

The intent isn't for the LLM to respond for you; it's just to interpret a message and offer suggestions about what it means, or to rewrite it clearly (while still displaying the original).

[–] [email protected] 10 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

An LLM could be trained on the way a specific person communicates over time

Are there any companies doing anything similar to this? From what I've seen, companies avoid this stuff like the plague; their LLMs are always frozen, with no custom training. Training takes a lot of compute, but it also carries a huge risk of the LLM going off the rails and saying bad things that could get the company into trouble or generate bad publicity. There's also the disk space per customer and the loading time of individual models to consider.

The only hope for your use case is that the LLM has a large enough context window to look at previous examples from your chat and use those for each request, but that isn't the same thing as training.
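As a rough illustration of that context-window approach, each request would just carry a few earlier garbled-to-clear pairs from the chat history as few-shot examples; everything here (the pairs, the prompt wording) is hypothetical:

```python
# Hypothetical sketch: per-request few-shot prompting instead of
# per-user training. The frozen model only "sees" these examples for
# the duration of one request; nothing is learned or stored.

FEW_SHOT_PAIRS = [
    ("c u at hte staton 5", "See you at the station at 5."),
    ("dont forgt teh keys", "Don't forget the keys."),
]

def build_prompt(new_message: str) -> str:
    parts = ["Rewrite each message clearly, keeping its meaning:"]
    for garbled, clear in FEW_SHOT_PAIRS:
        parts.append(f"Message: {garbled}\nRewrite: {clear}")
    parts.append(f"Message: {new_message}\nRewrite:")
    return "\n\n".join(parts)

# The resulting string would be sent to the frozen LLM as-is.
print(build_prompt("runing late b their at 7"))
```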

[–] [email protected] 1 points 2 weeks ago (1 children)

There are plenty of people and organisations doing stuff like this; there are plenty of examples on Hugging Face, though typically it's to get an LLM to communicate in a specific manner (e.g. this one trained on Lovecraft's works). People drastically overestimate the compute time and resources it takes to train and run an LLM; do you think Microsoft could force their AI onto every single Windows computer if it were as challenging as you imply? Also, you don't need to start from scratch: take a model that's already robust and well developed and fine-tune it with additional training data, or, for a hack job, just merge a LoRA into the base model, as sketched below.
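For reference, here is roughly what that LoRA-merge hack looks like with the Hugging Face transformers and peft libraries; the model name and adapter path below are placeholders, not recommendations:

```python
# Sketch: fold a fine-tuned LoRA adapter back into its base model so
# the result ships as one ordinary standalone model.
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Load the frozen base model (placeholder name).
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# Attach the low-rank adapter trained on top of it (placeholder path).
model = PeftModel.from_pretrained(base, "./my-chat-style-lora")

# Merge the adapter weights into the base weights and drop the
# adapter wrappers, leaving a plain transformers model.
merged = model.merge_and_unload()
merged.save_pretrained("./merged-model")
```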

The intent, by the way, isn't for the LLM to respond for you; it's just to interpret a message and offer suggestions about what it means, or to rewrite it clearly (while still displaying the original).

[–] [email protected] 3 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

Hugging Face isn't customer-facing; it's developer-facing. Letting customers retrain your LLM sounds like a bad idea for a company like Meta or Microsoft; it's too risky and could make them look bad. Retraining an LLM for Lovecraft is a totally different scale from retraining an LLM for hundreds of millions of individual customers.

do you think Microsoft could force their AI on every single Windows computer if it was as challenging as you imply?

It's a cloned image, not unique per computer

[–] [email protected] 1 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

Hugging Face being developer-facing is completely irrelevant, considering the question you asked was whether I was aware of any companies doing anything like this.

Your concern that companies like Meta and Microsoft are too scared to let users retrain their models is also irrelevant, considering both companies have already released models that anyone can retrain or checkpoint-merge, i.e. Llama by Meta and Phi by Microsoft.

It’s a cloned image, not unique per computer

Microsoft's Copilot works off a base model, yes, but it's just an example that LLMs aren't as compute-intensive as they're made out to be. Further automated fine-tuning isn't out of the realm of possibility either, and I fully expect Microsoft to do this in the future.

[–] [email protected] 3 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

Your concern that companies like Meta and Microsoft are too scared to let users retrain their models is also irrelevant, considering both companies have already released models that anyone can retrain or checkpoint-merge, i.e. Llama by Meta and Phi by Microsoft.

they release them to developers; they don't automatically retrain them unsupervised in their actual products and put them in front of customers, who would share screenshots of the AI's failures on social media and give it a bad name

[–] [email protected] 1 points 2 weeks ago (1 children)

They release them under permissive licences so that anyone can do that.

[–] [email protected] 2 points 2 weeks ago* (last edited 2 weeks ago)

yea someone could take the model and make their own product, with their own PR and public perception

that's very different from directly spoonfeeding it as a product to general consumers inside WhatsApp or something

it's like saying someone can mod Skyrim to put nude characters in it; that's very different from Bethesda selling the game with nude characters

[–] [email protected] 1 points 2 weeks ago (1 children)

My friend works for a startup that does exactly that: it trains AIs on the conversations and responses of a specific person (typically business higher-ups) for "coaching" and "mentoring" purposes. I don't know how well it works.

[–] [email protected] 1 points 2 weeks ago

it probably works pretty well when it's tested and verified instead of unsupervised

and for a small pool of people instead of hundreds of millions of users

[–] [email protected] 19 points 2 weeks ago

What makes you think the LLM will be able to decipher something that already doesn't make sense?

[–] [email protected] 9 points 2 weeks ago (1 children)

To be fair, my father tends to make messages quite incomprehensible by adding irrelevant information all over the place, sometimes going on for multiple screens when it could easily have been a two- or three-sentence message.

Sadly, I think AI would be even worse at picking out which information matters in all that. But I understand why people want it.

As for very active group chats, I'm not gonna read 25+ messages a day, but being able to get the gist of them at a glance would be awesome.

[–] [email protected] 9 points 2 weeks ago (1 children)

And that's exactly the point at which the gist can be manipulated: leaving out context and nudging you toward a different opinion than the one you'd have formed if you'd read the whole thread.

[–] [email protected] 1 points 1 week ago (1 children)

Friend, I think you need to reconsider your world perspective a bit. Not everyone is out to get you all the time.

[–] [email protected] 3 points 1 week ago (1 children)

To be fair, when Facebook was still big, privacy advocates were branded as paranoid. They turned out to be right after all.

[–] [email protected] 1 points 1 week ago

Even if the claim ends up being true, you can literally just read the messages that were sent and see that the summary was wrong.

[–] theotherbelow 6 points 2 weeks ago

I get it: big, stupid, money-hungry tech wants to put itself between you and every other person on earth for the almighty dollar.

How about we use tech to fix problems, not create new ones!?

[–] [email protected] 3 points 2 weeks ago

It does make sense in big groups with tons of irrelevant discussion but only a few messages you actually need to read.

[–] [email protected] 3 points 2 weeks ago* (last edited 2 weeks ago) (2 children)

Ahh, the intellect of the average American on display. So... óóóh, is that a donut?

[–] [email protected] 10 points 2 weeks ago* (last edited 2 weeks ago) (3 children)

People in the US don't use WhatsApp, for the most part.

[–] [email protected] 4 points 2 weeks ago

No, but they develop it

[–] [email protected] 2 points 2 weeks ago

You're right.

They don't use WhatsApp, they use Facebook Messenger.

[–] [email protected] 1 points 2 weeks ago (1 children)

Interesting! What do people in the US generally use?

[–] [email protected] 1 points 1 week ago

SMS/iMessage, typically. I started using WhatsApp because my wife's from South America, where it's commonly used.

[–] [email protected] 5 points 2 weeks ago

Why are you making fun of topological toroidal surfaces, Mr. Smarty Pants?

[–] [email protected] 1 points 2 weeks ago

I have one specifically that is mostly bursts of 2 or 3 hours of chat between whoever is online in the group, with some worthwhile coordination messages mixed in at random. I don't want to read 80 messages about mortgage rates and VTI stocks to find the couple of lines I'm actually interested in, about kid plans for the evening or something I'd actually care to talk about.