[–] [email protected] 40 points 1 week ago (12 children)

If he’s not communicating in an explicit and clear way, the AI can’t magically give you the missing context. It will happily make up plausible-sounding bullshit, though.

[–] [email protected] 3 points 1 week ago (11 children)

A poorly designed tool will do that, yes. An effective tool would do the same thing a person could do, except much quicker, and with greater success.

An LLM could be trained on the way a specific person communicates over time, and could be designed to do a forensic breakdown of misspelt words, e.g. checking whether a wrong letter sits next to the intended one on the keyboard, or identifying words that are spelled differently but sound similar phonetically (rough sketch below).

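To make that concrete, here's a rough Python sketch of the two checks I mean: scoring whether a wrong letter is adjacent to the intended key on a QWERTY layout, and comparing words by a simple phonetic code (classic Soundex). The adjacency map and test words are made up for illustration; a real system would feed signals like these to the model rather than hand-rolling them.

```python
# Rough sketch, not production code: two "forensic" signals for typos.

# Neighbouring keys on a QWERTY layout (illustrative, not exhaustive).
QWERTY_NEIGHBOURS = {
    "q": "wa", "w": "qase", "e": "wsdr", "r": "edft", "t": "rfgy",
    "y": "tghu", "u": "yhji", "i": "ujko", "o": "iklp", "p": "ol",
    "a": "qwsz", "s": "awedxz", "d": "serfcx", "f": "drtgvc",
    "g": "ftyhbv", "h": "gyujnb", "j": "huikmn", "k": "jiolm",
    "l": "kop", "z": "asx", "x": "zsdc", "c": "xdfv", "v": "cfgb",
    "b": "vghn", "n": "bhjm", "m": "njk",
}

def keyboard_slip_score(typed: str, candidate: str) -> float:
    """Fraction of differing letters that are adjacent on the keyboard."""
    if len(typed) != len(candidate):
        return 0.0  # only handles same-length substitutions, for simplicity
    diffs = [(t, c) for t, c in zip(typed, candidate) if t != c]
    if not diffs:
        return 1.0
    near = sum(1 for t, c in diffs if c in QWERTY_NEIGHBOURS.get(t, ""))
    return near / len(diffs)

def soundex(word: str) -> str:
    """Classic Soundex: words that sound alike get the same 4-char code."""
    groups = {"bfpv": "1", "cgjkqsxz": "2", "dt": "3",
              "l": "4", "mn": "5", "r": "6"}
    def code(ch):
        return next((v for k, v in groups.items() if ch in k), "")
    word = word.lower()
    out, prev = word[0].upper(), code(word[0])
    for ch in word[1:]:
        c = code(ch)
        if c and c != prev:
            out += c
        prev = c
    return (out + "000")[:4]

print(keyboard_slip_score("cst", "cat"))  # 1.0 -- 's' is right next to 'a'
print(keyboard_slip_score("cst", "cut"))  # 0.0 -- 'u' is nowhere near 's'
print(soundex("grey"), soundex("gray"))   # G600 G600 -- phonetic match
```
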
[–] [email protected] 10 points 1 week ago* (last edited 1 week ago) (8 children)

> An LLM could be trained on the way a specific person communicates over time

Are there any companies doing anything similar to this? From what I've seen, companies avoid this stuff like the plague: their LLMs are always frozen, with no custom training. Training takes a lot of compute, but it also carries a huge risk of the LLM going off the rails and saying things that could get the company into legal trouble or bad publicity. There's also the disk space per customer and the loading time of individual models to worry about.

The only hope for your use case is an LLM with a context window large enough to pull in previous examples from your chat and use them with each request, but that isn't the same thing as training (sketched below).

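For what it's worth, the context-window approach looks something like this. A minimal sketch using the OpenAI Python SDK as an example; the model name, system prompt, and chat history are all placeholders, and any chat-completion API works the same way:

```python
# Sketch: no training involved -- earlier messages from the same person
# are stuffed into every request as few-shot examples.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical pairs of (what they typed, what they meant), pulled from
# chat history.
HISTORY = [
    ("cn u grab th milk on ur way hoem",
     "Can you grab the milk on your way home?"),
    ("mtg moved to 3 dont be laet",
     "Meeting moved to 3, don't be late."),
]

def interpret(garbled: str) -> str:
    messages = [{"role": "system",
                 "content": "Rewrite this person's messages clearly, "
                            "matching how they normally communicate."}]
    for raw, meaning in HISTORY:  # few-shot examples, not fine-tuning
        messages.append({"role": "user", "content": raw})
        messages.append({"role": "assistant", "content": meaning})
    messages.append({"role": "user", "content": garbled})
    resp = client.chat.completions.create(model="gpt-4o-mini",  # placeholder
                                          messages=messages)
    return resp.choices[0].message.content

print(interpret("runing l8, strt without me"))
```
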
[–] [email protected] 1 points 1 week ago (1 children)

My friend works for a startup that does exactly that: it trains AIs on the conversations and responses of a specific person (usually a business higher-up) for "coaching" and "mentoring" purposes. I don't know how well it works.

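If it's anything like the usual setups, the "training" is probably fine-tuning on recorded Q&A pairs in chat-style JSONL, the format several fine-tuning APIs accept. A made-up sketch of what one person's data file might look like (file name and answers are invented):

```python
# Hypothetical per-person fine-tuning data: each line pairs a question
# with the real person's recorded answer.
import json

examples = [
    {"messages": [
        {"role": "user", "content": "How should I handle a missed deadline?"},
        {"role": "assistant",
         "content": "Own it early. Tell the stakeholder before they ask."},
    ]},
    {"messages": [
        {"role": "user", "content": "Any advice on running 1:1s?"},
        {"role": "assistant",
         "content": "Let them set the agenda. Your job is to listen."},
    ]},
]

with open("mentor_finetune.jsonl", "w") as f:  # invented file name
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```
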
[–] [email protected] 1 points 1 week ago

It probably works pretty well when it's tested and verified instead of left unsupervised, and when it serves a small pool of people instead of hundreds of millions of users.
