this post was submitted on 09 Apr 2024
185 points (100.0% liked)

Technology

[–] [email protected] 66 points 1 year ago (6 children)

I've actually started to recognize the pattern of whether something was written by AI.

It's hard to describe, but it's like an uncanny valley of quality, like when someone uses flowery SAT words to zhuzh up their paper's word count, but somehow even more so.

It's like the writing will occasionally pause to comment on itself and the dramatic effect it's trying to achieve.

[–] [email protected] 19 points 1 year ago

Yeah, it's called bullshitting. It's the way lots of people are encouraged to write in high school, when the goal is to see whether the student can produce a large amount of prose with minimal grammatical errors.

But once you get to post-secondary, your writing is expected to actually have content and to express that content fairly concisely. And AI falls on its face trying to do that.

[–] [email protected] 19 points 1 year ago (1 children)

Yeah, this is true! It likes to summarize things at the end in a stereotypical format.

[–] [email protected] 4 points 1 year ago (1 children)

It's not a bad format either; AIs seem to favor the five-paragraph essay format above all others, even for casual conversations.

[–] [email protected] 15 points 1 year ago (1 children)

The LLM isn't really thinking; it's autocomplete trained so that the average person would be fooled into thinking the text was produced by another human.

I'm not surprised it has flaws like that.

BTW, here on Lemmy there are communities with AI pictures. Someone created a similar community, but with art created by humans.

While the AI results are very good, when you start looking at them and comparing them with non-AI art, you start to see that even though each AI piece is unique, the AI still produces cookie-cutter results.

[–] [email protected] 7 points 1 year ago (1 children)

I have an issue with using AI to write my resume. I just want it to clean up my grammar and maybe rephrase a few things in a way I wouldn't, because I don't do the words real good. But I always end up with something that reads like I paid some influencer manager to write it. I write 90% of it myself, so it's all accurate and doesn't have AI errors. But it's just so obviously too good.

[–] [email protected] 9 points 1 year ago

You are putting yourself down unnecessarily. You want your resume to talk you up. Whoever reads it is going to assume you embellished anyway, so if you write it plainly, they'll think you're unqualified or just don't know how to write a resume.

[–] [email protected] 4 points 1 year ago (4 children)

Writing papers is archaic and needs to go. College education needs to move with the times. It's useful in doctorate work, but everything below that can be skipped.

[–] [email protected] 21 points 1 year ago

Learning to write is how a person begins to organize their thoughts, be persuasive, and evaluate conflicting sources.

It's maybe the most important thing someone can learn.

[–] [email protected] 11 points 1 year ago (1 children)

The trouble is that if it's skipped at lower levels, doctorate students won't know how to do it anymore.

[–] [email protected] 7 points 1 year ago

Are they going to know how to do it now if they're all just Chat GPTing it?

Clearly we need some alternative way to demonstrate mastery of subject matter. I've seen some folks suggest going back to pen-and-paper writing, but part of me wonders if the right approach is to lean in and start teaching what students should be querying and how to check the output for correctness. Honestly, though, that still requires being able to tell whether someone handed in something they worked on themselves at all, or just had something spit their work out for them.

My mind goes to the oral defense: have students answer questions about what they've submitted, to see whether they familiarized themselves with the subject matter before cooking up their submission. But that feels unfair to students with stage anxiety, even if you limit these kinds of papers to once a year per class or something. Maybe something more like an interview, with accommodations for students prone to social panic?

[–] [email protected] 9 points 1 year ago* (last edited 1 year ago) (1 children)

I'm in software engineering. One would think English would be a useless class for my major, yet at work I still have to write a lot of documents: preparing new features, explaining existing ones, writing instructions for others, etc.

BTW: when using AI to write essays, you generally have a well-known subject that many people have already written about, and all of that was used to train it.

With technical writing, you're generally describing something brand new and unique, so you won't be able to make AI write it for you.

[–] [email protected] 6 points 1 year ago

When I come across a solid dev who is also a solid writer, it's like they have superpowers. Being able to write effectively is so important.

[–] [email protected] 4 points 1 year ago

You can't have kids go through school never writing papers and then expect them to get to graduate school and churn out long, well-written papers.

[–] ReallyActuallyFrankenstein 47 points 1 year ago (3 children)

I've started getting AI-written emails at my job. I can spot them within the first sentence, they don't move the discussion forward at all, and I just have to write another email giving them the courtesy they didn't give me and explain why what they "wrote" doesn't help.

Can someone tell me, am I a boomer for being offended any time someone sends me AI-written garbage? Is this how the generations will split?

[–] [email protected] 26 points 1 year ago (1 children)

Lesson I've learned: email is for tracking, confirmation, updates, and distributing info, not for decision-making or discussion. Do those on the phone or in meetings, then follow up with confirmation emails.

So when someone sends a nonsense email, call them to clarify. They'll eventually get tired of you calling every time they send their crappy emails.

[–] [email protected] 27 points 1 year ago (2 children)

I disagree about the purpose of email. I end most meetings thinking to myself, "That last hour could have been accomplished in a brief email."

[–] [email protected] 13 points 1 year ago (1 children)

I think you're both right. A lot of meetings are one person talking and the others listening, that could have been an email. Actual back-and-forth discussion needs to be verbal though, otherwise what could be resolved in 10 minutes takes a week.

[–] [email protected] 10 points 1 year ago* (last edited 1 year ago) (1 children)

am I a boomer for being offended any time someone sends me AI-written garbage?

Yes.

But also — why are you doing them any courtesies? Clearly the other person hasn't spent any time on the email they sent you. Don't waste time with a response - just archive the email and move on with your life.

Large language models are extremely powerful tools that can be used to enhance almost anything, including garbage, but they can also enhance quality work. My advice: don't waste your time on people producing garbage, but be open and willing to work with anyone who uses AI to help them write quality content.

For example, if someone doesn't speak English as a first language, an LLM can really help them out by highlighting grammatical errors or unclear sentences. You should encourage people to use AI for things like that.

[–] ReallyActuallyFrankenstein 23 points 1 year ago (1 children)

But also — why are you doing them any courtesies? Clearly the other person hasn't spent any time on the email they sent you. Don't waste time with a response - just archive the email and move on with your life.

That'd be nice! But that's not how it works. I can't just ignore a response. The project still needs to move forward, and if they've successfully mimicked a "response", even an unhelpful one, it's now my duty to reply or I'm the one holding things up.

I'm sure someone out there is using them in a way that helps, but I haven't seen it yet in the wild.

[–] [email protected] 13 points 1 year ago (1 children)

I'm sure someone out there is using them in a way that helps, but I haven't seen it yet in the wild.

That's because those responses are indistinguishable from individually written ones. I know people who use chatGPT or other LLMs to help them write things, but it takes the same amount of time. You just have more time to improve it, so it's better quality than you would write alone.

The key is that you have to use your brain more to pick and choose what to say. It's just like predictive text, but for whole paragraphs. Would you write a text message just by clicking on the center word on your predictive text keyboard? It would end up nonsensical.

[–] ReallyActuallyFrankenstein 6 points 1 year ago (6 children)

I believe that in theory. But I've tried Mixtral and Copilot (which I believe is based on ChatGPT) on some test items (e.g., "respond to this..." and "write an email listing this..." type queries), and maybe it's unique to my job, but what they spit out would take more work to revise than it would take to write from scratch at the same quality level.

It's better than the bottom 20% of communicators, but most professionals are above that threshold, so the drop in quality is very apparent. Maybe we're talking about different sample sets.

[–] [email protected] 38 points 1 year ago* (last edited 1 year ago) (7 children)

Unexpected pencil and paper test comeback

[–] [email protected] 13 points 1 year ago* (last edited 1 year ago)

Already happening. My kid in high school has had more tests and papers required to be handwritten this year.

And yes, TurnItIn legitimately caught him writing a paper with AI. Even the best kids make the stupid/lazy choice.

[–] [email protected] 11 points 1 year ago

When I was in college (2000-2004), we wrote our long papers on computers but we had what were called “blue books” for tests that were like mini notebooks. And many of the tests were basically, “Here is the topic. Write for up to an hour.”

And now my hand cramps if I write anything longer than a check. I can also type quickly enough that it basically matches the speed of my train of thoughts but actually writing cursive with a pen now, I get distracted and think, “Wait, how does a cursive capital ‘G’ go? Oh yeah. Hold on. What was I going to write?”

I pity the kids who have always typed, for what their hands will go through on written tests.

[–] [email protected] 36 points 1 year ago (10 children)

Machine learning tool used by people too lazy to do their actual job accuses everyone else of using machine learning tools.

[–] [email protected] 9 points 1 year ago

Yeah that's pretty funny given the circumstances. "Our AI found your AI." Cool, so maybe none of this is working as intended. I'd be willing to bet nothing changes but the punishments for students.

[–] [email protected] 33 points 1 year ago (3 children)

I have it write all my emails. I’m so productive and everyone loves them. That or they’re also using ChatGPT, and it’s just two computers flattering each other.

[–] [email protected] 21 points 1 year ago

I had it write an operation manual for a client I particularly hate. Told it to make it sound condescending by dumbing it down just to the point where I could deny it. The first few times it just sounded like a 5th grade teacher talking to a kid while in a bad mood, but eventually it figured out if it just repeated itself enough it got the effect I wanted.

Things like: user is to disconnect power before attempting to repair. It is vital that the step of disconnecting power before attempting to repair is carried out.

[–] [email protected] 21 points 1 year ago (2 children)

Someone posted to the class discussion forum with the bit about being an AI bot still included.

I wish it was a joke.

I didn't do great in that class, but that was me getting 70% for not wanting to try to explain a mathematical concept in 500 words! They won't take that away from me.

[–] [email protected] 9 points 1 year ago (1 children)

I still have issues with such restrictions. I mean, why 500 words if you can explain it in 100?

[–] [email protected] 5 points 1 year ago* (last edited 1 year ago) (1 children)

To force elaboration while staying on point. Details are just as important to writing as conciseness.

[–] [email protected] 4 points 1 year ago

Then give marks for elaboration instead.

[–] [email protected] 8 points 1 year ago (1 children)

I had a student write me a chatgpt canned answer, prompt included.

[–] [email protected] 6 points 1 year ago

That's a good one. I once gave an assignment for students to write an original poem. One student submitted The Charge of the Light Brigade by Tennyson and claimed it was his own. These were middle school kids so he didn't realize how famous the poem is. This shit has been happening forever. LLMs are another phase in the never-ending arms race between teachers and students who want to cheat.

[–] [email protected] 14 points 1 year ago (1 children)

And nothing of value was produced.

[–] [email protected] 9 points 1 year ago

To be fair, that value didn't change much from pre-AI.

[–] [email protected] 12 points 1 year ago (1 children)

Utterly unsurprising, given that very few students are actually interested in learning.

[–] [email protected] 6 points 1 year ago

And those papers get used as training data for next iteration of AI. Reinforcement learning!

[–] [email protected] 5 points 1 year ago

Students? Even teachers are doing it...

[–] [email protected] 5 points 1 year ago

Good. Academia lost its way anyway.
