this post was submitted on 07 Jul 2024
131 points (100.0% liked)

TechTakes

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

[–] [email protected] 60 points 9 months ago (1 children)

At the same time, most participants felt the LLMs did not succeed as a creativity support tool, by producing bland and biased comedy tropes, akin to “cruise ship comedy material from the 1950s, but a bit less racist”.

holy shit that’s a direct quote from the paper

[–] [email protected] 22 points 9 months ago

The phrasing "a bit less racist" suggests a nonzero level of racism in the output, yet the participants also complain about the censorship making the bot refuse to discuss sensitive topics. Sounds like these LLMs can only be boringly racist.

[–] [email protected] 38 points 9 months ago (2 children)

Spam machines are only ever funny or interesting by accident. The more they smooth out the wrinkles, the more creatively useless they become. The tension is sort of fascinating.

Like I've always been interested in generative poetry and other manglings of text, and ChatGPT's so fucking dull compared to putting a sentence through Babelfish a few times.

[–] [email protected] 6 points 9 months ago (1 children)

Honestly, I've gotten more laughs out of messing with Markov chains with my friends than anything ChatGPT could put out.
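(If you've never tried it, a word-level Markov chain is only a handful of lines of Python. Rough sketch only; the corpus filename is made up and nothing here comes from any particular library.)

```python
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each run of `order` words to the words that follow it in the corpus."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def babble(chain, length=30):
    """Random-walk the chain; the accidental nonsense is where the laughs are."""
    state = random.choice(list(chain.keys()))
    out = list(state)
    for _ in range(length):
        followers = chain.get(tuple(out[-len(state):]))
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

corpus = open("group_chat.txt", encoding="utf-8").read()  # hypothetical input file
print(babble(build_chain(corpus)))
```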

[–] [email protected] 6 points 9 months ago

GPT-2 was fun, because it was broken enough to be interesting and amusing.

[–] [email protected] 5 points 9 months ago* (last edited 9 months ago)

Before the big AI boom, I actually did a project where I used InferKit to generate text for the comedy factor, because the unhinged nightmare garbage it spit out was extremely entertaining. I just can't imagine using ChatGPT in the same way; it's so boring.

[–] [email protected] 26 points 9 months ago (1 children)

oh, and these twenty comedians were using LLMs for writing already. They didn't want their names revealed, for some reason.

[–] [email protected] 22 points 9 months ago

The adverse impacts section was just the comedians saying “we’ve already lost friends, everyone hates us” but the conclusion was “here’s how comedians should use our tool.”

[–] [email protected] 25 points 9 months ago (1 children)

I can imagine a comedian using an LLM to check if a joke or punchline has been done before, but that would require the LLM to actually work and give accurate information. Also if you are a comedian using an LLM, you probably don’t actually care about whether or not you are plagiarising someone, so I guess this is all moot.

[–] [email protected] 31 points 9 months ago* (last edited 9 months ago) (1 children)

My favorite LLM move is when you ask for a source for their last response, and instead of saying they aren't capable of providing one, they just invent fictitious URLs.

[–] [email protected] 4 points 9 months ago* (last edited 9 months ago)
[–] [email protected] 16 points 9 months ago (1 children)
[–] [email protected] 12 points 9 months ago* (last edited 9 months ago)

this must be what they mean by woke AI

[–] [email protected] 6 points 9 months ago

He come out, Stu.

[–] Zos_Kia 4 points 9 months ago (6 children)

I've been experimenting with creative writing tools with a bunch of writer friends, and the setup described in this paper is notoriously shit. I mean they come up to ChatGPT on v3.5 (or Bard lmao) and expect it to write comedy? Jeez, talk about setting yourself up for failure. That's like walking up to a junior screenwriter and yelling "GIVE ME A JOKE" at them. I don't understand why people keep repeating that mistake: they design experiments where they expect the model to be the source of creativity, but that's just stupid.

If you want to get output that is not entirely mediocre, you need something like a Dramatron architecture, where you decouple the various tasks (fleshing out characters, outlining at the episode level, outlining at the scene level, writing dialogue, etc.) and maintain internal memory of what is being worked on, roughly like the sketch below. It is non-trivial to set up, but it gets there sometimes; even the authors of this paper recognize that this would probably have produced better results. You also need a user able to provide good ideas that the model can work with; you can't expect the good creative stuff to come from the robot.
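(A minimal sketch of what "decoupled tasks plus shared memory" looks like, just to make the idea concrete: `llm()` is a stand-in for whatever completion call you're using, the prompts are invented for illustration, and actual Dramatron is considerably more elaborate.)

```python
def llm(prompt: str) -> str:
    """Stand-in for whatever completion API you're using (hypothetical)."""
    raise NotImplementedError

def write_episode(logline: str) -> dict:
    # Shared "internal memory" that every later stage reads from.
    memory = {"logline": logline}

    # Stage 1: characters, conditioned only on the logline.
    memory["characters"] = llm(f"Flesh out the main characters for: {logline}")

    # Stage 2: episode-level outline, conditioned on logline + characters.
    memory["outline"] = llm(
        f"Outline an episode of: {logline}\nCharacters:\n{memory['characters']}"
    )

    # Stage 3: scene-level beats, one call per outline line.
    memory["scenes"] = [
        llm(
            "Expand this beat into a scene, staying consistent with these characters:\n"
            f"{memory['characters']}\nBeat: {beat}"
        )
        for beat in memory["outline"].splitlines()
        if beat.strip()
    ]

    # Stage 4: dialogue, written scene by scene against the accumulated context.
    memory["dialogue"] = [
        llm(f"Write the dialogue for this scene:\n{scene}")
        for scene in memory["scenes"]
    ]
    return memory
```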

Instinctively I'd say you have to treat the model like your own junior writer, and how do you make a junior writer useful? By teaching them to "yes, and" in a writing room with better writers (in this case, the user). In that context, with a good, experienced user at the helm, it can definitely bring value. Nothing groundbreaking, but I can see how a refined version of this could help, notably with consistency, story beats, pacing, the boring stuff. GPTs are better critics than they are writers anyway.
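(Same caveat as above: a sketch of the "critic, not writer" division of labour, with `llm` again a hypothetical stand-in and the prompt wording invented.)

```python
def critique_draft(llm, draft: str, beat_sheet: str) -> str:
    """Ask the model to flag consistency, pacing, and beat problems in a human-written draft.

    The human stays the source of the jokes; the model only does the boring editorial pass.
    """
    prompt = (
        "You are a script editor. Do not rewrite the jokes.\n"
        "Check this draft against the beat sheet and list any problems with "
        "consistency, pacing, or missed beats, one per line.\n\n"
        f"Beat sheet:\n{beat_sheet}\n\nDraft:\n{draft}"
    )
    return llm(prompt)
```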

That being said, I never really pursued "pure comedy" with LLMs, as it sounds like a lost battle. In my mind it's kind of like tickling: if a machine pokes your ribs you don't get the tickles; that only works when a human does it. I doubt they can fix that in the short or mid term.

[–] [email protected] 15 points 9 months ago (1 children)

I don't understand why you're getting downvoted. You should read the room, delete your posts, and leave forever. Then you wouldn't be getting downvoted.

[–] Zos_Kia 2 points 9 months ago (1 children)

Am I getting downvoted? It says 3 upvotes / 0 downvotes on my end.

[–] [email protected] 11 points 9 months ago (1 children)

Here, have another invisible downvote.

[–] [email protected] 10 points 9 months ago (1 children)

it had such strong "well all the people in my town don't seem to have a problem with me" energy

(I can see it happening if they ignore offsite downvotes on that lemmy, but yeah)

[–] [email protected] 10 points 9 months ago

Idk if downvotes don't federate at all or if it's homegrown jank, but I've never seen a downvote on another instance's post.

[–] [email protected] 15 points 9 months ago (1 children)

I want to point out that this interminable motherfucker introduced themselves as someone who supposedly does creative writing

[–] [email protected] 14 points 9 months ago (3 children)

I like how you lose faith in your argument the longer your post goes on. Maybe start with the last sentence next time.

[–] [email protected] 13 points 9 months ago (1 children)

comment history also includes the simulation hypothesis and some very eagle-flavoured political analysis

I have a prediction!

[–] Zos_Kia 2 points 9 months ago (1 children)
[–] [email protected] 13 points 9 months ago (2 children)

you would think someone who experiments with creative writing "tools" might understand imagery, but when those "tools" are in fact just 3 GPTs in a trenchcoat, it's not surprising when they don't

[–] [email protected] 9 points 9 months ago

look, maybe the old adage "takes one to know one" could be disproven by this lack of recognition of tools. there might be a paper here!

[–] Zos_Kia 2 points 9 months ago

I legit don't get it. Is it about the US? I mostly speak about France in my political comments, so I'm not sure where they are going with that.

[–] [email protected] 14 points 9 months ago (1 children)

That's a lot of words to say, "You're holding it wrong."

[–] Zos_Kia 2 points 9 months ago

More like "you're trying to paint with a hammer AND you're holding it wrong"

[–] [email protected] 13 points 9 months ago (19 children)

I mean they come up to ChatGPT on v3.5 (or Bard lmao) and expect it to write comedy?

Yeah, these things are supposed to be good at writing, aren't they?

[–] [email protected] 12 points 9 months ago (1 children)

Hey, want some comedy advice? Read the room.
