this post was submitted on 24 May 2025

Perchance - Create a Random Text Generator


⚄︎ Perchance

This is a Lemmy Community for perchance.org, a platform for sharing and creating random text generators.

Feel free to ask for help, share your generators, and start friendly discussions at your leisure :)

This community is mainly for discussions between those who are building generators. For discussions about using generators, especially the popular AI ones, the community-led Casual Perchance forum is likely a more appropriate venue.

See this post for the Complete Guide to Posting Here on the Community!

Rules

1. Please follow the Lemmy.World instance rules.

2. Be kind and friendly.

  • Please be kind to others in this community (and in general), and remember that for many people Perchance is their first experience with coding. We have members for whom English is not their first language, so please take that into account too :)

3. Be thankful to those who try to help you.

  • If you ask a question and someone has made an effort to help you out, please remember to be thankful! Even if they don't manage to help you solve your problem, remember that they're spending time out of their day to try to help a stranger :)

4. Only post about stuff related to perchance.

  • Please only post about Perchance-related things, such as generators on the site, bugs, and the site itself.

5. Refrain from requesting Prompts for the AI Tools.

  • We would like to ask you to refrain from posting requests for help specifically with prompting/achieving certain results with the AI plugins (text-to-image-plugin and ai-text-plugin), e.g. "What is a good prompt for X?" or "How do I achieve X with Y generator?"
  • See Perchance AI FAQ for FAQ about the AI tools.
  • You can ask for help with prompting at the 'sister' community Casual Perchance, which is for more casual discussions.
  • We will still be helping/answering questions about the plugins as long as it is related to building generators with them.

6. Search through the Community Before Posting.

  • Please search through the community posts here (and on Reddit) before posting, to see if a similar post already exists.


Hey people of Perchance and to whoever developed this generator,

I know people keep saying, “The new model is better, just move on,” but I need to say something clearly and honestly: I loved the old model.

The old model was consistent.

If I described a character — like a guy in a blue jumper, red jeans, and purple hair — the old model actually gave me that. It might sound ridiculous, but at least I could trust it to follow the prompt. When I used things like double brackets ((like this)), the model respected my input.

And when I asked for 200 images, the results looked like the same character across the whole batch. It was amazing for making characters, building stories, and exploring different poses or angles. The style was consistent. That mattered to me. That was freedom.

Now with the new model, I try to recreate those characters I used to love and they just don’t look right anymore. The prompts don’t land. The consistency is gone. The faces change, the outfits get altered, and it often feels like the model is doing its own thing no matter what I ask.

I get that the new model might be more advanced technically — smoother lines, better faces, fewer mistakes. But better in one way doesn’t mean better for everyone. Especially not for those of us who care about creative control and character accuracy. Sometimes the older tool fits the job better.

That’s why I’m asking for one thing, and I know I’m not alone here:

Let us choose. Bring back the old model or give us the option to toggle between the old and the new. Keep both. Don’t just replace something people loved.

I’ve seen a lot of people online saying the same thing. People who make comics, visual novels, storyboards, or just love creating characters — we lost something when the old model was removed. The new one might look nice, but it doesn’t offer the same creative control.

This isn’t about resisting change. This is about preserving what worked and giving users a real choice. You made a powerful tool. Let us keep using it the way we loved.

Thanks for reading this. I say it with full respect. Please bring the old model back — or at least give us a way to use it again.

please

all 46 comments
[–] [email protected] 4 points 6 days ago (1 children)

I don't know a whole lot about AI, but when I first came across this site, I thought it was the best site I had ever come across. I started making as many images as I could, for hours at a time, because I knew it was too good to be true. And I was right (as far as my taste goes). I absolutely hate the new model.

[–] [email protected] 1 points 4 days ago
[–] [email protected] 7 points 1 week ago* (last edited 1 week ago)

Just a frustrated opinion. I agree that I would pay for the old version. This thing went from fantastic to absolutely awful.

[–] [email protected] 6 points 1 week ago (1 children)

The way the devs blatantly replaced the old model overnight, like it was worthless or something, shows how little they care about their own community.

[–] [email protected] 3 points 1 week ago (1 children)

tbh that's both right and wrong. The dev had been working on the update for months, and we did get some hints, but not much.

[–] [email protected] 5 points 1 week ago (1 children)

Yeah, but this was not some bugfix or minor change, was it? They replaced the 1.5-year-old core model itself; don't you think they should have made an announcement? Where they're NOW showing 'Notes on new model', it could have been 'Notes on upcoming new model'.

[–] [email protected] 1 points 1 week ago

You are right, but it's always like this; you just didn't notice. Again, you are right, but it's not the first time.

[–] [email protected] 5 points 1 week ago (1 children)

I'm guessing it cost money to make these changes, BUT if I had a website that users absolutely loved, then changed it and "most" users hated those changes, I'd change it back in an instant... "the customer is always right" (D-Fens).

[–] [email protected] 2 points 1 week ago

You're not a customer. You're a user of a free service provided as part of a larger service.

[–] [email protected] 5 points 1 week ago

Have to agree with you... I'd happily pay for the old version... I used to come on here every day to create... now I only come on to read the comments... alas, people very rarely (if ever) listen to the "customer"; they just plough on regardless, afraid of admitting a mistake.

[–] [email protected] 4 points 1 week ago* (last edited 1 week ago)

This is the very last image I created with the old model. Good-ole-classy Perchance

[–] [email protected] 4 points 1 week ago (1 children)

Sorry for jumping into the discussion. I've read both the original post and the replies carefully, and all are very reasonable. Nevertheless, I have the impression that the topic is being circled around without getting straight to the point. I would ask those who rightly defend the wonders and possibilities offered by Flux whether this model is capable of reproducing even a single image resembling the one I posted. The character is the secretary of the detective I invented in my noir stories. If Flux cannot generate the face of this character (an ordinary face, a long nose with a slight bump, etc.), then we've explained what those who (rightly) say they've lost control over their characters actually mean. Flux may be incredible, but if it can't create a face like this one, for me and for many others it will mean having to give up on the characters we've created. So it's not a matter of adapting to the new model; the real question is: what can this model do, and what can't it do? Obviously, I'm waiting to see proof that Flux can, in fact, give us our characters back. Thank you very much.

[–] [email protected] 2 points 1 week ago* (last edited 1 week ago) (2 children)

Calm down, you're rushing and probably don't know how to use the tool. Here's how I did it:

  1. I converted your image into a prompt using the website: https://imageprompt.org/image-to-prompt
  2. I took the prompt and put it in perchance;
  3. I set the casual style;
  4. I chose an image to copy the new prompt and seed;
  5. I generated variations of the same seed, slightly changing the prompt.

Will it be perfect? Of course not. The images would still need to go to the "workbench" as has always been done. But the quality is really much better with the new model. You can generate several times until you find the facial features closest to what you want. You might spend about 20 minutes generating images repeatedly, but once you find the seed, you can manipulate it however you want.

Prompt + seed: A casual photo of Close-up portrait of a woman with long, dark hair. Latina, 25-35 years old. She is wearing a cream-colored, lace-trimmed top. The top has a sweetheart neckline that accentuates her upper body. A silver chain with a dark, rectangular pendant hangs around her neck. Her expression is serious and focused, with direct eye contact. Her gaze is slightly backwards, suggesting contemplation or introspection. Her lips are smiling, and her facial features are softly lit, highlighting her natural beauty. Soft, natural lighting creates subtle shadows and highlights across her face and upper body. The background is a dark, muted brown tone, providing a stark contrast to her light-colored top. The image's perspective is directly in front of the subject, creating a feeling of intimacy and personal connection. The composition is centered, focusing the viewer's attention solely on her. The overall mood is introspective and thoughtful, with a classic realism style.. It's a casual photo. (seed:::172525303)
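To make step 5 above concrete, here is a minimal sketch in plain Python, purely illustrative: the `(seed:::N)` suffix is the syntax from the prompt above, the base text is trimmed from that prompt, and the nose variations are hypothetical examples of "slightly changing the prompt".

```python
# Minimal sketch of step 5: hold the seed fixed and nudge one detail,
# printing a paste-ready prompt per variation. The (seed:::N) suffix
# is the syntax shown above; the variation strings are hypothetical.
BASE = ("A casual photo of Close-up portrait of a woman with long, dark hair. "
        "Latina, 25-35 years old. {detail} It's a casual photo.")
SEED = 172525303

variations = [
    "She has a long nose with a slight bump.",
    "She has a prominent, aquiline nose with a visible hump.",
    "Her nose is long, with a subtle bump on the bridge.",
]

for detail in variations:
    print(BASE.format(detail=detail) + f" (seed:::{SEED})")
```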

[–] [email protected] 2 points 1 week ago (1 children)

> Calm down, you’re rushing and probably don’t know how to use the tool. Here’s how I did it:

Thanks, that's a valuable suggestion and I'll try the tool you linked, but as you can see, the "problem" remains. I should also say something I didn't say before: I'm enthusiastic about certain aspects of Flux. The level of detail, the general quality, the precision: these are important things. You lose the wonder of an AI that seems to "interpret" your command, and other fascinating, artistic things about Stable Diffusion, but in some respects it is incredible. And yet, as you can see, although what you showed me is much better than anything I have managed to do up to this point, Flux did not even remotely manage to reproduce the character's protruding nose, and by "Latina" it apparently means a Mexican woman, not an Italian or a Jewish one, for example. I don't know how to use the tool, and you are absolutely right about that. I'm reading discussions on the StableDiffusion subreddit and around the net, and I ask ChatGPT to make me prompts that never work. I'm trying, but I have every doubt in the world that I will find a seed that matches my needs; and once again, if I tell Flux to make an aquiline nose with a hump, Flux can't do it.

[–] [email protected] 2 points 1 week ago (1 children)

In this case, you have 3 options:

  1. Without using seeds, generate batches of images several times with the same prompt until you find the face you are looking for;
  2. Change the prompt slightly to include "mixture of Mexican and Italian" or something like that; or
  3. Put the name of a famous person in the prompt.

I always try to choose option 2. Both the current and previous models have always given my characters a unique touch. I'm still learning a lot, and this new model has also brought me frustration. However, for those who like to find patterns for writing prompts, the previous model was extremely bad and unpredictable. The current model has greatly reduced this problem, making it much easier to understand the AI's behavior.
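As a small sketch of option 2 from the list above (both the base prompt and the feature phrases are hypothetical examples; the ancestry wording is the one suggested there):

```python
def with_features(base: str, features: list[str]) -> str:
    """Append explicit feature phrases to a base prompt (option 2 above)."""
    return base.rstrip(".") + ", " + ", ".join(features) + "."

# Hypothetical base prompt and feature phrases:
base = "Close-up portrait of a woman with long, dark hair, 25-35 years old"
print(with_features(base, [
    "a mixture of Mexican and Italian ancestry",  # the nudge from option 2
    "an aquiline nose with a slight bump",        # the feature being chased
]))
```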

[–] [email protected] 2 points 1 week ago

Thank you very much, I'm trying and to be honest I'm getting much better results thanks to your advice. Hopefully things will improve in the future, as spam lovers who find it unnatural to have civil conversations to delve into topics they don't know kindly remind us. :)

[–] [email protected] 1 points 1 week ago

Feel free to try Joycaption for captioning : https://lemmy.world/post/30096816

[–] [email protected] 4 points 1 week ago (1 children)

> The old model was consistent.
>
> If I described a character — like a guy in a blue jumper, red jeans, and purple hair — the old model actually gave me that. It might sound ridiculous, but at least I could trust it to follow the prompt.

[images: Prompt / Result]

> When I used things like double brackets ((like this)), the model respected my input.

Well, that was SD syntax, while the new model is Flux. It requires different prompting and doesn't accept the same syntax, from what people have tested. Some have had success reinforcing desired aspects with more adjectives, or even by repeating specific parts of the prompt.
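For illustration, a sketch of the difference (both prompt strings are hypothetical; the bracket weighting is the SD-era syntax referenced above):

```python
# SD-era attention syntax: double brackets weighted a phrase upward.
# Reportedly this is not honored by Flux.
sd_prompt = "a guy in a blue jumper, red jeans, ((purple hair))"

# Flux-era workaround people report: reinforce with adjectives and
# repetition in plain language instead of bracket weights.
flux_prompt = ("a guy in a blue jumper and red jeans, with vivid purple hair; "
               "his hair is a bright, saturated purple")
```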

> Now with the new model, I try to recreate those characters I used to love and they just don’t look right anymore. The prompts don’t land. The consistency is gone. The faces change, the outfits get altered, and it often feels like the model is doing its own thing no matter what I ask.

As I explained in another thread, you can use the seed system to preserve some details of the image while changing others: https://lemmy.world/post/30084425/17214873

With a seed, notice that the pose and general details remain. One of them had glasses on, while others were clean-shaven; but then, the prompt wasn't very descriptive about the face.

[image: Seed1]

If I keep the same seed, but change a detail in the prompt, it preserves a lot of what was there before:

a guy in a blue jumper, red jeans, and purple hair, he is wearing dark sunglasses (seed:::1067698885)

[image: Seed2]

Even then, the result will try to be what you describe. You can be as detailed as you want with the face. On that thread I showed that you can still get similar faces if you describe them.

> Let us choose. Bring back the old model or give us the option to toggle between the old and the new. Keep both. Don’t just replace something people loved.

Keeping two models hosted at once would very likely involve additional costs. While it might be possible, it seems unlikely for that reason.

> I’ve seen a lot of people online saying the same thing. People who make comics, visual novels, storyboards, or just love creating characters — we lost something when the old model was removed. The new one might look nice, but it doesn’t offer the same creative control.

On the Discord server, I've seen people create all of these. A lot of it is a matter of prompting. People on the Discord are very helpful and quite active in experimenting with styles, seeds, and prompts, and I've had a lot of help getting good results there.

With the new model, everyone started on the same footing. We don't know the new prompting best practices yet, but people are experimenting, and many have managed to recreate images they made before.

[–] [email protected] 5 points 1 week ago (2 children)

I understand what you're saying, but that’s not the point. Let me explain properly.

Yes, if I write something like “a guy in a blue jumper, red jeans, and purple hair wearing dark sunglasses,” I get that the new model will try to follow that. That’s not the issue.

The issue isn’t about what the prompt says — it’s about how the characters come out.

With the old model, when I created characters using the same prompt across multiple generations, I got images that looked like the same character every time — same face, same style, same feeling, with only small variations. That’s what I loved. That consistency mattered. I could trust it. It made character creation easy, fun, and powerful for storytelling.

Now with the new model, I use the exact same prompts, same settings, and even the same seed structure — and yet the results look completely different. The style shifts, the faces change, and it feels like I’m getting a new person each time. Even the framing is inconsistent — for example, the old model would show the full torso, while the new one sometimes crops too close, like it’s focusing only on the top half.

Sure, I’ll admit: the new model is prettier. It’s technically cleaner, with sharper rendering and fewer artifacts. But that doesn’t mean it’s better for everyone. For me, the old model’s simplicity and reliability made it far more useful.

I’m not saying throw out the new model. I’m saying: give us the option to choose. Let those of us who found value in the old system keep using what worked for us.

This isn’t about resisting change. It’s about not losing something that genuinely helped creative people get consistent, dependable results — especially for things like comics, visual novels, or animation projects.

Please don’t dismiss this as just a prompting issue. It’s a model behavior issue. And I really hope the devs take this feedback seriously.

[–] [email protected] 4 points 1 week ago (1 children)

> With the old model, when I created characters using the same prompt across multiple generations, I got images that looked like the same character every time — same face, same style, same feeling, with only small variations. That’s what I loved. That consistency mattered. I could trust it. It made character creation easy, fun, and powerful for storytelling.

> Now with the new model, I use the exact same prompts, same settings, and even the same seed structure — and yet the results look completely different. The style shifts, the faces change, and it feels like I’m getting a new person each time. Even the framing is inconsistent — for example, the old model would show the full torso, while the new one sometimes crops too close, like it’s focusing only on the top half.

Please demonstrate this. What are the prompts and seeds you are using here? What results were you expecting? What results did you get? I posted examples previously.

> I’m not saying throw out the new model. I’m saying: give us the option to choose. Let those of us who found value in the old system keep using what worked for us.

I answered this before. To make this request more likely to happen, you need to show that what you got before, or what you want, isn't reasonably achievable with the new model.

> Please don’t dismiss this as just a prompting issue. It’s a model behavior issue. And I really hope the devs take this feedback seriously.

For this to be taken as a model behavior issue, you need to provide information. What are the prompts, seeds, and results you are getting? You are only talking in abstract terms. Please provide some actual examples.

[–] [email protected] 2 points 1 week ago (1 children)

Let me explain where I’m coming from.

When it comes to the old model, I liked the anime style it gave. Not just the general "anime" look — I mean that clean, consistent, almost retro-modern feel it had. Yeah, the new model still looks anime, but it’s way more detailed and painterly. That’s not bad — it’s actually gorgeous — but it doesn’t fit the style I’ve been using for a long time to make my characters.

Here are the two big problems:

  1. The new style doesn’t fit my flow. It’s like if you were animating a whole show in the Kill la Kill style and suddenly, halfway through, someone said, “Let’s switch to Fate/Zero style now.” Sure, both are anime. But they are totally different in tone, shading, energy, and presentation. You just don’t do that mid-project. That’s what the shift to the new model feels like — jarring.

  2. The consistency is gone. With the old model, I could generate 200 images, and while they weren’t identical, they were consistent enough that I could go, “Hmm... not quite... not quite... ooh, that one’s perfect.” Each one felt like a variant of the same person, and that made it easy and fun to find the right frame, pose, or mood.

But with the new model? Forget it. Every image feels like a completely different character. It’s like I’m suddenly in a different anime entirely. That makes it impossible to build a scene, comic, or reference set like I used to.

So yeah — I’m not bashing the new model. It’s beautiful. But it’s like being forced to paint with oil when I just want to use clean inks. All I’m asking is: Give us the option to choose the model that fits the style we built everything around.

That’s all.

[–] [email protected] 2 points 1 week ago

Took me a bit to reply to this. Anyway, if you're not willing to show examples of what you're trying to achieve, there's nothing to see here. You are just being abstract, and that doesn't help prove to anybody that what you want is not achievable with this model.

I have already shown you examples of how to use seeds to achieve consistency, and yet we don't know anything about what you're trying. There's not much to see here as constructive criticism if you're not providing examples of what you tried.

[–] [email protected] 3 points 1 week ago

Running two models for free would be a heavy financial burden for the devs, I guess.

[–] [email protected] 3 points 1 week ago (3 children)

True. I used to use Furry - Oil to create consistent characters for my D&D game. And by consistent, I mean style. I have included examples:

These guys look like they were painted and drawn by the same artist. I use the same prompts now and, well, some of them look like cartoons! That's not consistent! At the very least, tell me what artist references the old model used, 'cause the new ones are no bueno, my friend!

[–] [email protected] 2 points 1 week ago (1 children)

Now they are glitching... Somehow I felt an improvement today; I got a "slight" increase in the chance of usable results.

[–] [email protected] 1 points 1 week ago

If this is the first improvement, then by my calculations, in one month over 500% of images will be keepable.

[–] [email protected] 2 points 1 week ago* (last edited 1 week ago)

What’s been working for me is manually setting the style in the prompt and selecting "No style" under Art Style. I still need to experiment a bit more to get better control, but it feels like I’m on the right track. One thing I noticed right away is that this new model produces far less “garbage” than the previous one. Before, I had to generate a bunch of image batches just to find one I actually liked.
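A small sketch of that approach, with hypothetical subject and style words carried in the prompt itself (so the generator's Art Style dropdown can stay on "No style"):

```python
# Carry the style inside the prompt and leave Art Style = "No style".
# The subject and style words below are hypothetical examples.
subject = "close-up portrait of a woman with long, dark hair"
style = "classic realism, soft natural lighting, muted brown background"
prompt = f"{subject}, {style}"
print(prompt)  # paste into the generator with Art Style set to "No style"
```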

[–] [email protected] 1 points 1 week ago* (last edited 1 week ago)

Furry - Oil seemed to me like it was almost exclusively trained on Chunie (aka Hun). It would even constantly try to dump their signature into a lower corner, just like your examples show, lol.

https://www.furaffinity.net/user/hun/

It was the odd one out, because the other three furry filters clearly used the same LoRA (no idea what that one's background was), and only changed a bunch of hidden prompts to give the images a corresponding look. But then again, Oil was also a late addition.

[–] [email protected] 3 points 1 week ago (1 children)

Wouldn't it be possible to simply have a button that would allow you to switch from one model to another? From the new to the old?

[–] [email protected] 2 points 1 week ago (2 children)

Keeping two models hosted at once would very likely involve additional costs. While it might be possible, it seems unlikely for that reason.

[–] [email protected] 1 points 6 days ago (1 children)

OK, but I don't understand: according to the following link, it is explained that prettyAi already uses several models: "As for now, we know that Perchance is using different models depending on the prompt of the user (specifically 'Furry', 'Photorealistic', and 'Anime' images)" https://rentry.org/perchance-ai-faq#%3A%7E%3Atext=*+The+%60text%2Cspecifically+%27Furry%27%2C+%27Photorealistic So, what should we make of that?

[–] [email protected] 1 points 5 days ago

That rentry looks unofficial and old (not updated for the new version). On the image generator frontend, styles are simply extra words added to your prompt.

While there is a link to a post where the dev confirmed that the old model was actually more than one model (until the base model was updated), I don't think there is any information on whether the new model (Flux) works this way. There is also no detail about the cost, whether those previous models could have been merged into a single one, or whether the Flux model is more expensive than the others combined.

[–] [email protected] 1 points 6 days ago

In the meantime, how many who loved the old version are going to give up on this new one? The incredibly slow waits for images to generate, and the overcomplicated way of using this version (compared to the simplicity of the previous one), are very off-putting... am I better off coming back in 6 months (or longer)?

[–] [email protected] 3 points 1 week ago (1 children)

At first, I was pretty upset about the AI model change. I had already set a workflow for my project... But honestly, this new model does seem better. I’ll keep learning so I can get as close as possible to what I had before. I’m sure in a few months things will feel normal again and everyone will get the hang of this new tool.

[–] [email protected] 2 points 1 week ago

this this this is what im talking about

[–] [email protected] 3 points 1 week ago* (last edited 1 week ago) (1 children)

It likely isn't the model itself. They probably updated the software running the model, likely llama.cpp. There are breaking changes in this software. The biggest change is likely in how the softmax settings were altered to add better repetition avoidance. This reduces the total tokens and some patterns that emerge over longer contexts.

If you have access to some of the softmax settings like temperature and token probabilities try changing these until you find something better that you like again. These settings often need to be drastically altered to get back to something you can tolerate, but the patterns will never be exactly the same.

In my experience, the newer versions of llama.cpp are much more sensitive to the first few tokens. They also do not resume old conversations in the same way, due to how things are cached. Try building things from scratch every time. Alternatively, set the starting seed to get similar results every time, regardless of the per-query generation seed.
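If the backend really were llama.cpp, the knobs this comment refers to would look roughly like this through the llama-cpp-python bindings. This is a sketch under that assumption only: the model path and values are placeholders, and nothing here is confirmed about Perchance's actual stack.

```python
from llama_cpp import Llama

# Placeholder model file; a fixed seed makes repeated runs comparable.
llm = Llama(model_path="model.gguf", seed=42)

out = llm.create_completion(
    "a guy in a blue jumper, red jeans, and purple hair",
    max_tokens=128,
    temperature=0.7,     # lower values = more deterministic sampling
    top_p=0.9,           # nucleus cutoff on token probabilities
    repeat_penalty=1.1,  # the repetition-avoidance knob mentioned above
)
print(out["choices"][0]["text"])
```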

[–] [email protected] 4 points 1 week ago (2 children)

Appreciate the technical insight — I think you’re half right, but still missing the core issue.

Yeah, I get that it might not just be the model itself — changes in things like llama.cpp, token handling, softmax behavior, and temperature tuning could totally affect how the model generates images or text. I'm not saying you’re wrong on that.

But even with tweaking — temperature, repetition penalties, seed control, all of that — what I’m saying is that the feel and functionality of the old model is still missing. Even with the same prompt and same seed, the new system doesn’t give me the same results in terms of styling, framing, and consistency across batches. It's like asking for a toolbox and getting a magic wand — powerful, but unpredictable.

I’m not trying to get exact copies of old patterns — I just want the same level of control and stability I had before. I’ve already tried building from scratch, resetting seed behavior, prompt front-loading, etc. It still doesn’t replicate the experience the old model gave me.

So again — I’m not dismissing the technical updates. But for people like me who rely on visual consistency for characters across dozens of images, the user-facing behavior changed in a way that broke that workflow. That’s what I’m asking to have restored — whether through old model access or a toggle that emulates the old behavior.

[–] [email protected] 2 points 1 week ago (2 children)

Nah, you are absolutely wrong. The new model is a hundred times better than the previous one. It just needs training on more data. Or they could replace it with SDXL. I wouldn't mind paying for SDXL, but the previous SD 1.5 was very bad. I would agree if you said some art styles were masterpieces, but we can get such art styles in this model too after the training finishes, so be patient.

[–] [email protected] 4 points 1 week ago (1 children)

I hear you — but we’re talking about different goals here.

You’re focusing on raw output quality, and I get that. Yes, the new model (like Flux or SDXL) does look cleaner, more polished, and overall more modern. If your goal is one-off images or artistic flair, I totally understand preferring it.

But for people like me — who use these models to create consistent characters across batches for things like comics, visual novels, or storyboarding — the older model had a huge advantage: it stayed consistent.

It wasn’t about the exact prompt. It was about how the results felt connected, like they were from the same world, same artist, same character — with minor differences, not total redesigns every time.

Right now, I’m using the same prompt and seed structure I used before, and I’m getting characters that vary a lot — even with careful tuning. That’s the core of what I’m missing.

Also, saying “wait for training” is fine, but why should we have to wait at all when we already had something that worked? Why not offer both options — the new polished one and the old consistent one?

So no hard feelings, but I’m not “absolutely wrong” just because our use cases are different. I’m just asking for a choice, not a replacement.

[–] [email protected] 2 points 1 week ago* (last edited 1 week ago) (1 children)

How do you expect both to be hosted? That would be double the resource usage and double the cost. Not to mention, the actual training (which is still running and will continue for the next month or so) will dramatically improve results and consistency. It simply doesn't have all its data yet. Until then, it's going to have some issues, but it will get much better.

[–] [email protected] 1 points 1 week ago

wishful thinking

[–] [email protected] 2 points 1 week ago* (last edited 1 week ago)

OK, then state this model's pro aspects, and there have to be more than 4. I know one of them would include furry and cartoons anyway.

[–] [email protected] 2 points 1 week ago* (last edited 1 week ago)

Now imagine them reverting to the old style after all the blamestorm. My Mega will have gigs imported per day 😂

[–] [email protected] 1 points 1 week ago* (last edited 1 week ago)

It's not super complicated to set up your own image generator instance. Civit.ai has good tutorials and all of the models. If a Perchance gen isn't working, try setting up your own.