I've noticed that certain terms or tags are causing rendering issues with the new model. The outputs are highly unstable and inconsistent, well beyond what I would consider normal variation.
This doesn't appear to be due to new interpretation logic or prompt-strategy shifts. Instead, many of these generations look glitched, underprocessed, or washed out, as if rendering was stopped prematurely. Saturation is often low, and overall image quality is degraded.
I suspect that some of these tags may be acting like "stop codons", halting generation early—possibly similar in effect to using guidance_scale = 1.
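To make the "washed out / low saturation" symptom measurable rather than subjective, a simple per-image saturation score can separate suspected failed generations from normal ones. This is only a sketch: the NumPy-array input format and the 0.15 threshold are my assumptions, to be calibrated against known-good outputs.

```python
import numpy as np

def mean_saturation(img: np.ndarray) -> float:
    """Mean HSV-style saturation of an RGB image array (H, W, 3), values 0..255.

    Per-pixel saturation is (max - min) / max over the RGB channels,
    which matches the HSV definition without a colorsys round trip.
    """
    rgb = img.astype(np.float64)
    cmax = rgb.max(axis=-1)
    cmin = rgb.min(axis=-1)
    # Pure-black pixels get saturation 0 (avoid division by zero).
    sat = np.where(cmax > 0, (cmax - cmin) / np.where(cmax > 0, cmax, 1), 0.0)
    return float(sat.mean())

def looks_washed_out(img: np.ndarray, threshold: float = 0.15) -> bool:
    """Flag an image as a suspected failed generation.

    The threshold is a guess; tune it on outputs you know rendered correctly.
    """
    return mean_saturation(img) < threshold
```

Running this over paired outputs (same seed and settings, suspect tag present vs. absent) turns the low-saturation observation into numbers that can be compared across tag groups.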
From my testing, the problematic tags seem to fall into two groups:
Furry-related terms: furry, fursona, anthro, etc.
Illustration-related terms: drawing, line work, cel shading, etc.
It's possible these tags are being masked or diluted when mixed with stronger, more stable tags, which may explain why some prompts still produce acceptable or mixed results. However, when several of these unstable tags are combined, generation almost always fails, suggesting a cumulative destabilization effect.
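The cumulative-failure pattern could be checked systematically with a small sweep over tag combinations. The `generate` and `score` callables below are placeholders for your actual pipeline and whatever quality metric you prefer (e.g., mean saturation); the base prompt and tag list are illustrative assumptions, not a fixed test set.

```python
from itertools import combinations
from typing import Callable, Dict, Tuple

# Illustrative subset of the suspect tags; extend with your own.
SUSPECT_TAGS = ["furry", "anthro", "drawing", "cel shading"]

def sweep_tag_combinations(
    generate: Callable[[str], object],   # prompt -> image (your pipeline)
    score: Callable[[object], float],    # image -> quality score
    base_prompt: str = "portrait, detailed background",
    max_combo: int = 3,
) -> Dict[Tuple[str, ...], float]:
    """Score every combination of suspect tags, up to max_combo at a time.

    If the cumulative-destabilization hypothesis holds, scores should drop
    as more suspect tags are stacked into the same prompt.
    """
    results: Dict[Tuple[str, ...], float] = {}
    for k in range(1, max_combo + 1):
        for combo in combinations(SUSPECT_TAGS, k):
            prompt = ", ".join((base_prompt, *combo))
            results[combo] = score(generate(prompt))
    return results
```

Plotting score against combination size (with seeds held fixed) would show whether failures scale with the number of unstable tags or hinge on specific pairs.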
By contrast, photography and painting-style tags remain mostly unaffected and render normally.