Hi All,
I was generating a simple image and ran into a strange filter trigger that I thought was worth sharing.
I used the word “choker” in the prompt while referencing clothing — specifically a gothic-style necklace. However, I received this message:
“This generated image may contain themes that are not appropriate for all audiences, such as graphic violence, or sexual content. This can happen due to the randomness and biases of the AI model, or specific words in your prompt.”
Everything in the prompt was fashion-based and SFW.
I swapped the word “choker” for “necklace”, and the image generated just fine.

Seems like “choker” might be flagged due to word associations or overly cautious filtering. Just wanted to flag it in case others run into similar issues!
Try “pet collar”. Works for me.
So we furries use “collar” instead.
Were I the dev, I would also ban the word “earrings”, since that would also mess with people, and I don't like earrings OR chokers. Then I'd laugh evilly with the other crime bosses in my lair as people try to make earrings and chokers but can't.
The prompt says you can change your ‘safety settings’; would that allow it?
The new model's JSON prompting has an NSFW/SFW on/off option when scripting.
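I haven't verified the exact schema, but if the JSON prompting mode does expose such a toggle, a request might look roughly like this (the field names `prompt`, `settings`, and `safety_filter` here are guesses for illustration, not the documented API):

```json
{
  "prompt": "portrait of a woman wearing a black velvet choker, gothic fashion, studio lighting",
  "settings": {
    "safety_filter": "off"
  }
}
```

Check the model's actual scripting docs for the real parameter name before relying on this.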





