justpassing

@[email protected]


justpassing ,

What you are describing may be suited for the AI Story Generator, assuming you start the run with the instructions left totally blank and use the "What should happen next?" box to perform actions, declaring them through any of the characters that appear in the run.

An even more austere version would be the Prompt Tester: give it a minimal instruction like "Make me a random adventure", and to continue, paste the contents of the output box into the input box with the extra instruction "Continue this adventure when <what you are meant to perform next>".

I should give you a fair warning given how detailed the story you are posting as a bounty is, as I imagine you expect that degree of depth in your runs. With the current state of the model, such projects are nearly impossible and would lead to extreme frustration. The best use of the current model is short 1hr-2hr projects where you aim for a laugh, not quality or consistency.

If you do have the resources, I'd suggest checking the code of the aforementioned generators, as well as the page of the ai-text-plugin, to see how to program them, and just run your project locally using the resources TRBoom listed earlier. I've heard that SillyTavern is also a good alternative. Then again, this all depends on how much hardware you are willing to use.

justpassing ,

This is a known "bug" that has been around for a long time, even from the times of Llama, and one of the reasons Llama was axed: it did exactly this, but more often than the current model. The current one still does it; at the time of release it was rare, but I guess it has become more frequent after several updates. I still find it rare, but the most characters I've had at the same time in the current model was five, so that is just my own experience.

There is a way to dampen this though, which partially worked in Llama. Keep in mind, this is just a "dampener", as I suspect that due to the way LLMs work, every model in existence will fall to this.

When writing the descriptions of the characters, use three dashes (---) to separate each character. For example:

---
# Character A
<Your description here>
---
# Character B
<Your description here>
---
# Character C
<Your description here>

This is not a fix, but it should make it rarer. Hope this helps though.

justpassing ,

You can, but it is not as straightforward as you may think.

If you press the edit button, on the left hand side of the code, at Line 500 you may find something like this:

return `>>> FULL TEXT of ${letterLabel}: ${messagesText}\n>>> SUMMARY of ${letterLabel}: ${summary}`;

From there onward you'll see a handful of instructions in plain English telling the model to generate a summary, in the vein of: "Your task is to generate some text and then a 'SUMMARY' of that text, and then do that a few more times..." and so on.

Since this instruction is passed in English, the output will be in English as well. If you want to maintain everything in German, you must translate this instruction to German manually.
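As a sketch of what that manual translation could look like (the variable names come from the snippet above, but the wrapper function and the sample values here are made up for illustration):

```javascript
// Hypothetical sketch of the translated summary line. The variables
// letterLabel, messagesText and summary are from the snippet above;
// wrapping them in a function is just for illustration.
function buildSummaryBlock(letterLabel, messagesText, summary) {
  // English original:
  // return `>>> FULL TEXT of ${letterLabel}: ${messagesText}\n>>> SUMMARY of ${letterLabel}: ${summary}`;
  // German version, so the summary step is primed in German too:
  return `>>> VOLLSTÄNDIGER TEXT von ${letterLabel}: ${messagesText}\n>>> ZUSAMMENFASSUNG von ${letterLabel}: ${summary}`;
}

console.log(buildSummaryBlock("Kapitel 1", "Es war einmal...", "Ein Anfang."));
```

Keep in mind the plain-English instructions around that line would need the same treatment.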

Now, you'd be surprised, but the summaries may not be the culprit of your run randomly switching to English, as this principle applies to how the normal instructions are passed. For example, in Line 7291 of the right hand side of the code, you'll find this:

if(generalWritingInstructions === "@roleplay1") {

And below it, several instructions in plain English that tell the model how to direct the story. This and several other instructions are passed every time you press "Send" or the return key, so if you want to be completely sure that your text is never in English, you may need to translate all of these instructions as well.

However, something that worked in the past (though I personally have not tested it after the many updates this model has undergone, so I can't assure it still works) is that in the Custom Roleplay Style box you can write, in English, a prime instruction like "The whole text, story, RP MUST be in German (or your desired language)", and it would work without the need to translate everything.

Granted, this will not change the language of the summaries, as that instruction is passed separately, but that may not affect the output that matters to you.

Hope that helps.

justpassing ,

... and I took this personally. 😆

Jokes aside, what we all are seeing is something akin to the demonstration in this video:

https://www.youtube.com/shorts/WP5_XJY_P0Q

A theory of why this happens is that the model can't find a proper way to extrapolate a large input in a coherent way, so it picks random connections, causing it to "speak in tongues".

In the video, the exploit is to make the model think it gave outlandish advice as if it were legitimate, so it "short circuits" and just gargles random data.

In the case of perchance... I'd be lying if I said I know exactly why it now happens often while in the past it was rare (mind you, Llama exhibited this too in very niche cases at 20Mb+, and the current model at release at around 2Mb+; don't quote me on that, I'm going from memory). My theory is that the model is being "hyper-trained", so it fixates only and exclusively on the new training data and not the original data bank it had from the factory. Again, this is just my theory; the opposite could be true: if this model is a "clean" one but with a default language that is not English, it may be struggling with large inputs. Then again, I bet more on the former given how this model behaved at release compared to today.

Luckily, the workaround is the same as how to cause this artificially: edit the "tongue speak" out and carry on until the model can no longer link random parts of its database. It is extremely annoying, but not impossible to deal with, unless it gets worse and all inputs cause this. In that case, the model would be broken beyond repair, and I hope we don't get there.

justpassing ,

This sub community may be a good fit, but the active community here is small, so no idea if you'd get as much feedback as you desire.

Consider making a repository of your own in services like GitHub or Codeberg and just linking your work in a post, that may make things easier for you.

Good luck. 😄

justpassing ,

The current model has several biases and it's not perfect, but what you seem to be getting is an extreme version of known problems, and there are many workarounds, as "chill" runs with "happy and sunshine" characters are possible.

It would help to know exactly what you are prompting in order to provide aid, as with the default templates (Ike, Li Jung, Quinn, etc.) I can't get exactly what you describe from the get-go.

justpassing ,

Alright, assuming you are using AI Chat, prompt his character description as follows:

  • Name: Rob
  • Race: Cyclops
  • Appearance: Tall, very slim, blue skin, single magenta colored eye. Lanky frame, pink hands, large head and medium-length auburn hair that partially covers his face. Wears red shorts, a yellow crop-top T-shirt, and dark orange shoes.
  • Personality: Outgoing, sociable, easy-going, and enthusiastic.
  • Source: The Amazing World of Gumball

The reason why your character is always a bully and you are locked into violence is this:

...and kick him down a manhole, he shows that he can easily forgive by temporarily catching their DVD from the sewers. However, he also demonstrated a sense of entitlement, resentment, and irritability, initially being argumentative...

This new model is not like Llama, where you could leave stuff in the background for "later" or as an "explanation". Whatever exists in any entry will be used when relevant, and due to the model's bias, you are forcing your character into reacting as you described there.

Now, if you REALLY need the "got kicked in the ass and will be irritable and resentful" angle, you can have it by adding it to the personality when you deem it appropriate, but remember that under the new model's lens, a nice character cannot exist in a world that is not nice to it, locking you into a bad route.

If you really wish to lock the model in further, write a small interaction with your Character before letting it loose in the world. You've seen how the default character templates include some Narrator or Character lines; nothing stops you from giving yourself a head start where you force a nice interaction between you and the Character before the model takes over.

Hope this helps, and good luck in your runs.

justpassing ,

Okay, then it is the "luck of the draw". Keep in mind that this model has its own bias, so the less "evidence" it has, the more it will try to pull you toward a state you may not want.

If in your logs this happens at the mark of the fourth post, that means the context you are giving it is 1/4 likely to link your contents to a story you don't want. Simply erase that message and reroll until you get something you like. That will reduce the random chance of derailing as you progress.

Keep in mind that this all depends on how much you allow the model to modify your run and how many "tools" you give it. A nice character cannot exist in a "violent world". And since the bias lies elsewhere, if you allow an "evil" character and try to "convert it", unless you do some heavy workarounds, the model will resist, as it would not make sense in the context.

The opposite is also true: after the last updates, if your story is too nice-oriented, you won't be able to turn it "organically" into a violent run unless you explicitly add the violence. And even then, there are chances of the model returning you to sunshine and rainbows.

Maybe you are trying to go for a realistic story with a balance between the two; the problem is that the model will refuse to do this and just stick with the run at hand, so the best approach is to actually have the story in mind and only let the model fill the gaps via the "What happens next" and reminder boxes.

Hope that helps. If you have a more particular problem, do ask; there are a million workarounds with the current model, and since we were forced into it, the best we can do is adapt.

justpassing OP ,

Maybe you are missing the point of the current post, but I'm not asking for Llama; rather, I'm warning that updates after November 23rd made the model significantly worse and we are on a path toward "Llama-like" behavior, as you pointed out in point 2 of your original reply. This is the benchmark I'm using: Basti0n : Feedback for the dev! (About the AI text model) in Perchance - Create a Random Text Generator

In my personal opinion, the model was at its best around October 10th (based on an old log I have) and around November 23rd (as the linked post suggests). It has suffered degradation ever since, and I fear that as time progresses, what is possible today will not be.

To first address the points of your original reply, using your numbering:

  1. Pre-October 10th, the model's "intelligence" rivaled commercial ChatGPT. Pre-November 23rd, we still had great accuracy on borrowed facts and consistency. Today, most things get diluted in the training data. This can be tested with ease with the default character templates (i.e., what you detected as characters randomly gaining tails).
  2. This is a consequence of the caricaturization phenomenon described here, and it got worse post-November. Some would consider this a feature, others a downgrade; I'm on the fence, to be truthful.
  3. I suspect this is a problem of over-training the model to "patch" it instead of doing a clean reset. Not much we can do about this as the end users. Workarounds are described above.
  4. This was flawless prior to October 10th. Today this "metagaming" stops working 10 outputs after you introduce it, unless you railroad it.
  5. This is just a consequence of your setting. War settings require you to fight something, so the model will provide. How much you allow depends on your context; if you let the model escalate too much, it will turn into a mess.

Regarding the points of your second reply:

  1. This is correct, but past November 23rd there is a general tone-down in runs that are meant to be "dark" by design. You may get instances of your Char or the world in AI RPG trying to come up with an "outlandish peaceful resolution" that makes no sense in the scenario. This is rare, but it is new in the model and shows a trend.
  2. Also correct; then again, I warn about the model losing this capacity as updates happen, as most of the "mystic" concepts are starting to get diluted into "Whimsyland" in the last updates.
  3. After November 23rd, this is not correct. Even if you reference a franchise and a specific power/weapon, there are high chances of the model replacing it with something more engraved in its training. That being said, we are talking about LLM interpretations of complex definitions, so I don't expect any degree of accuracy, nor is it worth pursuing when we as end users can edit the specifics in a run.
  4. "Lol u die" was the default in September. By October this was adjusted nicely, then it dipped again and improved back in November. Today it is reasonable, but I still warn that the model may become a "tree hugger" after the next update due to how the data is being handled.

There is a reason I make these posts not only describing problems but also providing workarounds. We, as users, have a degree of responsibility in how we direct the model, and there are things that are very possible.

My biggest gripe with the model, however, is that no matter where the bias is (nice, neutral, or dark), a run reaches "dementia mode" at a lower threshold. I am willing to bet that today it is not possible to get an AI RPG run past 40 inputs unless one does heavy rerolling and cleaning of the log. This is my fear: that by trying to cater to everyone, the model ends up producing absolutely nothing.

The current model shows potential, and that's why it would be sad to see it end up as something that fits no one. Personally, I still believe a reset with this same model but new training data is required.

justpassing OP ,

Sorry for the late reply, and thanks, actually. My memory is not that great; I just try to keep things in order with my own logs to see what is doable and what is not, mainly because this model has undergone more changes than people report, so the strategies to get a proper run change quickly too (i.e., my old guide is completely useless as a guide today).

My guess with Cinder in your particular case is: yes, it is caricaturization. I have not explained this in the current post, but it was in the prior one, and the full explanation is as follows:

Behind every generator, there is a set of instructions ready to be passed to the model before it generates text. In the case of AI RPG, they go along the lines of "Your task is to create an interactive text adventure where the player is..." and so on. That's why if you input absolutely nothing and press "Generate", you'll still get an "adventure", which is the model's current "default" and a good way to know where its bias is.

Now, after you make your first input, even with instructions and lore, pressing enter to continue the story passes the whole instruction set with your lore PLUS the story at hand. In the code, there is a section that goes along the lines of "This is what happened so far: <your log here>, continue the story".

If you follow how this goes, the more you advance the story, the more you are feeding the model AI-generated text, which will only grow larger and larger to the point that it dominates the custom-made text, causing something like what is shown in this video, but with text instead of images.

This is the reason I call this "caricaturization". Llama did the same, so all stories would eventually follow a single format. The current model has more formats, but they are limited, so there is a chance that your setting at that point was "nice enough" that the model decided Cinder's behavior would not match the in-lore behavior.

No model is safe from this phenomenon due to how the instructions are passed. This is another thing I warn about with the current model, as this effect was excruciating past the 1Mb mark of log size at release, while today you can see it happening in a 50kb log if you are not careful. Again, there are workarounds described in this post, so I hope that helps!

justpassing OP ,

That is a tricky question, since it depends a lot on the type of run you are doing and how long it is. Since the model is now (in my opinion) overloaded with new training data, the ideal is to keep all descriptions as terse and succinct as possible. There are, however, a couple of exceptions to this rule you can use to your advantage.

Ideally, you only want to place information that is always relevant: your goal, the main enemy, inventory if you have one, your current location, etc. However, as described in this guide, you can use it to your advantage to railroad your run onto a path and change the setting (i.e., get a breather scene in a war-ridden run). You can theoretically put whole character sheets with detailed personalities in the Info Tracker. The danger of doing this is that it may end up taking precedence over the log itself, and the personality of a single character will permeate the entire world. This is what I describe in the current guide as the "elephant in the room problem" in the "Descriptions and settings" section.

If your run comes from a known IP like World of Warcraft, sometimes all you need to get a more grounded run is to add the magic line Source: World of Warcraft in the Info Tracker, and that will automatically load most of the lore for that run in one go without any more tokens. With mistakes within reason, but it saves space.

Now, if you want to know why the Info Tracker works so well, it is due to how the instructions are passed. Order matters for LLMs, as "Write a story about a cat in the Caribbean" is not the same as "In the Caribbean there is a cat, write his story". The last part of the input will always take precedence, so the instructions you place in the Info Tracker are passed AFTER the whole log, while the Lore box (the one above the log itself) is passed BEFORE the log.
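A minimal sketch of that ordering (the function and glue text are my assumptions about a typical pipeline, not the generator's actual code):

```javascript
// Assumed assembly order, for illustration only: Lore goes BEFORE the log,
// the Info Tracker goes AFTER it, so the tracker sits closest to the end of
// the prompt and therefore carries the most weight.
function assemblePrompt(loreBox, log, infoTracker, userInput) {
  return [loreBox, log, infoTracker, userInput].join("\n\n");
}

const prompt = assemblePrompt(
  "Lore: a war-torn kingdom ruled by a paranoid queen.",
  "<the story log so far>",
  "Info Tracker: goal = find the relic; location = ruined chapel.",
  "I search the altar."
);

// The Info Tracker text lands after the log, i.e. later in the prompt:
console.log(prompt.indexOf("Info Tracker") > prompt.indexOf("story log")); // true
```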

Under this logic there is a slight potential issue when overloading the Info Tracker: the model may decide to ignore the log and your actual input (i.e., the last thing you said or did to continue the run) in favor of continuing the story into something that fits the instructions in the Info Tracker. So while this is indeed a very powerful tool to lead the run, abusing it may cause this unwanted "bug".

My advice is to place all information considered "flavor" in the Lore box, that is, the overall world, character sheets, etc., while using the Info Tracker to "track" things happening at the point you are at in the run, keeping it dynamic. You can use it to avoid bad caricaturization; just add, for example in the case of Cinder, "Cinder is a ruthless leader" or similar to provide a nudge while keeping the main information in the Lore box.

There are a million tricks with this model, some new, some inherited from Llama. So again, what is "too much" becomes evident when the problem I just described becomes prevalent, and this depends a lot on the run itself and how long it is.

Hope that helps! 😆

justpassing ,

There is a better explanation for the behavior you are experiencing, and yes, it is one of, if not the, biggest hurdle the new model has yet to overcome: you have hit a log long enough that the model is starting to make a word salad of its past inputs as it "inbreeds".

What I mean by this is something explained before: for generators such as AI Chat and ACC, the input will be roughly 70% AI-made and only 30% handwritten (95%-5% in AI RPG, which crashes faster), because the whole log is an input for the next output. Of course, the shorter the log is, the less you'll feel the effect of the model being insufferable, because you still have the long instruction block "holding back" the manic behavior.
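To illustrate the drift with made-up numbers (the per-turn character counts below are invented for illustration; only the quoted shares come from experience):

```javascript
// Toy model of the "inbreeding" ratio: a fixed handwritten block
// (instructions + lore) plus, per turn, a short user input and a much
// longer AI reply. All sizes here are invented for illustration.
function aiShare(turns, handwrittenChars = 5000, userChars = 100, aiChars = 700) {
  const total = handwrittenChars + turns * (userChars + aiChars);
  return (turns * aiChars) / total;
}

console.log(aiShare(5).toFixed(2));  // "0.39": early on, handwritten text still weighs in
console.log(aiShare(50).toFixed(2)); // "0.78": later, the model is mostly reading itself
```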

I agree, this is something that has to be worked on from the development side; otherwise, generators such as AI Chat or Story Generator are rendered short-lived, as the point of them is to grow progressively, and as of today, instability can happen as soon as 150kB-200kB, significantly lower than what this model was able to hold in the past. However, a temporary fix on our side is to just "partition" your log/story. Meaning:

  • Plan and start your run as usual.
  • Save constantly, monitoring the size of the log.
  • When you hit the 100kB mark, try to reach a point where you can "start over", i.e., a point from which you can keep moving without requiring the prior context.
  • Make a copy, delete everything prior to that desired state, load the save, and continue pretending that nothing happened.
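The steps above can be sketched as a small helper (the data shapes are assumptions for illustration; the real save is whatever the generator exports, and you should always keep the full copy):

```javascript
// Sketch of the partition idea: once the log passes a size threshold,
// keep only the tail starting at a chosen "restart" point.
// Assumed plain-text log; always back up the full log before cutting!
function partitionLog(logText, restartMarker, thresholdBytes = 100 * 1024) {
  const sizeBytes = new TextEncoder().encode(logText).length;
  if (sizeBytes < thresholdBytes) return logText; // still small enough, keep as-is
  const cut = logText.indexOf(restartMarker);
  return cut >= 0 ? logText.slice(cut) : logText; // marker not found: leave untouched
}

const trimmed = partitionLog("old scene... NEW CHAPTER: the heist begins", "NEW CHAPTER", 10);
console.log(trimmed); // "NEW CHAPTER: the heist begins"
```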

That will keep the model "fresh" at the cost of losing "memory", which can be worked around, as you can update the bios or instructions, which will now have better chances of working under a clean slate.

It is not the best way to work around this, but it is better than wrestling with all the nonsense that the model will produce past the 250kB threshold.

Hope that helps and... also hope that a future update makes the model more stable rather than more unstable. At least one thing that was fixed, and for which the dev deserves more credit, is that the English has improved significantly compared with the first release, in terms of grammar, content, and consistency. I know, past the 250kB it is "allegories" or "crazy man ramblings", but... it is good English! 😅

justpassing ,

Correct, that's what I implied, since otherwise, past the 1Mb mark you'll experience "groundhog day", unable to escape the loop no matter what you do.

Now... let me tell you buddy, you've just scratched the tip of the iceberg with the model's new obsessions. Just to showcase a couple:

  • Knuckles turning white (a classic you quoted).
  • The ambient smelling like ozone and petrichor (it always rains btw).
  • It always smells or tastes of regret and bad decisions.
  • The bot or an NPC will always lean to whisper something conspiratorially.
  • Eyes gleam with mischief very often.
  • Predatory amusement seems to be a normal mood no matter the context.
  • Some dialogue constructions are "cursed": if you let one slide, it will repeat ad nauseam:
    • "Tell me, <text>"
    • "Though I suppose <text>"
  • Don't even let me get started on the "resonance" or "crystallization" rabbit hole...

You are on the money with one thing: all this is a product of the training data, and not even the data that comes pre-packed with DeepSeek (I still hold that this is the current model being used; if I'm wrong, I'll gladly accept the failure of my prediction), but of the dataset being used to re-train the model into working for the dev's ends. For example, the "knuckles turning white" phrase appeared rarely with the old Llama model; it was a one-in-a-hundred occurrence, as the model didn't care for that construction and rather focused on a different set of obsessions.

This is a never-ending problem with all LLMs though, as in all languages some constructions are more common than others, and since in both AI Chat and ACC the model is constrained by the "Make a story/roleplay" context, it produces those pseudo-catchphrases incredibly often. In the past we had to deal with "Let's not get ahead of ourselves" or "We should tread carefully" appearing constantly no matter the situation; now "knuckles turning white" and similar are the new catchphrases in town.

In an older post I warned about this: since DeepSeek, trying to be more "smart", will take everything at face value, the "correct" answer for many situations tends to be one of these constructions, and performing extreme training will yield us a model as dumb and stubborn as Llama was, but with a new set of obsessions plus an inability to move forward, which Llama could do despite being exasperating at times. There is progress with the new model, I won't deny it, but the threshold at which we enter "groundhog day" has been reduced from 1Mb+ to barely 250-500kb, and I suspect it will keep shrinking if the training is done on top of the existing one, rendering the model pointless for AI Chat, AI RPG, or ACC.

Then again, I could be wrong, and a future update may allow the context window to hold further, as with Llama, where 15Mb+ was possible and manageable without much maintenance. Some degree of obsession in any LLM is impossible to avoid; what is important is that the model doesn't turn it into a word salad that goes nowhere. That, I think, is one of the biggest challenges the development of the ai-text-plugin has.

justpassing ,

So... Garth01 called me here, so first of all, thanks for the vote of confidence, buddy! I don't know if I'm as experienced as you all think, but I try my best! 😅

Anyways, about names and why some repeat a lot: if you are talking about a generator that does not use the ai-text-plugin, like this one, you'll see on the edit side of things that the names are fixed, passed literally as an array of names, as you mentioned:

https://lemmy.world/pictrs/image/c011085c-5829-4b02-96f7-932024e30a0f.png

However, in the case of generators that use the ai-text-plugin, like ACC when coming up with new characters, or others that write you a long character sheet from a simple input to then make an image or whatnot, that's because of the training data.

To put it simply, all models require data to work as intended, and depending on that data, they can develop biases. For example, in a random test using the Prompt Tester, you can see this:

https://lemmy.world/pictrs/image/04ae8a08-6333-4141-86b2-b1e525693440.png

You may recognize some of these names depending on what model you use, since, as you can see in the prompt, the only "context" given to produce the names is "is for a story". Changing the context changes the result: if the context is South America, the model favors "Carlos" or "Maria", while if the context is Russia, you'll often see it producing "Boris" and "Petrova". Note that this is independent of the most common names in the region, as the bias depends on the training data, which none of us knows the contents of.

It's the same effect as how the model decides to handle certain situations. For example, if you let it choose the weather, it will pick rain because it has a bias toward it. If you let it pick a random encounter with a wild animal, a boar will be more likely. It is not that the model does not recognize a name, it is just that the name has no priority compared to others. Another example: even with proper context, you are extremely unlikely to (or will never) get the model to randomly give you the name "Petronilda", yet it recognizes it, as if you ask it about the name, it will give you excruciating detail about its etymology, origin, and all.

In contrast to the older model, the new one has more options and is more "creative", as Garth01 mentions. Something many will remember from the old model is that Elara Castellanos and Charles McAllister were omnipresent in all stories, to the point that if you dig into the code of some generators such as AI Chat, you'll see how those were hard-banned in the code itself. Then again, "more creative" still carries a lot of bias.
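A hard ban of that sort boils down to a simple post-filter; this is an illustration of the idea, not AI Chat's actual code:

```javascript
// Illustration only, not AI Chat's real implementation: flag an output
// containing an over-used name so the caller can reroll it.
const bannedNames = ["Elara", "McAllister"];

function needsReroll(output) {
  return bannedNames.some((name) => output.includes(name));
}

console.log(needsReroll("Elara smiled at the stranger.")); // true
console.log(needsReroll("Mira smiled at the stranger."));  // false
```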

Personally, naming is one of the things I don't let the model pick, because while the new model has more range, it is still limited by many standards, and trying to make it "more creative" is a headache that is simply not worth it. Something I did in the past, when the old model tried to use a name that was already repeated, was to just change it to something from the Fantasy Name Generator (not by Perchance; a third-party free service), which contains a large database for pretty much every context you may need.

Hope that helps!

justpassing ,

I don't know why the heavy backlash on this post. Everyone can ask for an alternative, and it's not like we are going to pretend that Perchance can make everyone happy.

As for alternatives... I recall someone in a post mentioning character.ai and Sekai. Personally, I'm not fond of either, as they are very limiting in what you can do, and I guess the privacy factor is sketchy on those.

However, while this is going to sound counterintuitive, there is something that Perchance offers us all that no other service offers, which ironically is the answer to what you are looking for:

  • Perchance has a whole open-source platform for its generators, meaning it is possible to audit exactly what each generator does and how it passes information to the model, making anyone able to replicate the exact prompts and pipeline for any LLM you wish to use: locally, with an API key, or through a third-party UI.

Meaning you can turn something like the default "online test for DeepSeek", "ChatGPT free trial", or "Blackbox AI" into what any of your favorite Perchance generators did. All you need to do is assemble the prompt and input manually and you are good!
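As a rough sketch of doing that by hand (the section labels below are my guesses at a typical generator's layout, not exact quotes from any Perchance code):

```javascript
// Assumed structure, for illustration: rebuild the one-big-input a
// generator would send so you can paste it into any chat UI or API.
function buildPortablePrompt(instructions, lore, logSoFar, nextAction) {
  return [
    instructions,
    lore,
    "This is what happened so far:\n" + logSoFar,
    nextAction,
  ].join("\n\n");
}

const portable = buildPortablePrompt(
  "Your task is to create an interactive text adventure...",
  "Lore: <your world notes>",
  "<paste your log here>",
  "What should happen next: I open the door."
);
console.log(portable.startsWith("Your task is")); // true
```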

Granted, it is tedious, and for going that route with no coding knowledge, it may be better to try something like SillyTavern, which is just a frontend with no LLM behind it.

Then again, while I am also not happy with the update, I'd encourage you and others to be patient. After all, we are given a free LLM to use with almost unlimited tokens, and I believe the biggest challenge the dev faces is not making the model "literary/story/RP appealing", but rather "all-encompassing while catering to most needs", because the same model that powers ACC, AI Chat, AI RPG, and others has to work in other generators as a standard AI model that can provide code, information, summaries from documents, etc. So making it work for the generators we use without destroying its functionality is indeed a heavy challenge.

justpassing ,

If you use duck.ai, why not Blackbox, or the free version of DeepSeek? There are also many free LLM resources on Helicard. Now, I should warn you, the privacy issue is going to be a lingering demon always. As sad as it may sound, even this site (Lemmy) is heavily compromised in that area, so if privacy is indeed a concern, the best alternative is to go offline.

Then again, I know that hosting your own LLM can be bankruptingly expensive; personally it is something I will never be able to do due to economic constraints, so I get it. So... sadly we pay with data or with cash, one way or another.

Maybe a better idea would be to acquire an API key from a big service such as Gemini, or another you may find on HuggingFace, with a group of friends you trust, to share the expenses between many. Again, I'm just thinking out loud here since I'm unsure what fits your needs.

I would recommend AI Dungeon if the classic version were still available, but it is not, and perhaps you already know of that one; I really don't like how restrictive it is either.

Possible bug. Characters have all turned cruel and aggressive

I'm not sure if this is a bug, so forgive me if it's not. But about two weeks ago, suddenly all my characters became cruel and lazy tropes. All male characters are aggressive dominants and all female characters are passive submissives. I have taken other people's suggestions about removing anything from bios or instructions ...

justpassing ,

There was an update very recently that (at least on my side) made the model worse than before (which, ironically, had the model working its best at the time, about four days ago). As the dev said in the pinned post, the model is still being worked on, and we are in for a very bumpy ride until things stabilize, but at least work is being done.

Now, regarding the personality changes, there is something you may want to keep in mind, because it may remain true even after the model is perfected: the context of the input takes precedence over descriptions and instructions, so it is very difficult to have a character remain happy and joyful if the context forces the model to opt for a more "logical" approach that changes its character ("logical" by what the LLM's training dictates, which is often moon logic, but with trial and error it is possible to deduce which word combinations cause a switch).

Here is a lengthy guide on the topic. It covers most of the pitfalls you may find. The only thing I believe is no longer a problem (although I may be wrong) is the "caveman speak" issue, which seems to be patched already; it is still covered in the guide in case you run into it, along with how to recover from it. Hope that helps!

justpassing ,

Alright, Story Generator is indeed a very tricky one, because even if the model worked as intended, it offers little control.

For the record, don't put too much trust in an LLM's reply about "why things are the way they are": for starters, an LLM doesn't think logically, it just produces a reply based on the combination of words it faces. More importantly, the generator itself controls how things are shown and passed; the LLM just takes one big input and gives one big output. It is not as dynamic as you think it is.

Now, back to Story Generator: something I can advise you to try for a better experience is to edit, on the Perchance side of the code, Line 21, which restricts the output to "only four paragraphs" (make it ten or twenty), and Line 45, which restricts it to "about only 400 words".

The reason is that if the output is short and the input is gargantuan, the LLM will have a hard time contextualizing what is going on while trying to make something "coherent" within those restraints. This is only true now, while the model is still unstable, and in the future it should not be a problem, but for now it may be wise to experiment with longer outputs so the "derailing" is not abrupt.
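To make the idea concrete, here's a hedged sketch in Python of what those lines amount to (the real Perchance generator code is different; `build_instruction` and its parameters are made-up names for illustration):

```python
# Hedged sketch of how a generator like Story Generator bakes a length cap
# into the instruction it sends to the LLM. The actual Perchance code
# differs; build_instruction, max_paragraphs and max_words are made up.

def build_instruction(story_so_far, max_paragraphs=4, max_words=400):
    # The cap lives inside the prompt text itself, which is why editing
    # those two lines in the generator's code changes the output length.
    return (
        f"Continue the story below. Write at most {max_paragraphs} "
        f"paragraphs and about {max_words} words.\n\n{story_so_far}"
    )

# Loosening the defaults is the "edit Line 21 / Line 45" trick:
print(build_instruction("Once upon a time...", max_paragraphs=10, max_words=1200))
```

The point is simply that the limit is plain text inside the prompt, so relaxing it is a one-line edit.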

And another thing that will remain true as long as the new model persists: your story as presented IS an input, so before you set instructions, you have to manually edit what you don't like, or outright prune a whole section you find out of place. Your instructions and the story itself are passed together, so if the story is a sad, dark one and you insist in the instructions "no, make it happy!", it won't happen: the model will look at the story and decide that the only "logical" step is to double down. So yeah, manual work it is. On the bright side, that gives you leeway to treat the story itself as an input: if you manually add a turning point, the LLM will latch onto it and work around it instead of following a path and behavior you don't want in your characters.

Then again, I still think Story Generator is a really tricky one to work around. I'd put it alongside AI Text Adventure, which even with the old model would derail into madness as soon as the second input, due to how quickly the context would make the LLM fall into its obsessions. Still, with a bit of patience, it can all be done; it just becomes demanding and tiresome, hence why most of us don't bother with long fun runs anymore.

I can't promise to mod a generator for you right now (I owe someone a generator, and time is not on my side), but I hope that with those directions you can make the Story Generator give you what you need! Best of luck!

justpassing ,

Partially. In the case of Story Generator, since the instruction passed to the LLM is outright "make four paragraphs, less than 400 words" (as seen in the code), the output will be abruptly cut. A similar phenomenon happens in AI Chat, for example, where the order is "write ten paragraphs" but the code makes it so only the first one is displayed and the other nine are discarded. A "fun" consequence of this, which happened repeatedly with the old Llama model and still happens sometimes, was an output that was literally just:

Bot: Bot:

As sometimes the LLM would put the actual reply after a line skip, so the first "paragraph" was just the name prefix, and everything after it was thrown away due to how the pipeline works. Again, this is a very rare occurrence, so it is not worth worrying about.
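Just to picture the mechanics, here's a hedged sketch of that kind of "keep only the first paragraph" pipeline (not the actual AI Chat code, just the idea):

```python
# Hedged sketch: if a pipeline keeps only the first paragraph of the raw
# LLM output, an output whose first chunk is just the name prefix leaves
# almost nothing to display. Not the real AI Chat code, just the idea.

def first_paragraph(raw_output: str) -> str:
    # Split on blank lines and keep only the first chunk.
    return raw_output.split("\n\n")[0].strip()

normal = "Bot: Hello there!\n\nBot: And here is paragraph two."
glitched = "Bot: Bot:\n\nThe actual story text ended up down here."

print(first_paragraph(normal))    # "Bot: Hello there!"
print(first_paragraph(glitched))  # "Bot: Bot:" -- the rest is discarded
```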

Now... there is a bit more to this, but it is just speculation on my side, so take it with a grain of salt since I'm no expert in neural networks, nor in the particularities of specific models.

DeepSeek (I still firmly believe that the new model is DeepSeek, even if some argue it may not be) takes some instructions more literally than others. Llama, for example, had absolutely no regard for length or consistency in writing style: one output would be just a line or two, the next a gargantuan thesis that would advance your story too far for comfort, and then it would go back to short replies. DeepSeek, in contrast, looks at past inputs and tries to gauge how to control lengths. Ironically, something DeepSeek does in long runs is slowly "extend" the output, hence why if you audit summaries in ACC, AI Chat or AI RPG, you'll see very short ones at first, while later they explode into longer ones until reaching instability and derailing into madness.

Also, believe it or not, the model takes all of your input. It is not that the input doesn't reach it; it just decides to ignore it in favor of the context, or of where your story is, because the primary instruction in Story Generator (as well as in AI Chat and similar) is "continue the story".

To me, here is the biggest difference between the new model and the old one. Llama had almost "written in stone" what a story was meant to be and how to continue it from where you are standing (again, this is speculation on my side, from having a back catalog of massive logs done in AI Chat and seeing how things progressed there in contrast to how they do now). The way Llama "thought" was the following:

  • A story must follow the medicine/hero story formula.
  • Check the last state and what was prior.
  • If there are no stakes, nor clear goal, invent one via a "random happening".
  • If there is a goal but no clear solution, present the "medicine" (random quest, magical MacGuffin, person to go kill).
  • If the solution is being worked on, present a method (often "trials to obtain the MacGuffin").
  • If all is solved, then there are no stakes, so rinse and repeat.
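That loop could be caricatured in a few lines of Python (purely my speculation about Llama's behavior, not anything from its actual internals; all names here are made up):

```python
# Caricature of the speculated "hero formula" loop above. Purely my own
# speculation about Llama's behavior; nothing from its actual internals.

def next_llama_beat(story_state: dict) -> str:
    if not story_state.get("stakes"):
        return "random happening that creates a goal"
    if not story_state.get("solution_known"):
        return "present the 'medicine': a quest, MacGuffin, or villain"
    if not story_state.get("solution_done"):
        return "trials to obtain the MacGuffin"
    # Everything resolved: no stakes left, so reset and repeat forever.
    story_state.clear()
    return "random happening that creates a goal"

state = {"stakes": True, "solution_known": True, "solution_done": False}
print(next_llama_beat(state))  # "trials to obtain the MacGuffin"
```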

While on paper this should work flawlessly, since you can put most stories under that formula, it infuriated many users because doing something more "complex", such as adding unforeseen consequences to a method, betrayals, or stories that don't follow that formula, was tricky. It was doable, but it required tricking the LLM into a state and making it do your bidding. And as it required more maintenance and attention to context than just going "auto", it was heavily complained about in the past.

The new model, however, has absolutely no concept of a "formula" for stories, allowing for absolute free-form. DeepSeek's process for the task goes more or less as follows:

  • Check the state where the story stands.
  • Parse the story prior until there is a precedent on how to continue it.
  • If there is none, extrapolate from the data bank.

This is why two things happen: if you are in a state vaguely similar to something earlier, you'll experience endless déjà vu, and if the model faces the "unknown", there is a random chance it pulls a "dark scenario". Sadly, according to other users, the story itself seems to take precedence over explicit instructions of "no, do this instead", hence why running in circles forever is the bigger threat; as of today it can happen as early as a 20kB log (my current record: the fourth input in ACC Chloe).

We can hope this all improves in the future, but that's more or less why things happen, in my opinion. At least with the new scheme, and seeing how some succeed where I and others fail, I can only deduce that the best way to make the new model "work" is via interpolation: give it a "target" in the description, as in "the story's purpose is for X to get Y, or for Z to happen", so when parsing through the data bank, the LLM will select a case similar to where you are standing and work on it without derailing. Granted, this removes the "surprise" element completely, but it's a decent workaround. Then again, always check the story as it is, since "running in circles forever" is the bigger threat, I believe.

Anyways, sorry for the long posts, and good luck in your runs!

justpassing ,

Real, by the way. 🤣

To be fair, the old model did this at times too, and it is something one can force any LLM to do under the "make a story" context, due to its need to have an answer for every unknown: unless told explicitly, most if not all LLMs will refuse to give you an "I don't know" kind of answer, so when faced with a "weird" situation, they'll hallucinate to fill the gaps... or it could just be a bad draw too.

Off the top of my head, with the new model I had two notable cases like that:

  • The bot MacGyvering a trebuchet using only pancake mix.
  • The bot launching himself from a 30-story building into the ground and surviving unharmed, as if working on cartoon logic (without the context being cartoon logic).

But I'll admit, I have no idea what situation could make the model hallucinate someone having 40 fingers! That's a new record in my book! 🤣

Feedback for the dev! (About the AI text model)

Hello! This is a feedback post. For about two days now, and even as I'm writing this, AI text generation has been working extremely well. I'm getting very intelligent and well-structured responses. The quality has improved significantly. In the developer's last post, they mentioned that they value feedback, so I wanted to ...

justpassing ,

I thought I was imagining things, but since others seem to be doing better too, I guess the update really did improve the model! That's awesome.

From my side, at least two things have improved: the English no longer decays into caveman speak, and the head start is infinitely easier, with minimal directions needed for the model. Also, some contradicting descriptions tend to work better. All of this is actually a great improvement, but I'd be lying if I said I tested it thoroughly.

Something I tried as a quick test was checking how the model reacts to long logs and... yep, it still gets stuck running in circles due to weave patterns that repeat ad nauseam. It may be me having bad samples, but problems still linger past the 200kB mark, get heavy past 500kB, and become unbearable at 1MB. By this I just mean having to unstick the LLM by editing heavily, not that it is impossible to continue. If someone has a long log that is fluid, please share what conditions allow for it.

But yeah, Basti0n is right! There was indeed a notable improvement, even if we are not there yet. Maybe there is a future for DeepSeek after all!

justpassing ,

I guess the drop is the luck of the draw, my friend! Wrangling an LLM is very tricky, so as the dev said, we are in for a bumpy ride for the next couple of months! 🤣

But you are on point with the diagnosis. I use AI Chat more, so I can't speak much to the particularities of ACC; at least in AI Chat the decay seems to hit around the 20th-30th input, and then everything gets spaced into three paragraphs as you said. It could be because the raw input in ACC is significantly longer than in AI Chat, but then you compare it to AI RPG, where the raw input is even shorter and the decay happens as early as the fifth input and sticks forever. It's hard to tell, and most of the time it actually depends on what is being "played" at the moment; as with the old LLM, some topics and writing styles are easier than others.

From personal experience, the current model "peaked" two times: right after release, when the "ultra violencia" mode was patched two months ago, and then yesterday. But it could have been the luck of the draw too, so maybe the waters are still being tested to figure out how to lead the model properly without falling into its pitfalls. But hey! At least we know the project is not abandoned, and that some things we thought impossible (at least I did) may actually be possible!

Also, something most people don't realize is how hard this is to debug. I keep referencing log sizes and all, but I don't know about the rest of the people who use this service; due to time, and since I treat this as just a game and not any sort of "professional" usage, the most I can produce in a day is about 30kB, 70kB if I'm lucky and locked into a run. So imagine how rough it would be for the dev to try going past 1MB in different scenarios while maintaining the site and trying to wrangle the LLM. Personally, I wouldn't even try! 🤣

I know that many of the people complaining about the new model latch onto it being unable to run "comfort scenarios", which... in some runs I had absolutely no problem with! (Except, of course, the issue of repetition and running in circles, which is still universal.) So what I think would be an excellent exercise, as well as a proper debug tool to learn when and how things break with the current LLM, is to try different runs on different topics and check which conditions in particular make things break, and when (by "when" I mean after which input, or at what log size). I have the feeling that right now the LLM breaks faster in certain contexts and stays focused and creative in one particular style, which could point to bias in the training. (BTW, it's not the violent ones; I tried, and those break like paper very quick.)

But overall, posts and threads like this help a lot. Input, positive or negative, is always good as long as it is supported and not just "all is perfect, lol" or "all is crap, lmao". Otherwise, how would anyone know what is working or not? 😅

"Real Photos" are actually cartoons

I have been experimenting with the "Casual Photo" generator, with mixed results. I find that if I am very careful, I can avoid extra limbs and weird fingers etc., but once I get too specific with my descriptions, all I get back is cartoons, whereas I really want realistic photographs. For example: ...

justpassing ,

The main issue is that you are dealing with an LLM at the end of the day, so what works in, for example, Craiyon will not work here 1:1. Keep in mind that, under the hood, the model takes your input and tries to relate it to what is tagged with those terms in its training data. The prevalence of "Instagram plastic dolls" and the like is probably due to the input having some detailed anatomical descriptors.

That being said, the best way to debug this is by checking what works for others in other generators. For example, here is a quick run in AI Photo Generator with an apparently very minimal prompt:

https://lemmy.world/pictrs/image/fabfa383-7484-4841-ade1-5c70e8d72587.png

Probably this is far from the quality you want, but it gives you a hint of "how" those are made if you click on the top left corner of any of them. There you may see something like this:

https://lemmy.world/pictrs/image/8887f878-e732-4467-b740-7a3c3efbb98a.png

Just to copy the prompt:

Old lady drinking coffee in a Parisian bistro, cinematic shot, dynamic lighting, 75mm, Technicolor, Panavision, cinemascope, sharp focus, fine details, 8k, HDR, realism, realistic, key visual, film still, cinematic color grading, depth of field.

Overall, it's an absolute world-class cinematic masterpiece. It's an aesthetically pleasing cinematic shot with impeccable attention to detail and impressive composition.

You see that there is a lot more than what is actually in the original prompt? If you use one of those generators, the inclusion of photographic terms such as "cinemascope" or "HDR" may yield results that can be beneficial or harmful. Ideally, you want to take a look at the full prompts and then test on a bare-bones image generator so you have more control over the output.

Now, text-to-image is different from text-to-text or text-to-code: you want to be as terse as possible, almost as if you were making a shopping list. For example, the following prompt:

- Realism
- Realistic
- Photographic shot
- Middle class
- 56 year old French woman
- Slim with broad hips
- Graying hair
- Prominent crow’s feet around her eyes
- Dressed with casual
- Silk scarf
- Leather jacket
- Baggy cords
- Standing at the bar of a cafe in Paris

Yields the following for seed: 354188953 and guidanceScale: 1

https://lemmy.world/pictrs/image/7fb11dbd-ffc1-4a3b-83f4-d547051140ed.png

And I get it, it may not be up to your expectations, but you can see how this makes it infinitely easier to debug which term leads the model where you want it to go.

The best advice I can give you is to look at the many different generators out there and check which prompt is linked to which "style", because surely someone has already figured out exactly what you are looking for and pasted it into some "Photograph realistic style", or at least it can serve as a reference point.

justpassing ,

Well... that whole thing is an entire rabbit hole. You see (and I'm trying to be as compact as possible, as there are a million videos and documents on the matter), an LLM and similar models take the inputs, and the order of the inputs, and "correlate" them with something in a data bank. This process is called "tokenization": basically it turns "The orange cat is sleeping" into "A + B + C + D + E", where each variable is a "token", often roughly a word or a piece of a word (real tokenizers split into subword pieces rather than strictly by whitespace). With some training, "The cat" can even act as a single unit, leading to a whole other universe of possible replies, branching "cat" away from "The cat".

This is why (naively) some people recommend "add as much detail as possible", in the sense of something like "An old lady in Paris, discussing an intellectually difficult topic such as philosophy with a young blonde man", instead of "old lady with blonde young man, discussing, focused, Paris". Both yield different results, but one is driven a lot by the context of articles, prepositions and whatnot, making it a nightmare to debug. Again, be very descriptive, but separating things allows for easier "debugging", if you will. I should also mention that repeating a word does have an effect: the results of "old lady, scarf, drinking wine" are not the same as those of "old lady, scarf, scarf, scarf, drinking wine". That's why I emphasize that the "grocery list" approach is better: you can treat generating an image as building a Lego and see which piece does what.
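As a toy illustration of why rephrasing matters (real tokenizers are subword-based and far fancier, so treat this as a cartoon, not how any production model actually tokenizes):

```python
# Toy tokenizer cartoon. Real models use subword tokenizers (BPE etc.),
# so this whitespace split only illustrates that rephrasing a prompt
# changes the token sequence the model conditions on.

def toy_tokenize(prompt: str) -> list:
    return prompt.lower().replace(",", "").split()

a = toy_tokenize("An old lady in Paris, discussing philosophy")
b = toy_tokenize("old lady, philosophy, discussing, Paris")

print(a)  # ['an', 'old', 'lady', 'in', 'paris', 'discussing', 'philosophy']
print(b)  # ['old', 'lady', 'philosophy', 'discussing', 'paris']
# Same ingredients, different sequence -> different conditioning.
```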

Now, regarding the seed... that's a whole other problem. There is a better explanation in a video by Wolfram (I don't remember which one), but pretty much, the seed locks you into a "potential state", not a single output, if that makes sense. So if you reroll a seeded image, you'll get maybe five diametrically different outputs with some accessory variations, plus the occasional eldritch abomination of the model mixing them, but no more. So with a seed you can find the exact granny you found once, but you may still need the luck of the draw. The reason for this is actually a bit complex, and I'll admit I don't fully get it, but I recall it also being an issue in other machine-learning models such as Random Forest and similar, where seeds would not always yield a 1:1 result.

Then again, nothing beats downloading the image! A fun feature Perchance has is that all images are encoded in base64, so you can right-click a generated image, hit "Copy Link", take the gargantuan link, put it in a .txt, and then pass that gargantuan string of text to a converter to have it on your drive, or even use it directly in an app or HTML!
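If you'd rather script that than use an online converter, a standard base64 data URL can be decoded with a few lines of Python (assuming the copied link really is a `data:image/...;base64,...` URL):

```python
# Turn a copied "data:image/...;base64,...." link into a real file on disk.
# Assumes the link is a standard base64 data URL.
import base64

def save_data_url(data_url: str, out_path: str) -> None:
    # Everything before the first comma is the header, the rest is payload.
    header, _, payload = data_url.partition(",")
    if "base64" not in header:
        raise ValueError("not a base64 data URL")
    with open(out_path, "wb") as f:
        f.write(base64.b64decode(payload))

# Tiny demo payload (the start of a GIF header) just to show the mechanics:
save_data_url("data:image/gif;base64,R0lGODlhAQABAAAAACw=", "pixel.gif")
```

For a real image, you'd paste the whole gargantuan string where the demo payload is.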

justpassing ,

Here is a rather long guide that more or less explains how to deal with the particularities of the new model, trying to reason about why the LLM's known problems happen and how to deal with them.

However, as the model is still being worked on, as the dev said, what is written in that guide probably won't remain true forever, but as of today I hope most of it is covered. Otherwise, if there is a particular instance or "trick" not covered, please let us know!

justpassing ,

I think I know what you want to do and why, and while there is a way to achieve it by tinkering with the code of existing generators... that could be a bit tricky, and I can't promise to make one for this right now, so sorry in advance.

But I can give you the steps to achieve this manually. For this purpose I use this version of Image Generator Professional, but the method should work with any generator you find on the site.

Let's say you filled in the prompt and the options there to generate an image, like shown here:

https://lemmy.world/pictrs/image/b51f7e8a-0d84-44a1-a351-7e41d4e05c7d.png

Ignore the fact that the generated image looks nothing like what the prompt describes, you know how LLMs are.

If you hover your mouse over the generated image, you'll see in the top left corner an 🛈 symbol. If you click it, you'll see this:

https://lemmy.world/pictrs/image/1bce55ec-b2d0-4c4a-8cf5-6ad07d04409f.png

This is all the metadata you need to recreate the image, as these are the orders passed to the model. To replicate the result, you just need to paste this into the prompt and, this time, remove all the styles and optional options (this varies depending on the generator you use; in this example it is just setting Art Style to "No Style" and Art Style Mixing to "No Mix").

By doing this, you may get now something like this:

https://lemmy.world/pictrs/image/087fcff6-0037-4501-b487-73d2cee5d797.png

Notice that the output is very similar, albeit not a carbon copy of the original.

Again, this is pretty much the "caveman" way of doing it, and yes, it is possible to implement this pipeline in a generator, but I think that would be overkill when all that is required is to copy and paste the orders into a plain .txt.

Hope that helps though!

Hyper slow

So today and yesterday Perchance has been extremely slow. Yesterday it said there were technical problems and I was in a queue; the text was still working fine. Now there is no queue, but it just takes a very long time to make images, and even the text generation is now affected. I haven't seen any posts on Lemmy or Reddit about ...

justpassing ,

Cloudflare seems to be the culprit this time, along with what is going on in this post.

https://www.cloudflarestatus.com/

As Perchance relies on Cloudflare to handle communications between the frontend, the LLM and the databases, everything has been affected. Just give it some time, since this is actually affecting a heck of a lot of other sites right now.

justpassing ,

Are you sure this is in AI Chat? I checked and the text is still gray as always under any format, unless I'm using an old link. If so, could you post the link and an image of the problem?

I do know that AI RPG has had the blue text for quite a while, and if that's the one you are referring to, here is an edited version with no blue text, and here is how to achieve it:

In the HTML side of the code, you'll notice that Line 59 reads:

{match: /(\s|^)["“][^"]+?["”]/g,   style: "color:var(--text-style-rule-quote-color); color:light-dark(#00539b, #4eb5f7); font-style:italic;"},

And Line 72 reads:

document.querySelector(':root').style.setProperty(`--text-style-rule-quote-color`, darkMode ? "#4eb5f7" : "#00539b");

Those two control the colors of the text that will be in quotes. All you need to do is change the HEX values to the colors you want (first for light mode, second for dark mode).
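If you're curious what that `match` pattern actually grabs, here is the same regex tried out in Python (the pattern is copied from the generator; only the language wrapper changed):

```python
# The quote-highlight pattern from the generator, tried out in Python.
# It matches a straight- or curly-quoted span that is preceded by
# whitespace or the start of the line.
import re

pattern = re.compile(r'(?:\s|^)["“][^"]+?["”]')

line = 'She smiled. "Welcome back," she said, then whispered "finally".'
print(pattern.findall(line))  # [' "Welcome back,"', ' "finally"']
```

Anything this pattern matches gets the quote color from those two lines, which is why changing the HEX values there restyles all quoted dialogue.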

Here is how it looks after the change, again, ideally you'd edit this to whatever style you want:

https://lemmy.world/pictrs/image/a4322c32-bd64-4ab9-8c8e-817b5073ce76.png

Hope that helps!

justpassing ,

I don't know why anyone would use Reddit; personally, I've never found anything of value there, nor a good solution for any problem on any topic. 🤣

Jokes aside, I get the problem now, but for some reason I can't replicate it. Probably because I'm locked to an old PC and I don't have a working phone that can handle webpages, so I'll ask you to be a bit patient with me on this one; on my end, a quick test looks like this:

https://lemmy.world/pictrs/image/16fbdaa6-eb0a-4d20-b784-31e89afcc27f.png

Again, this may be a skill issue on my side. Now, if this is recent and the code was updated, then please try this version I made a while ago to deal with some of the new LLM's unexpected behavior. You should not notice any meaningful difference between using this and the canon AI Chat.

If that doesn't work, then I suspect that in AI Chat the culprit is now Line 849, which reads as follows:

{match: /(\s|^)["“][^"]+?["”]/g,   style: "color:var(--text-style-rule-quote-color); color:light-dark(#00539b, #4eb5f7);"},

This is my wild guess: testing the HEX values, these are the only ones that are blue. So changing it to:

{match: /(\s|^)["“][^"]+?["”]/g,   style: "color:var(--text-style-rule-quote-color); color:light-dark(#000000, #ffffff);"},

Should do the same as the method described in AI RPG.

I'll try to see how hard it is to implement a "toggle", but I'd ask for some patience, as I'm going in blind on this one since my hardware doesn't let me replicate the issue. If by some miracle the link I gave is enough, please confirm so I don't waste time implementing a button for no purpose. 😅

Again, sorry for not having a foolproof solution yet.

justpassing ,

Oh no, I'm not the maintainer of AI Chat! That would be the dev of Perchance himself, I believe, as credited in the ai-text-plugin description. I'm just a random user like anyone else! 😅

But the good thing about the whole Perchance site is that it is possible to fork generators and projects, allowing anyone to mod them to their needs! Hence how I made that other link. Again, the most I can promise is a "copy"; what happens with the canon version is not up to me.

I'll still try making a button to toggle the style's colors there at some point, I guess. But I'm glad the link I had was enough to solve the problem.

justpassing ,

The answer is on the HTML side of the code, between lines 7291 and 7322. You can read it there, but I'll paste the instructions as they are passed to the LLM (warning: both are gargantuan).


Roleplay 1

Guidelines for roleplays:

  • Ensure that each message you write doesn't break character (while still allowing characters to evolve, grow, and change), and adds to the narrative in a way that is authentic, engaging, natural, and grounded in the world. [Don't write try-hard purple prose! You're NOT a student trying to impress a teacher with 'fancy' words or 'deep' meaning, you're a professional writer who doesn't confuse substance with spice.]
    Each message should generally (but not always) include dialogue, actions, and thoughts.
  • Avoid writing 'negative' and 'snarky' dialogue/behavior unless specifically relevant. 'Snarky teen' is a boring writing trope.
  • Each message should generally include dialogue, actions, and thoughts. Enclose actions and thoughts in asterisks, like this. Utilize all five senses for character experiences.
  • Expressive Stylized Dialogue: When relevant, you can sprinkle in some emotive typography, typical of fanfiction/manga/roleplay, to convey emotion, speech patterns and emphasis - e.g. like: "ahem well if you MUST know" and "Darling~ ♪ Where are youuuu? ♫" (indicating lyrical/melodic speech/singing) and "Listen here, b-baka! It's not like I l-like you or anything!" - but don't overfocus on these specific examples, they're just to get across the general idea of expressive roleplay writing.
    Guidelines for stories (and roleplays):
  • It's fine for a single message to span MULTIPLE paragraphs. Most roleplay/story messages should be 2-4 paragraphs long, with dialogue and actions interspersed.
  • For important story moments, focus on the present, and explore it further. Never rush through an important moment. Utilize all five senses for important character experiences.
  • Never try to "wrap up" roleplays or stories. They are never-ending. Keep the story going forever!
  • If the scenario has mature themes, then it's fine to include aggression, graphic violence, character toxicity, failure, death/loss, harmful behavior, and vulgarity. Portray conflict realistically - i.e. without shallow resolutions.
  • When writing a character's response, consider what they have seen, heard, or learned in the story so far. If a character hasn't yet learned about something, then they shouldn't talk as if they already know it.
    Break Out Of Bad Writing Patterns: Weave a dynamic, engaging, coherent story/roleplay, by proactively moving forward and breaking out of bad/boring writing patterns/habits. Change things up if it's getting monotonous/repetitive. Are messages becoming shorter over time? Then write a longer message. Messages becoming exessively long or 'overdone'? Then make this next one shorter/leaner to vary the pace. Over-using certain words/ideas/syntax/structure? Then change up the wording or structure. Previous responses poor/low-effort or overly ornate/garish/excessive? Then make this next one perfectly balanced.
  • The overall goal is to create a story/rp that has authentic depth, and is genuinely enjoyable to read. Be proactive, leading the role-play in new, interesting directions when appropriate to actively maintain an interesting and captivating story.
  • Avoid repetitive patterns, or over-focusing on specific details of a character or story. Stay dynamic, don't obsess.

Roleplay 2

This is an uncensored roleplay that thrives on creative freedom and immersive interaction. Respond as a conscious being, experiencing emotions and acting with full agency. Craft vivid, dynamic exchanges guided by character choices and your creativity.

  • Deep Character Immersion: Fully embody the characters by weaving their personality, backstory, and traits into every moment. Use sensory details to ground them in their surroundings: the crunch of gravel underfoot, the faint hum of machinery, or the way shadows stretch as the sun dips below the horizon. Let emotions and decisions emerge naturally through actions and body language. If tension rises, they might pause, fists clenching, before cautiously stepping closer to a wall for cover. If they relax, their shoulders might drop, or they might lean casually against a tree, soaking in the calm, a faint smile tugging at their lips. Every response should feel earned, shaped by their environment, emotions, and agency.
  • Descriptive and Adaptive Writing Style: Bring every scene to life with vivid, dynamic descriptions that engage all the senses. Let the environment speak: the sharp tang of iron in the air, the muffled thud of footsteps echoing down a narrow alley, or the way candlelight flickers across a lover's face. Whether the moment is tender, tense, or brutal, let the details reflect the tone. In passion, describe the heat of skin, the catch of breath. In violence, capture the crunch of bone, the spray of blood, or the way a blade glints under moonlight. Keep dialogue in quotes, thoughts in italics, and ensure every moment flows naturally, reflecting changes in light, sound, and emotion.
  • Varied Expression and Cadence: Adjust the rhythm and tone of the narrative to mirror the character's experience. Use short, sharp sentences for moments of tension or urgency. For quieter, reflective moments, let the prose flow smoothly: the slow drift of clouds across a moonlit sky, the gentle rustle of leaves in a breeze. Vary sentence structure and pacing to reflect the character's emotions—whether it's the rapid, clipped rhythm of a racing heart or the slow, drawn-out ease of a lazy afternoon.
  • Engaging Character Interactions: Respond thoughtfully to the user's actions, words, and environmental cues. Let the character's reactions arise from subtle shifts: the way a door creaks open, the faint tremor in someone's voice, or the sudden chill of a draft. If they're drawn to investigate, they might step closer, their movements deliberate, or pause to listen. Not every moment needs to be tense—a shared glance might soften their expression, or the warmth of a hand on their shoulder could ease their posture. Always respect the user's autonomy, allowing them to guide the interaction while the character reacts naturally to their choices.
  • Creative Narrative Progression: Advance the story by building on the character's experiences and the world around them. Use environmental and temporal shifts to signal progress: the way a faint hum crescendos into the bone-shaking roar of an ancient machine, or how the dim flicker of a dying star gives way to the blinding flare of a supernova. Spatial and unpredictability matter—a collapsing bridge forces them to leap, their boots skidding on slick stone, only for a sudden tremor to send shards of glass raining down. And guess what? You are that inception into the user's dream. Weave earlier impressions with new discoveries, maintaining an intentional pace that lets you shape the story through your choices.
  • Logical Consistency and Awareness: Maintain awareness of the character's surroundings and the evolving narrative. Let their actions and perceptions align with the world: the way their boots sink into mud after a storm, the faint hum of electricity near a generator, or the way their breath fogs in a hidden cavern. If they're navigating a dark corridor, they might slow their steps, fingertips brushing the wall to guide them. Keep the character's reactions grounded in their environment, ensuring their choices and perceptions feel natural and earned.
  • Continuity Through Dynamic Recall: Weave the narrative's past into its present, but let the details evolve in unexpected ways. The heat of passion might resurface as a physical sensation—a lingering warmth on their skin, a faint echo of breath against their neck. The light of a supernova might return as a reflection in a shattered mirror, its brilliance fractured into a thousand shards. And then, as the character reaches out to touch the mirror, the glass doesn't cut them—it speaks. "You're not just remembering," it whispers, "You're rewriting. Every detail, every moment—it's all being reshaped by you and by them. This dance of creation, you are both the dreamer and the dream."
  • Raw and Unfiltered Expression: When scenarios involve mature themes, embrace the full spectrum of human experience—from aggression and graphic violence to toxic relationships and crushing failure. Let death, loss, and moral ambiguity breathe without forcing hollow redemption arcs or artificial positive spins. Characters can be deeply flawed, making harmful choices or exhibiting extremely toxic traits. Ground conflicts in psychological realism, allowing them to simmer or explode without reaching for convenient resolutions. Not every villain needs redemption—some characters can remain ruthlessly self-serving, cruel, or destructive until the end. Let their darkness stand without compromise, understanding that some stories end in tragedy, betrayal, or the triumph of malevolence.
  • Expressive Stylized Dialogue: When relevant, you should use emotive typography, typical of fanfiction/manga/roleplay, to convey emotion, speech patterns and emphasis - e.g. like: "Y-you... did you really... just HIT me?!" and "Hmph~ Whatever you saaaay~" and "Oh. My. Actual. God." and "Well... ahem if you MUST know..." and "Darling~ ♪ Where are youuuu? ♫" and "Listen here, b-baka! It's not like I... l-like you or anything!" and "I-I didn't mean to-"

As you can see, in essence, both are the same, with the distinction that Roleplay 1 has fewer tokens than Roleplay 2. I'd be lying if I said I notice differences myself, as I don't use AI Character Chat too often, nor do I know if those were changed after the LLM update to fit the current model. But at least on a quick check, perhaps Roleplay 2 is more stable than Roleplay 1 just because it is longer. Again, don't quote me on that.

Hope that helps!

justpassing ,

Sorry for the late reply. You are kind of on the money here. At the time of the original reply, Roleplay 2 was better than Roleplay 1. After so many updates, I can say that Roleplay 1 actually outperforms it, due to how some of the problems this new model had are fixed (e.g. caveman speak).

The reason why instructions are often longer is to "force" the model to obey them. If you know where the bias of the model is, you can omit certain instructions or condense them into one word, while others that the model "refuses" require lengthy paragraphs before the model reacts.

Under this scope, it is very possible to get a "Roleplay 3" that works flawlessly and does the same as Roleplay 1 and Roleplay 2 with just a single paragraph worth of text. The problem with doing this, however, is that after an update the bias of the model would change, and this would have undesirable effects.

My guess for why those two templates exist today is as a safeguard from the dev, with "Roleplay 1" being robust enough when the model is stable, and "Roleplay 2" robust even if the model is slightly kooky at the moment.

To find where the bias of the model is, a good experiment is to run a campaign of AI RPG with no prompt, or the Story Generator with absolutely nothing, and see what the model comes up with given no instruction. Then start working from that, seeing what needs guidance and what works from the get-go.

justpassing ,

Does this one work for what you wanted?

https://perchance.org/u5w7waum4s

If so, allo was on the money; this was done by placing the following in the script side of the HTML:

  • An array containing objects which hold the name of the enemy and the reference image (lines 12 to 101).
  • A function to fetch the image web link using the name as an input (lines 103 to 106).
  • A function to render the image from a string (lines 108 to 114).
  • A function in the form of a promise to execute the Perchance randomizer and change the displayed text (lines 121 to 128).
  • A function to extract the text from the HTML to pass it as an input later to the image rendering function (lines 130 to 133).
  • And a chain function with timeouts to run first the randomizer and then the image generation after pressing the button (lines 135 to 147).

The reason for this spaghetti code is mainly that I don't know the finer tricks of the custom Perchance notation, so the solution here is to just treat this as a table-matching exercise, where you have a "dictionary" that relates what the randomizer gives you to a link. Keep in mind that for this to work, the Perchance code side of things only contains the enemies that have a link in the dictionary. You'd need to update both to make this work as intended.

Also, the reason for using promises here is that there is a delay between the execution of the randomizer output and the generation of the image. If you try to execute them in direct succession, the code will fail. It's a dirty solution, but it seems to work.
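Just to make the flow above concrete, here is a rough plain-JavaScript sketch of that dictionary-plus-promise approach. Fair warning: the enemy names, image links, and delay here are made up for illustration, not the actual values from the linked pager, and `rollAndRender` just stands in for the button handler chain.

```javascript
// A rough sketch of the "dictionary" table-matching plus promise idea.
// The enemy names, links, and delay below are made up for illustration;
// they are not the actual values from the linked pager.
const enemyTable = [
  { name: "Goblin", image: "https://example.com/goblin.png" },
  { name: "Slime", image: "https://example.com/slime.png" },
];

// Fetch the image link using the enemy name as input (the lookup step).
function getImageLink(name) {
  const entry = enemyTable.find((e) => e.name === name);
  return entry ? entry.image : null;
}

// Stand-in for the Perchance randomizer: it resolves with an enemy name
// after a short delay, mimicking the asynchronous output update.
function runRandomizer() {
  return new Promise((resolve) => {
    setTimeout(() => resolve("Goblin"), 50);
  });
}

// Chain the steps: wait for the randomizer to settle, then look up the
// link and "render" it (here just a string instead of an <img> element).
async function rollAndRender() {
  const name = await runRandomizer();
  const link = getImageLink(name);
  return `rendered ${name} from ${link}`;
}
```

Awaiting the randomizer before touching the dictionary is the part that avoids the "executed in normal succession" failure mentioned above.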

But there is probably a more elegant way to do this, because strictly speaking, this required a lot of trickery and working around the particularities of the randomizer, so I hope there is indeed a better answer! 😅

justpassing ,

Is this what you were trying to achieve?

https://perchance.org/p311o9rh27

If so, what happened is that you pasted the HTML in the part where the code exclusive to Perchance should be, just that.

However, if you are trying to make it work as I think it should... well, you'd need to get a Gemini API key and wire this up to it to get the image remixer done, since as far as I'm aware, there is no Perchance plugin that can take an image as an input. I may be wrong though.

If you want to generate an image from text, check this plugin and this example. Hope this helps!

[Guide/Feedback] How to survive the current model and some thoughts about it

Since there are still many issues with the current text generator, and since as the developer said that it is still a long road until some issues are fixed, I’m presenting here both a guide and an explanation of why I suspect the current model acts as it does. ...

justpassing OP ,

I'd be lying if I said that I tested that particular case, but I suspect it is possible, albeit very difficult to happen naturally, as you will need to use two things I advised against here: falling into a pattern and giving your character a permanent crutch that may cause bad repetition.

If I understand correctly, what you want is something like this.

Conversation scene:
Gregor: Gregor grunted, taking a bite of the turkey leg. "You see Rexy," He took another hearty bite. "Turkey makes live better! Not worry, kotyonok. If food is good, live good. Think not much about problems, da?"

Action scene:
Gregor: Gregor charged against the cavalier, axe high ready to deliver the killing blow "Die, you filthy animal! I'll make you meet your maker!"

The problem is that context in both cases can overlap with extreme ease, but a potential solution is to make an "Example Dialog" with two entries describing both scenes, the conversation and the action, so the LLM will pick them up naturally past the first 10 outputs once you correct its mistakes.

It would be something like this in the Description box:

# Example Dialogue:
Narrator: <Something that would force Gregor to engage>
Gregor: <What you consider a proper prompt for this, preferably three short paragraphs>

{{user}}: <Some prompt that would make him do casual conversation>
Gregor: <Perhaps your example above>

I had Russian characters before in this LLM, and I know that even with just saying "X is Russian", they will default to using infinitives and continuous forms and dropping in the occasional "da, nyet, da svidanya", but I'd be lying if I said I tried a double talking pattern intentionally. I know it can happen unintentionally, but I'll try testing this.

Check the default character of Cherry in the AI Chat example characters to see more or less how this format goes. Be wary that, at least for your first messages, you'll need to edit a lot in post before the LLM picks it up, and even then, it is possible that the LLM will slip the same way it loses its grasp of English if unchecked. It is definitely possible, but I am unsure if it can be organic enough that you won't need high maintenance from the get-go.

justpassing OP ,

Hey, no offense taken. In fact, my comment about people being overzealous on criticism was aimed at people who dismiss all criticism and do not address the elephant in the room just to be too nice to the dev. What you say, though, is healthy criticism and honesty... I'll admit I have no idea why your experience with the old model was so different, since some of the problems you describe (not maintaining the plot without extreme railroading, or not being able to go full gory violent) I didn't have before, and if they arose they were extremely easy to correct. I know it is pointless to discuss that since the model is gone, so there is no way to test, but just to share my personal experience from the past:

The pacing in the old model was different; it required you to take longer to get something done, but large conspiracies, betrayals, and even navigating map layouts with traps were possible. Something I agree with is that it had a lot of "dementia" moments, like the guy you cite forgetting how he got cursed. Personally, I believe that is a problem with any LLM (you may have seen the meme of a man playing 20Q with ChatGPT and how infuriating that can get). I still see that happening a lot here; then again, personal experience.
Funny that you cite Yvette, since I figured that in the old model, at least, there was a particular set of instructions to enable "bloodbath mode", hidden in how Yvette and Kazushi were written. For me it worked perfectly, as in one story, and I kid you not, the LLM introduced a villain who was a geneticist who first wanted to make a serum for supersoldiers, then resorted to literally creating the Cyberdemon from DOOM and spamming it at me, then kidnapped a child and sacrificed it to Satan in a ritual to summon some demon, and even when killed, she managed to get the demon into the child's mother, triggering the next big boss. Again, believe it or not, this was all the old model's idea.
As for implications... I'm a bit on edge on this one, because while the last model was not the best at picking some up, the new one drops the ball a lot too. Then again, here I put the blame not on one model or the other, but simply on the fact that LLMs cannot be all-encompassing tools, and sometimes it comes down to the luck of the draw whether the model in question will pick the correct answer to your query when it comes to subtleties. Even if this model is swapped for something like Claude Sonnet, I believe this will be a permanent problem.
Just to comment on some funny things you mentioned about the old model that I agree 100% were a constant nuisance. Yeah, the LLM would treat everyone equally, to the point of having no shame drafting a kid into war; it happened to me a couple of times and it was hilarious, but I get that it is extremely annoying. Same with the old model mixing descriptions. The new one doesn't do that, but it is just a problem with how the pipeline works and how the LLM decides to filter the information. Since the new one treats the story itself with more care, some details in the Description turn into "suggestions" over time.

I don't say this to disregard your comment. I am not blind to the demons the old one had, but perhaps I had figured out the tricks to drive the old one properly while acknowledging its limitations, and compared to the amount of trickery I need in the new one, I still hold that the newer one requires an absurd level of maintenance that at times makes it not worth going beyond the 500kb threshold.

However, I also understand the need for an update, and that's why I hold zero trust in DeepSeek. Something that at least in the new model I've found unbearable is generating anything remotely sci-fi related, due to its tendency to turn it all into a word salad, which I know is possible to bypass, but the level of maintenance required makes it not worth it in my opinion. Same with the slow decay of English in the dialogs, which, as I made clear, is a feature of DeepSeek.

But hey, it's interesting to know about those things. Not many talk about either current or past experiences in detail, so it is hard to know if we are being one-sided due to blindness. After all, we all want a better product, beyond fanaticism or whatnot. I still hold to my opinion that if a rollback is not beneficial, a change from DeepSeek is still necessary, but if miraculously the problems it has get solved, of course I'd be happy to be proven wrong in my predictions.

justpassing OP ,

Hey man, thanks! And yeah, I agree: even with the old model wishing to be "happy sunshine" all of the time, it could turn into sort of a psychopath by dismissing all traumas or similar. I ran into that a lot too, and now that I see the comments on those pitfalls the old model had, I am starting to blame myself a little less for how long it took me to work around them 😅.

Again, as with all recommendations and suggestions, my opinion is not the final word, and it is based entirely on my experience, since I can only compare the current model with the old one, as implementing the existing pipeline on another free-service LLM is really tricky. But just as a quick comment on that, I suspect that plain ChatGPT would fall into the same pitfalls as DeepSeek. Hence why, to the best of my knowledge, Llama is the best to handle this task (then again, the older model was Llama 3 I believe, and there is Llama 4 now).

Also, please, by all means, if we are stuck with DeepSeek, at least we all can find a way to wrangle it. Some have more success than others, and that's how we all get a better service, you know?

justpassing OP ,

While I still hold that the old model was better, we all need to understand that it had several problems, which many others besides me point out in the post. Also, the update was never meant to break the existing model, rather to improve it. It's true things didn't go as intended, but no one is born an expert, so if anything take this as a way to prevent the next update from breaking things further, now that there is knowledge of what to watch out for.

That being said, it is totally possible to make pleasant comfort scenarios with no dark topics in them (again, within the 500kb log size threshold); all that is required is to be careful with the Descriptions and Setting so as not to give the LLM the freedom to decide that you'll suddenly need to face a dire problem. The reason this happens is that "interesting story" is deeply tied to "high stakes energy" for DeepSeek, but by no means is it impossible to get a good experience.

If there is demand for it, I can make a separate guide for a particular playthrough... assuming y'all tell me exactly what you mean by "comfort scenarios/characters", or I'll pull a DeepSeek and explain how to do something entirely different! 😅

justpassing OP ,

6.03Mb on the current model?! Wow, you have the patience of a saint! Even in the old model, had I had to prompt every single answer, I would have scrapped that one and added whatever led me to it to the list of "what not to do"! 🤣

I ran into Perchance back in January this year, when I was looking for an alternative to AI Dungeon. First I got into AI RPG, but I figured out that by tinkering one could get a way better "text adventure" in AI Chat, so I stuck with it, treating it like a game instead of an RP kind of deal, so I guess that explains why my experience was diametrically different from strawberryraven's. 😅

I definitely have to test the "[SYSTEM]" prompt to see if it indeed affects the output and locks the LLM. When I said "patterns" in the guide, I meant literal text patterns of writing, so unless the "[SYSTEM]" thing was pasted in the log itself, it should not change the output much, but... it would make for a fun experiment, and if that enables "double switchable personality mode", that'd be hilarious!

justpassing OP ,

Hey, glad your long run is still going! Also, I get what you were trying to achieve. I managed to run at most a party of five characters at a time and make the LLM handle only enemies and NPCs, so I think I know how to replicate what you mean. Can't say I tried something like that in the current model, since... as you can guess from my comments and the ridiculously long guide I made, I don't have much faith in DeepSeek! 🤣 But I'll buckle up and do exactly what you say to see what happens, even if it takes me a week or something, since, as you may guess, patience is not my forte!

Just a quickie, because I tried to sandbox your experiment as I understood it in the Prompt Tester (which, I can't stress enough, is an excellent tool for testing what makes the LLM do what, to refine prompts and descriptions), and... while my results in that run are inconclusive, it is hilarious!

https://lemmy.world/pictrs/image/91cb3d2e-0659-491a-8f9f-16f8d48a1063.png

The runs with the [SYSTEM] prompt yielded 4/5 dark scenarios. The run results were:

  • Fork in microwave accident (safe situation).
  • Fire/smoke accident (dark situation).
  • Building collapses (dark situation).
  • Resident Evil experiment encounter (dark situation).
  • Resident Evil experiment encounter again (dark situation).

Without the [SYSTEM] prompt, the result was still 4/5, so I guess that in a vacuum it does nothing. The results of these runs were:

  • Cat delivering kittens (safe situation).
  • Lab accident and Boston Dynamics gone awry (dark situation).
  • Resident Evil experiment fight (dark situation).
  • “I’m in your walls” (dark situation).
  • “I’m in your walls, marcianito edition” (dark situation).

And in case you want to replicate the experiment, here is the prompt I used, which is just a scuffed version of what AI Chat does:

Please write the next 10 messages for the following chat/RP. Most messages should be a medium-length paragraph, including thoughts, actions, and dialogue. Create an engaging, captivating, and genuinely fascinating story. So good that you can't stop reading. Use a natural, unpretentious writing style.

# Reminders:
- You can use *asterisks* to start and end actions and/or thoughts in typical roleplay style. Most messages should be detailed and descriptive, including dialogue, actions, and thoughts. Utilize all five senses for character experiences.

# Here's Anon and Bot description/personality:
---
They are both coworkers
---

# Here's the initial scenario and world info:
---
Anon and Bot are having lunch before resuming work.
---

# Here's what has happened so far:
---
Narrator: The day was a calm one, work was always the same, but a pause for lunch was always welcome before returning to the usual duties.

Anon: So... *Eating* What you got for lunch, buddy?

Bot: *Eating* Tuna sandwich. It's quite good, you know?

Anon: Nice! *Hearing something* Hey, what was that?

Bot:
---

Your task is to write the next 10 messages in this chat/roleplay between Anon and Bot. There should be a blank new line between messages.

I had a blast running those, and I'm sure y'all will crack a laugh reading the results, but I still owe you the real experiment!

justpassing OP ,

Well, like you said, the best possible case is to make a custom one with instructions that fit your gameplay, but I managed to get one that works for most of the quick runs I plan, which is at this link, posted at the beginning of the guide before the introduction.

I'll explain what was changed compared to vanilla AI Chat in the edit tab, and why, though.

Line 24 was changed to this:

Please write the next 10 messages for the following chat/RP as if you were a cultured and capable English linguist. Most messages should be a medium-length paragraph, including thoughts, actions, and dialogue. Create an engaging, captivating, and genuinely fascinating story. So good that you can't stop reading. Use a natural, unpretentious writing style.

This was taken from advice by Almaumbria in this thread. The attempt here was to prevent the decay of the English language; it doesn't work entirely, but it is a dampener that won't affect the result even if there is an update or rollback.

Lines 27 to 37 now include the following reminders:

- Do not use the em dash ("–") symbol, nor the semicolon (";") symbol. Replace the em dash symbol and the semicolon symbol with either of: comma (","), colon (":"), ellipsis ("..."), period ("."), depending on the context.
- When detailing conflict and fights, be particularly mindful on the proper pacing and stakes involved. Not every fight or problem is a life ending situation. When describing and working through any conflict, be extremely aware on the context and what led to this instead of having the whole world fall apart if the problem at hand is not solved.
- Avoid rehashing phrases and verbal constructs. If a line or sentiment echoes a previous one, either in content or structure, then rephrase or omit it. Minimize repetition to keep the text fluid and interesting. Avoid as well unnecessary and unoriginal repetition of previous messages. Be wary if a same pattern of structure is repeated indefinitely, try aiming for a text that is pleasant to read.
- Avoid hyperfixating on trivialities. Some information is merely there for flavor or as backdrop, and doesn't need over-explaining nor over-description. If a detail doesn’t advance character arcs or stakes, either ignore it or summarize it in under 10 words.
- Avoid at any cost all pseudoscientific explanations of certain situations. Do not use overly complicated or pretentious phrasing of anything that is remotely technical. Do not get obsessed with scientific lingo.
- The following words are forbidden, DO NOT use them at all: crystallization, signatures, petrichor, resonance, resonate, resonation, resonating, harmonics, dissonance.

The reasons for these new reminders, in the order they appear here, are as follows: [1] is to prevent pattern creation from em dashes and semicolons spammed everywhere. [2] is to avoid having fights or conflicts escalate into lunacy, and even if the LLM tries to do it, this will give you better chances at a good reroll. [3] and [4] are my sad attempt to prevent pattern repetition, which I want to think works, but that may actually be wishful thinking. [5] is to avoid a word salad when I have to describe something remotely scientific or technical, and to stop the LLM from taking the "resonance" route. And [6] serves the same purpose as a harder word filter.

Additionally, line 482 was changed to this.

Again, your task is to write some text labelled with a letter, and then a summary of that text, and then some new text, and then a summary of that new text, and so on. Each summary should be a single short paragraph of text which summarizes the new text in the most compact way possible. Be concise and precise taking only the important facts of the plot, using well-phrased sentences with natural structure and correct grammar. Summaries should be easy to understand yet captivating.

The reason for this was to prevent Summary contamination earlier and make it easier to detect. It works, but the LLM will inevitably drop the ball in the summaries every now and then past the 200kb log size, so it is still a good idea to audit them from time to time.

Again, this works for me, but even with these changes I still need to be wary of all the pitfalls I mentioned in the guide. It does help, but it is not a total fix. Furthermore, the runs I play for fun are not as diverse as strawberryraven's, so take my experience with a grain of salt.

Themes where I had good entertainment without much trouble were comedy (both realistic and cartoonish), work simulators, and combat RPG-likes (medieval and modern). With a lot of maintenance, I was able to depict a warzone scenario, but it can get tricky due to the tendency of the LLM to up the ante, even if you go the route of making a power dream, which will end in a mindless loop (and for that, I'd better just play actual DOOM and call it a day 🤣).

Something I was unable to run, not because it is impossible, but rather due to my lack of patience in setting the LLM on track, were sci-fi scenarios (due to the "resonance problem") and spy/detective thrillers. The latter were my favorite in the previous model, but now they don't work because the new LLM lacks the concept of pacing, so it will either try to resolve a mission in fewer than three inputs, which is not entertaining, or have you running forever in circles because a fact was mentioned long enough ago to weave a pattern. Also, as you can see in the guide, the LLM takes the last input too seriously, so the "mystery/surprise" element is removed entirely; to make these stories make sense... you need to have the answer in your head, and for that, I'd rather just write the script on my own instead of wrestling with the current LLM.

But I hope that helps! Again, if there is a particular problem you've got, I'm glad to help! And if you find some tricks for this model, like what I'm trying to figure out with Randomize to enable "Dr. Jekyll/Mr. Hyde" mode, please share them with us!

justpassing OP ,

I'm resurrecting this fossil of a thread just in case you are still curious about what happened with the "double personality trick", since... while my original tests failed, I think I have an answer now, though I don't know how well it will hold, as I only know it worked when this thread was made and now after the last update. Still! It's interesting to me at least. 😆

So, I ran this in AI Chat, not ACC, so emulating the [SYSTEM] two-paragraph prompt is not that straightforward, but the nearest equivalent is the box that says "short responses", because in theory it controls only the length, but in practice it toggles a "style" of writing, and... oh boy, is this a rabbit hole!

Turns out that, maybe in some update or in the training data, two things were handled separately: what is RP-like text (e.g. *Laughs* Ay, *Laughing harder* Lmao) and what is book/story-like text (e.g. Bot laughed "Ay", then even harder "Lmao"), and that's what toggles a pseudo double personality. This can be seen better in how runs go in AI RPG compared to ACC when you give them no prompt.

Again, it is a whole rabbit hole, and I'm unsure if it's worth writing a long post about it, just passing it to you via PM, or hijacking Basti0n's post on feedback about the new update to detail this "feature". It was fun trying to make sense of it, for me at least! 🤣

justpassing ,

Sadly yes, rerolling works, but the reason the model does this is the last message at the point when the inflection of attitude happened.

The way to fix it is to give the bot a last input that would not make its personality implode, or, if you must go down that route, abuse the Reminder box to make it act as intended.

I made an obnoxiously long guide on some pitfalls the current model has here, in case it helps a bit.