text-generation-webui's "chat" and "chat-instruct" modes are... weird and badly documented when it comes to using a specific prompt template. If you don't want to use the notebook mode, use "instruct" mode, set your turn template with the required tags, and include your system prompt in the "context" box (or whatever it is labeled; I forget).
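For illustration, here is roughly what those fields might contain for an Alpaca-style model (a sketch with hypothetical values, written as Python strings; the exact tags depend on what format your model expects):

```python
# Contents of each instruct-mode field. All values are illustrative;
# match them to your model's expected prompt format.

# system prompt goes in the "context" box
context = ("Below is an instruction that describes a task. "
           "Write a response that appropriately completes the request.\n\n")
user_prefix = "### Instruction:"  # the "user string" box
bot_prefix = "### Response:"      # the "bot string" box
turn_template = "<|user|>\n<|user-message|>\n\n<|bot|>\n<|bot-message|>\n\n"
```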
EDIT: Actually, I think text-generation-webui might use <|user|> as a special string meaning "substitute the user prefix set in the box directly above the turn template box". Why they need a turn template field with "macro" functionality plus separate fields for the user and bot prefixes, when you could just put the prefixes directly in the turn template, I have no idea; it's not as though you would ever want or need to change one without the other. But a possible consequence is that you can't actually use <|user|> itself in the turn template.
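If that guess is right, the expansion presumably works something like this (a minimal sketch of the assumed behavior, not actual text-generation-webui code; the function name is mine):

```python
def expand_turn(turn_template: str, user_prefix: str, bot_prefix: str,
                user_message: str, bot_message: str) -> str:
    # Assumed substitution order: prefix macros first, then messages.
    out = turn_template.replace("<|user|>", user_prefix)
    out = out.replace("<|bot|>", bot_prefix)
    out = out.replace("<|user-message|>", user_message)
    return out.replace("<|bot-message|>", bot_message)

# If a model's format literally requires a "<|user|>" tag, the first
# replace() above would clobber it -- the suspected problem.
```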
Seems easier with SillyTavern. They've included screenshots with recommended settings for that in the blog post.
TBH my experience with SillyTavern was that it merely added another layer of complexity and confusion to prompt formatting, since it runs on top of text-generation-webui anyway. It was easy to end up with configurations where, e.g., the SillyTavern turn template was wrapped inside the text-generation-webui one, and it is very difficult to verify what the prompt actually looks like by the time it reaches the model, because the final prompt is not displayed in any UI or log.
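To illustrate that failure mode (a toy sketch, not code from either project), two layers of templating can end up nested like this:

```python
message = "Write a haiku about autumn."

# Layer 1: the frontend applies its instruct template.
frontend_prompt = f"### Instruction:\n{message}\n\n### Response:\n"

# Layer 2: the backend wraps that output in its own turn template,
# treating the already-formatted text as the "user message".
backend_prompt = f"USER: {frontend_prompt}\nASSISTANT: "

print(backend_prompt)
# USER: ### Instruction:
# Write a haiku about autumn.
#
# ### Response:
#
# ASSISTANT:
```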
For most purposes I have given up on any UI/frontend and I just work with llama-cpp-python directly. I don't even trust text-generation-webui's "notebook" mode to use my configured sampling settings or to not insert extra end-of-text tokens or whatever.
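For reference, driving llama-cpp-python directly looks roughly like this (model path, prompt format, and sampling values are placeholders):

```python
from llama_cpp import Llama

llm = Llama(model_path="./models/your-model.gguf", n_ctx=4096)

# The prompt string is built by hand, so there is no hidden template layer.
prompt = "USER: Write a haiku about autumn.\nASSISTANT: "

out = llm(
    prompt,
    max_tokens=128,
    temperature=0.7,  # sampling settings are passed explicitly...
    top_p=0.9,
    stop=["USER:"],   # ...and so are the stop strings.
)
print(out["choices"][0]["text"])
```

Everything the model sees is exactly the string you constructed, which makes the prompt trivial to verify.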
I had exactly the same experience. I use Koboldcpp, and often the notebook mode as well. SillyTavern is super complex and difficult to understand. In this case it's okay, since I can copy the settings from the screenshots (unless the UI changes).