this post was submitted on 06 Sep 2023
LocalLLaMA
Same here. Pygmalion-6b was one of the reasons I started playing around with LLMs as a hobby. Then came the leak of the first LLaMA, followed by Alpaca, and my discovery of llama.cpp.
But we've come a long way. I remember fine-tuning character descriptions for days to make Pygmalion understand how to play a character, and it could barely follow narration. But I think I was happy with the adult stuff. I suppose that's also simplistic by today's standards.
A model of today gets a fair amount of the nuances and consequences of a character's personality right. And it easily follows narration without me repeating every third sentence that we're still sitting at the kitchen table and talking... I'm always amazed when there's a big enough advancement that I can actually feel things getting more intelligent and capable.
I haven't yet tried the model/fine-tune you mentioned. I'm currently using MythoMax.