LocalLLaMA
Welcome to LocalLLaMA! Here we discuss running and developing machine learning models at home. Let's explore cutting-edge open-source neural network technology together.
Get support from the community! Ask questions, share prompts, discuss benchmarks, get hyped about the latest and greatest model releases! Enjoy talking about our awesome hobby.
As ambassadors of the self-hosting machine learning community, we strive to support each other and share our enthusiasm in a positive, constructive way.
Rules:
Rule 1 - No harassment or personal character attacks on community members. I.e. no name-calling, no generalizing about entire groups of people that make up our community, no baseless personal insults.
Rule 2 - No comparing artificial intelligence/machine learning models to cryptocurrency. I.e. no comparing the usefulness of models to that of NFTs, no claiming the resource usage required to train a model is anything close to that of maintaining a blockchain or mining crypto, no implying it's just a fad/bubble that will leave people with nothing of value when it bursts.
Rule 3 - No comparing artificial intelligence/machine learning to simple text prediction algorithms. I.e. statements such as "LLMs are basically just simple text prediction like what your phone keyboard autocorrect uses, and they're still using the same algorithms since <over 10 years ago>."
Rule 4 - No implying that models are devoid of purpose or potential for enriching people's lives.
Thanks for the advice. I'll see how much I can squeeze out of the new rig, especially with EXL models and different frameworks.
I was already eyeing it, but I remember the context being memory-greedy because it's a multimodal model, while Qwen3 was just way out of the Steam Deck's capabilities. Now it's just a matter of assembling the rig and getting to tinkering.
Thanks again for your time and availability :-)
No, it's super efficient! I can run 27B's full 128K on my 3090, easy.
But you have to use the base llama.cpp server. kobold.cpp doesn't seem to support the sliding window attention (last I checked, like two weeks ago), so even a small context takes up a ton of memory there.
And the image input part is optional. Delete the mmproj file, and it won't load.
There are all sorts of engine quirks like this, heh, it really is impossible to keep up with.
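If it helps once the rig is up: llama.cpp's llama-server exposes an OpenAI-compatible /v1/chat/completions endpoint, so with Gemma 3 loaded you can poke it from any script or UI. A minimal sketch, assuming the default port and a made-up model name (adjust both to however you actually launch it):

```python
# Minimal sketch: query a local llama-server over its OpenAI-compatible endpoint.
# The port (8080) is the usual default and the model name is a placeholder --
# most local servers ignore/echo that field anyway.
import requests

resp = requests.post(
    "http://127.0.0.1:8080/v1/chat/completions",
    json={
        "model": "gemma-3-27b",  # placeholder name, assumed
        "messages": [
            {"role": "user", "content": "Summarize the plot so far in two sentences."}
        ],
        "max_tokens": 256,
    },
    timeout=600,
)
print(resp.json()["choices"][0]["message"]["content"])
```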
Oh OK, that changes a lot of things then :-). I think I'll finally have to graduate to something a little less guided than kobold.cpp. Time to read llama.cpp's and exllama's docs, I guess.
Thanks for the tips.
The LLM “engine” is mostly detached from the UI.
kobold.cpp is actually pretty great, and you can still use it with TabbyAPI (what you run for exllama) and the llama.cpp server.
I personally love this for writing and testing though:
https://github.com/lmg-anon/mikupad
And Open Web UI for more general usage.
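To make the "engine is detached from the UI" point concrete: llama-server, TabbyAPI, and kobold.cpp all speak (roughly) the same OpenAI-style API, so a client or UI only needs a different base URL to switch engines. A rough sketch; the ports below are just the usual defaults and are assumptions, so check your own launch flags:

```python
# Sketch of the engine/UI decoupling: each backend serves the same OpenAI-style
# API, so swapping engines is just swapping the base URL. Ports are assumed defaults.
from openai import OpenAI

BACKENDS = {
    "llama.cpp": "http://127.0.0.1:8080/v1",   # llama-server
    "tabbyapi":  "http://127.0.0.1:5000/v1",   # exllama via TabbyAPI
    "koboldcpp": "http://127.0.0.1:5001/v1",   # kobold.cpp's OpenAI-compatible route
}

client = OpenAI(base_url=BACKENDS["llama.cpp"], api_key="not-needed-locally")
reply = client.chat.completions.create(
    model="whatever-is-loaded",  # most local servers ignore this field
    messages=[{"role": "user", "content": "Hello!"}],
)
print(reply.choices[0].message.content)
```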
There’s a big backlog of poorly documented knowledge too, heh, just ask if you’re wondering how to cram a specific model in. But the “gist” of the optimal engine rules (restated as a little code sketch after the list) is:
For MoE models (like Qwen3 30B), try ik_llama.cpp, which is a fork specifically optimized for big MoEs partially offloaded to CPU.
For Gemma 3 specifically, use the regular llama.cpp server since it seems to be the only thing supporting the sliding window attention (which makes long context easy).
For pretty much anything else, if it’s supported by exllamav3 and you have a 3060, it's optimal to use that (via its server, which is called TabbyAPI). And you can use its quantized cache (try Q6/5) to easily get long context.
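And purely as an illustration, here's that cheatsheet restated as a tiny helper. The function and its rules are just my paraphrase of the advice above, not anything these projects actually ship:

```python
# Hypothetical "cheatsheet as code" -- a restatement of the engine-picking advice,
# not an API from llama.cpp, ik_llama.cpp, or TabbyAPI.
def pick_engine(model_name: str, is_moe: bool) -> str:
    """Rough routing rules for which local engine to reach for."""
    name = model_name.lower()
    if is_moe:
        # Big MoEs partially offloaded to CPU: the ik_llama.cpp fork
        return "ik_llama.cpp"
    if "gemma-3" in name or "gemma3" in name:
        # Only regular llama.cpp currently handles Gemma 3's sliding window attention
        return "llama.cpp (llama-server)"
    # Everything else exllamav3 supports: TabbyAPI, with a Q6/Q5 quantized cache
    return "exllamav3 via TabbyAPI (quantized KV cache)"

print(pick_engine("Qwen3-30B-A3B", is_moe=True))   # -> ik_llama.cpp
print(pick_engine("gemma-3-27b", is_moe=False))    # -> llama.cpp (llama-server)
```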
I'll have to check out mikupad. For the most part I've been using SillyTavern with a generic assistant card because it looked like it would give me plenty of room to tweak stuff, even if it's not technically meant for the more traditional assistant use case.
Thanks for the cheatsheet, it will come in really handy once I manage to set everything up. Most likely I'll use podman to make a container for each engine.
As for the hardware side: the ThinkCentre arrived today, but the card still has to arrive. Unfortunately I can't really ask more questions until I can set it all up and see what goes wrong / get a sense of what I haven't understood.
I'll keep you guys updated with the whole case modding stuff. I think it will be pretty fun to see come along.
Thanks for everything.
IDK about Windows, but on Linux I find it easier to just make a Python venv for each engine. There's less CPU/RAM(/GPU?) overhead that way anyway, and it's best to pull bleeding-edge git versions of the engines. As an added benefit, the Python that ships with some OSes (like CachyOS) is more optimized than what podman would pull.
Podman is great if security is a concern, though, i.e. if you don't fully trust the code of the engine runtimes.
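For what it's worth, the per-engine venv idea needs nothing beyond the standard library. A minimal sketch, with made-up engine names and a made-up directory layout:

```python
# Minimal sketch of "one venv per engine" using only the standard library.
# The engine list and base directory are assumptions -- adjust to what you actually run.
import venv
from pathlib import Path

ENGINES = ["llama-cpp-tools", "ik-llama-cpp-tools", "tabbyapi"]  # hypothetical names
BASE = Path.home() / "llm-envs"

for engine in ENGINES:
    env_dir = BASE / engine
    if not env_dir.exists():
        print(f"creating venv for {engine} at {env_dir}")
        venv.create(env_dir, with_pip=True)  # stdlib venv builder
# Afterwards: activate each env and pip-install that engine's bleeding-edge git checkout.
```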
ST is good, though its sampling presets are kinda funky and I don't use it personally.