Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Rules:
- Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.
- No spam posting.
- Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.
- Don't duplicate the full text of your blog or github here. Just post the link for folks to click.
- Submission headline should match the article title (don't cherry-pick information from the title to fit your agenda).
- No trolling.
Resources:
- selfh.st Newsletter and index of selfhosted software and apps
- awesome-selfhosted software
- awesome-sysadmin resources
- Self-Hosted Podcast from Jupiter Broadcasting
Any issues on the community? Report it using the report flag.
Questions? DM the mods!
Be specific!
What model size (or specific model) are you looking to host?
At what context length?
What kind of speed (token/s) do you need?
Is it just for you, or for many people? How many? In other words, does the serving need to be parallel?
It depends, but the sweet-spot option for a self-hosted rig, OP, is probably:
One 5090 or A6000 ADA GPU. Or maybe 2x 3090s/4090s, underclocked.
A cost-effective EPYC CPU/Mobo
At least 256 GB DDR5
Now run ik_llama.cpp, and you can serve DeepSeek 671B faster than you can read, without burning your house down with a rack of H200s: https://github.com/ikawrakow/ik_llama.cpp
It will also do for dots.llm, Kimi, and pretty much any of the other mega MoEs du jour.
But there are all sorts of niches. In a nutshell, don't think "How much do I need for AI?" but "What is my target use case, what model is good for that, and what's the best runtime for it?" Then build your rig around that.
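To make that concrete, here's a rough back-of-the-envelope sketch (not a benchmark) for checking whether a quantized MoE fits a rig and what bandwidth-bound decode speed to roughly expect. The bits-per-weight, bandwidth, and capacity numbers below are illustrative assumptions, not measurements:

```python
# Rough sizing sketch for a quantized MoE on a hybrid CPU/GPU rig.
# All concrete numbers here are illustrative assumptions, not measurements.

def fits_and_speed(total_params_b, active_params_b, bits_per_weight,
                   ram_gb, vram_gb, eff_bandwidth_gb_s):
    """Return (weights_gb, fits, rough decode tokens/s).

    Decode is approximated as memory-bandwidth-bound: each generated token
    reads roughly the active parameters once.
    """
    weights_gb = total_params_b * bits_per_weight / 8            # GB of weights
    active_gb_per_token = active_params_b * bits_per_weight / 8  # GB read per token
    fits = weights_gb < (ram_gb + vram_gb) * 0.9                 # ~10% headroom
    tokens_per_s = eff_bandwidth_gb_s / active_gb_per_token
    return weights_gb, fits, tokens_per_s

# Example: a DeepSeek-style 671B-total / 37B-active MoE at ~3 bits per weight
# on an assumed EPYC box: 256 GB DDR5 (~300 GB/s effective) plus 48 GB of VRAM.
weights, fits, tps = fits_and_speed(671, 37, 3.0,
                                    ram_gb=256, vram_gb=48,
                                    eff_bandwidth_gb_s=300)
print(f"weights ~{weights:.0f} GB, fits: {fits}, very rough decode ~{tps:.0f} tok/s")
```

Real numbers depend heavily on how much of the model lands in VRAM and on the runtime, but it's a decent first filter before buying anything.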
My target model is Qwen/Qwen3-235B-A22B-FP8. Ideally at its maximum context length of 131K, but I'm willing to compromise. I find it hard to give a concrete t/s answer; let's put it around 50. At max load there would probably be around 8 concurrent users, but those situations will be rare enough that optimizing for a single user is probably more worthwhile.
My current setup is already: Xeon w7-3465X, 128 GB DDR5, 2x 4090.
It gets nice enough performance loading 32B models completely in VRAM, but I'm skeptical that a similar system can run a 671B at anything faster than a snail's pace. I currently run vLLM because it has higher performance with tensor parallelism than llama.cpp, but I shall check out ik_llama.cpp.
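For reference, once something is serving, this is roughly how I'd sanity-check the ~50 t/s and 8-user targets against the OpenAI-compatible endpoint that both vLLM and llama-server expose. The base URL, port, and model name are placeholders:

```python
# Sketch: measure rough tokens/s under N concurrent requests against a local
# OpenAI-compatible endpoint (vLLM or llama.cpp's llama-server).
# The base URL, port, and model name below are placeholders.
import time
from concurrent.futures import ThreadPoolExecutor
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

def one_request(_):
    start = time.time()
    resp = client.chat.completions.create(
        model="Qwen3-235B-A22B",  # whatever name the server exposes
        messages=[{"role": "user", "content": "Explain KV caching in one paragraph."}],
        max_tokens=256,
    )
    elapsed = time.time() - start
    return resp.usage.completion_tokens / elapsed  # per-request decode tok/s

concurrency = 8
with ThreadPoolExecutor(max_workers=concurrency) as pool:
    rates = list(pool.map(one_request, range(concurrency)))

print(f"per-request tok/s: {[round(r, 1) for r in rates]}")
print(f"aggregate tok/s:   {sum(rates):.1f}")
```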
Ah, here we go:
https://huggingface.co/ubergarm/Qwen3-235B-A22B-GGUF
Ubergarm is great. See this part in particular: https://huggingface.co/ubergarm/Qwen3-235B-A22B-GGUF#quick-start
You will need to modify the syntax for 2x GPUs. I'd recommend starting with an f16/f16 K/V cache at 32K (to see if that's acceptable, as then there's no dequantization compute overhead), and try not to go lower than q8_0/q5_1 (as the V cache is more amenable to quantization).
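For a feel of why the cache dtype matters at 32K, here's a rough sketch of the K/V cache math. The per-element sizes come from GGML's block formats; the layer/head counts are what I recall for Qwen3-235B-A22B, so treat them as assumptions and check the model config:

```python
# Rough K/V cache size estimate per sequence. Bytes-per-element for q8_0 and
# q5_1 come from GGML's block layouts (34 bytes per 32 values and 24 bytes per
# 32 values respectively). The model dimensions are assumptions for
# Qwen3-235B-A22B and should be checked against its config.
BYTES_PER_ELEM = {"f16": 2.0, "q8_0": 34 / 32, "q5_1": 24 / 32}

def kv_cache_gb(ctx, n_layers, n_kv_heads, head_dim, k_type, v_type):
    elems = ctx * n_layers * n_kv_heads * head_dim  # per K (and per V)
    total_bytes = elems * (BYTES_PER_ELEM[k_type] + BYTES_PER_ELEM[v_type])
    return total_bytes / 1e9

# Assumed dims: 94 layers, 4 KV heads (GQA), head_dim 128.
for k, v in [("f16", "f16"), ("q8_0", "q5_1")]:
    print(f"K={k:5s} V={v:5s} @32K: {kv_cache_gb(32768, 94, 4, 128, k, v):.2f} GB")
```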
Thanks! I'll go check it out.
One last thing: I've heard mixed things about 235B, so there might be a smaller, more optimal LLM for whatever you do.
For instance, Kimi 72B is quite a good coding model: https://huggingface.co/moonshotai/Kimi-Dev-72B
It might fit in vLLM (as an AWQ) with 2x 4090s, and it would easily fit in TabbyAPI as an exl3: https://huggingface.co/ArtusDev/moonshotai_Kimi-Dev-72B-EXL3/tree/4.25bpw_H6
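If you go the TabbyAPI route, grabbing that exact quant branch is just a revision pin. A minimal sketch with huggingface_hub (the local directory is arbitrary):

```python
# Minimal sketch: download the 4.25bpw_H6 branch of the EXL3 quant.
# The local directory is arbitrary; repo and revision come from the link above.
from huggingface_hub import snapshot_download

path = snapshot_download(
    repo_id="ArtusDev/moonshotai_Kimi-Dev-72B-EXL3",
    revision="4.25bpw_H6",          # quant branch, not main
    local_dir="models/Kimi-Dev-72B-exl3-4.25bpw",
)
print("downloaded to", path)
```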
As another example, I personally use Nvidia Nemotron models for STEM stuff (other than coding). They rock at that, specifically, and are weaker elsewhere.
What do I need to run Kimi? Does it have apple silicon compatible releases? It seems promising.
Depends. You're in luck, as someone made a DWQ (which is the optimal way to run it on Macs, and should work in LM Studio): https://huggingface.co/mlx-community/Kimi-Dev-72B-4bit-DWQ/tree/main
It's chonky, though. The weights alone are like 40GB, so assume a 50GB VRAM allocation for some context. I'm not sure what Macs that equates to... 96GB? Can the 64GB one allocate enough?
Otherwise, the requirement is basically a 5090. You can stuff it into 32GB as an exl3.
Note that it is going to be slow on Macs, being a dense 72B model.
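And if you'd rather skip LM Studio, the DWQ above also loads with a few lines of mlx-lm. A sketch, with a placeholder prompt and token budget (exact generate() kwargs can shift between mlx-lm versions):

```python
# Hedged sketch: run the 4-bit DWQ with mlx-lm on Apple silicon.
# The prompt and max_tokens below are placeholders.
from mlx_lm import load, generate

model, tokenizer = load("mlx-community/Kimi-Dev-72B-4bit-DWQ")
text = generate(model, tokenizer,
                prompt="Write a Python function that parses an nginx access log.",
                max_tokens=512, verbose=True)
```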
Good! An MoE.
I can tell you from experience that all Qwen models are terrible past 32K. What's more, to go over 32K you have to run them in a special "mode" (YaRN) that degrades performance under 32K. This is particularly bad in vLLM, as it does not support dynamic YaRN scaling.
Also, you lose a lot of quality with FP8/AWQ quantization unless the model is natively FP8 (like DeepSeek). Exllama and ik_llama.cpp quants are much higher quality, and their low-batch performance is still quite good. Also, vLLM has no good K/V cache quantization (its FP8 destroys quality), while llama.cpp's is good and exllama's is excellent, making vLLM less than ideal above ~16K. Its niche is highly parallel, low-context serving.
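For context on that YaRN "mode": with Qwen-style models you enable it statically by adding a rope_scaling block to the model config (or the equivalent engine flag). Here's a sketch of what that block looks like; key names ("rope_type" vs the older "type") vary by framework version, so check the model card before using it:

```python
# Sketch of a static YaRN rope-scaling entry for a Qwen3-style config.json.
# Key names and values should be checked against the model card; factor 4.0
# stretches the ~32K native window toward ~131K.
import json

rope_scaling = {
    "rope_type": "yarn",
    "factor": 4.0,
    "original_max_position_embeddings": 32768,
}
print(json.dumps({"rope_scaling": rope_scaling}, indent=2))
```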
Honestly, you should be set now. I can get 16+ t/s with high context Hunyuan 70B (which is 13B active) on a 7800 CPU/3090 GPU system with ik_llama.cpp. That rig (8 channel DDR5, and plenty of it, vs my 2 channels) should at least double that with 235B, with the right quantization, and you could speed it up by throwing in 2 more 4090s. The project is explicitly optimized for your exact rig, basically :)
It is poorly documented, though. The general strategy is to keep the "core" of the LLM on the GPUs while offloading the less compute-intensive experts to RAM, and it takes some tinkering. There's even a project that tries to calculate it automatically:
https://github.com/k-koehler/gguf-tensor-overrider
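To make the "core on GPU, experts in RAM" split concrete, here's a hedged sketch of an ik_llama.cpp llama-server launch that overrides the MoE expert tensors onto the CPU. The model path, context size, split, and the exact tensor-name regex are assumptions; the right pattern depends on the GGUF's tensor names, which is exactly what the project above tries to work out for you:

```python
# Hedged sketch: launch ik_llama.cpp's llama-server with attention/shared
# weights on the GPUs and the MoE expert FFN tensors overridden to CPU RAM.
# The model path, context size, split, and regex below are assumptions.
import subprocess

cmd = [
    "./llama-server",
    "-m", "models/Qwen3-235B-A22B-IQ4_KS.gguf",  # placeholder quant/path
    "-c", "32768",                                # context length
    "-ngl", "99",                                 # offload all layers to GPU...
    "-ot", r"\.ffn_.*_exps\.=CPU",                # ...then push expert FFNs to CPU
    "-ts", "1,1",                                 # split evenly across 2 GPUs
    "-ctk", "f16", "-ctv", "f16",                 # K/V cache types
    "--host", "0.0.0.0", "--port", "8080",
]
subprocess.run(cmd, check=True)
```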
ik_llama.cpp can also use special GGUFs that regular llama.cpp can't take, for faster inference in less space. I'm not sure if one for 235B is floating around on Hugging Face; I will check.
Side note: I hope you can see why I asked. The web of engine strengths/quirks is extremely complicated, heh, and the answer could be totally different for different models.