LocalLLaMA
I have an unused Dell OptiPlex 7010 I wanted to use as the base for an inference rig.

My idea was to get a 3060, a PCIe riser and a 500 W power supply just for the GPU. Mechanically speaking, I had the idea of making a backpack of sorts on the side panel to fit both the GPU and the extra power supply, since unfortunately it's an SFF machine.

What's making me wary of going through with it is the specs of the 7010 itself: it's a DDR3 system with a 3rd-gen i7-3770. I have the feeling that as soon as it ends up offloading some of the model into system RAM, it's going to slow down to a crawl. (Using koboldcpp, if that matters.)

Do you think it's even worth going through with?

Edit: I may have found a ThinkCentre that uses DDR4, which I can buy if I manage to sell the 7010. Though I still don't know if it will be good enough.

[–] [email protected] 2 points 1 day ago* (last edited 1 day ago) (1 children)

> I have the feeling that as soon as it ends up offloading some of the model into system RAM, it's going to slow down to a crawl.

Then don't offload! Since it's a 3000-series card, you can run an exl3 with a really tight quant.

For instance, Mistral 24B will fit in 12GB with no offloading at 3bpw, somewhere in the quality ballpark of a Q4 GGUF: https://cdn-uploads.huggingface.co/production/uploads/6383dc174c48969dcf1b4fce/tfIK6GfNdH1830vwfX6o7.png

It's especially good for long context, since exllama's KV cache quantization is so good.

You can still use kobold.cpp as the frontend, but you'll have to host the model via an external endpoint like TabbyAPI. Or you can use croco.cpp (a fork of kobold.cpp) with your own ik_llama.cpp trellis-quantized GGUF (though you'll have to make that yourself since they aren't common... it's complicated, heh).
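If you do end up with TabbyAPI, any OpenAI-style client can talk to it. Here's a rough sketch with `requests`, assuming the default port and an API key from its token config (adjust both for your install, and the model name is just a placeholder):

```python
import requests

# Assumed local TabbyAPI endpoint and key; both depend on your config.
BASE_URL = "http://localhost:5000/v1"
API_KEY = "your-tabby-api-key"

resp = requests.post(
    f"{BASE_URL}/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "Mistral-Small-24B-exl3-3bpw",  # placeholder name for whatever you load
        "messages": [{"role": "user", "content": "Hello from the OptiPlex rig!"}],
        "max_tokens": 128,
        "temperature": 0.7,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```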

Point being that simply having an Ampere (RTX 3000-series) card can increase efficiency massively over a baseline GGUF.
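To put very rough numbers on why offloading to DDR3 crawls, here's a back-of-envelope sketch; the bandwidth figures are ballpark assumptions, and real speeds vary with quant format and overlap:

```python
# Rough token-generation ceiling: each generated token reads every active
# weight once, so decode speed is roughly bandwidth / bytes_read_per_token.

gpu_bw_gb_s = 360.0   # RTX 3060 12GB GDDR6, approximate
ddr3_bw_gb_s = 25.6   # dual-channel DDR3-1600, approximate

model_gb = 10.0       # e.g. a ~24B model at ~3 bpw, very roughly

def tokens_per_second(weights_gb, bandwidth_gb_s):
    """Upper bound on decode speed if all weights stream from one memory pool."""
    return bandwidth_gb_s / weights_gb

print(f"All in VRAM:     ~{tokens_per_second(model_gb, gpu_bw_gb_s):.0f} tok/s ceiling")

# If 4 GB spills to system RAM, the CPU-side layers alone cap you at:
spilled_gb = 4.0
print(f"4 GB offloaded:  ~{tokens_per_second(spilled_gb, ddr3_bw_gb_s):.0f} tok/s ceiling "
      "for the offloaded layers (the slowest link dominates)")
```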

[–] [email protected] 2 points 1 day ago* (last edited 1 day ago) (1 children)

I'll have to check exllama once I build the system, to see if it can fit a 24B model in 12 GB. That should give me some leeway for 13B ones. Though I feel like I'll need to quantize to exl3 myself for the models I use. Worth a try in a container though.

Thanks for the tip.

[–] [email protected] 2 points 1 day ago* (last edited 1 day ago) (1 children)

You can definitely quantize exl3s yourself; the process is VRAM-light (albeit time-intensive).

What 13B are you using? FYI the old Llama2 13B models don't use GQA, so even their relatively short 4096-token context takes up a lot of VRAM. Newer 12Bs and 14Bs are much more efficient (and much smarter TBH).
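To put rough numbers on the GQA difference (layer and head counts below are from memory, so treat them as approximate):

```python
def kv_cache_gib(n_layers, n_kv_heads, head_dim, ctx_len, bytes_per_elem=2):
    """Approximate KV cache size: 2 tensors (K and V) per layer per KV head, fp16."""
    total = 2 * n_layers * n_kv_heads * head_dim * ctx_len * bytes_per_elem
    return total / 1024**3

# Llama-2 13B: no GQA, so every one of its 40 attention heads keeps its own KV.
print(f"Llama-2 13B @ 4096 ctx:  {kv_cache_gib(40, 40, 128, 4096):.2f} GiB")

# A GQA 12B like Nemo (assumed 40 layers, 8 KV heads): far smaller cache.
print(f"Nemo 12B    @ 4096 ctx:  {kv_cache_gib(40, 8, 128, 4096):.2f} GiB")
print(f"Nemo 12B    @ 16384 ctx: {kv_cache_gib(40, 8, 128, 16384):.2f} GiB")
```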

[–] [email protected] 2 points 9 hours ago (1 children)

Right now I'm hopping between Nemo finetunes to see how they fare. I think I only ever used one 8B model from Llama2; the rest has been all Llama 3 and maybe some Solar-based ones. Unfortunately I have yet to properly dig into the more technical side of LLMs due to time constraints.

> the process is VRAM-light (albeit time-intensive)

So long as it's not interactive, I can always run it at night and have it shut off the rig when it's done. Power here is cheaper at night anyways :-)
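Something like this should do it, assuming a Linux box that lets the script power off without a password prompt (the conversion command itself is just a placeholder):

```python
import subprocess

# Placeholder for the actual exl3 conversion command; swap in the real
# script and arguments for whatever model is being quantized.
convert_cmd = ["python", "convert.py", "-i", "model_in", "-o", "model_out"]

result = subprocess.run(convert_cmd)

# Only power off if the job finished cleanly, so a crash leaves the rig up for debugging.
if result.returncode == 0:
    subprocess.run(["systemctl", "poweroff"])
else:
    print(f"Conversion failed with exit code {result.returncode}; leaving the rig on.")
```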

Thanks for the info (and sorry for the late response, work + cramming for exams turned out to be more brutal than expected).

[–] [email protected] 2 points 5 hours ago* (last edited 4 hours ago)

Yeah it’s basically impossible to keep up with new releases, heh.

Anyway, Gemma 12B is really popular now, and TBH much smarter than Nemo. You can grab a special "QAT" Q4_0 from Google (that works in kobold.cpp, but fits much more context with base llama.cpp) with basically the same performance as the unquantized model; I'd highly recommend that.
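As one way to use the extra context outside kobold.cpp, here's a minimal sketch with the llama-cpp-python bindings; the GGUF filename and context length are placeholders to match whatever you download and your VRAM headroom:

```python
from llama_cpp import Llama  # llama-cpp-python bindings over base llama.cpp

# Model filename is a placeholder; point it at the downloaded QAT Q4_0 GGUF.
llm = Llama(
    model_path="gemma-12b-it-qat-q4_0.gguf",
    n_ctx=16384,       # the QAT quant leaves room for much more context than default
    n_gpu_layers=-1,   # keep every layer on the 3060, no offload to system RAM
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize why GQA shrinks the KV cache."}],
    max_tokens=200,
)
print(out["choices"][0]["message"]["content"])
```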

I'd also highly recommend trying 24B when you get the rig! It’s so much better than Nemo, even more than the size would suggest, so it should still win out even if you have to go down to 2.9 bpw, I’d wager.

Qwen3 30B A3B is also popular now, and would work on your 3770 and kobold.cpp with no changes (though there are speed gains to be had with the right framework, namely ik_llama.cpp).

One other random thing: some of kobold.cpp's sampling presets are very funky with new models. With anything newer than Llama2, I'd recommend resetting everything to off, then starting with something like 0.4 temp, 0.04 MinP, 0.02/1024 rep penalty and 0.4 DRY, not the crazy high-temp sampling the presets normally use.
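If it helps, here's roughly what those settings look like as a kobold.cpp generate call; the default port (5001) and field names are from memory, so double-check them against your instance's API docs:

```python
import requests

# Sampler settings from above, expressed as a kobold.cpp /api/v1/generate payload.
payload = {
    "prompt": "### User: Hello!\n### Assistant:",
    "max_length": 200,
    "temperature": 0.4,
    "min_p": 0.04,
    "top_p": 1.0,           # effectively off
    "top_k": 0,             # off
    "rep_pen": 1.02,        # the "0.02" rep penalty, expressed as a multiplier
    "rep_pen_range": 1024,
    "dry_multiplier": 0.4,  # DRY strength
}

r = requests.post("http://localhost:5001/api/v1/generate", json=payload, timeout=300)
r.raise_for_status()
print(r.json()["results"][0]["text"])
```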

I can host a specific model/quantization on the kobold.cpp API for you to try if you want, to save tweaking time. Just ask (or PM me, as replies sometimes don't send notifications).

Good luck with exams! No worries about response times, /c/localllama is a slow, relaxed community.