Local LLMs run pretty well on a 12GB RTX 2060. They're pretty cheap, if a bit rare now.
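For a rough idea of what fits in 12GB, here's the back-of-envelope math I use. It's just a rule of thumb, not an exact formula; real usage depends on the quant format and context length:

```python
# Rough VRAM estimate for a quantized model (rule of thumb, not exact):
# weights take roughly bits/8 bytes per parameter, plus ~15% for the
# KV cache and runtime overhead at modest context sizes.
def approx_vram_gb(params_billion: float, quant_bits: float = 4.5, overhead: float = 1.15) -> float:
    weight_gb = params_billion * quant_bits / 8  # e.g. 7B at ~4.5 bits ≈ 3.9 GB of weights
    return weight_gb * overhead

for size in (7, 13, 34):
    print(f"{size}B @ ~4-bit: ~{approx_vram_gb(size):.1f} GB")
# 7B and 13B land comfortably under 12 GB; 34B spills over and has to offload.
```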
So 12GB is what you need?
Asking because my 4GB card clearly doesn't cut it 🙍🏼‍♀️
A 4GB card can run smol models; bigger ones need an Nvidia card plus lots of system RAM to offload into, and performance gets proportionally worse the more of the model spills out of VRAM into DRAM (rough sketch of how that offloading looks below).
Big models work great on MacBooks, or on AMD GPUs and APUs with unified memory.
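A minimal sketch of that VRAM/DRAM split, assuming llama-cpp-python is installed (`pip install llama-cpp-python`); the model path is a placeholder, any GGUF file works:

```python
from llama_cpp import Llama

# n_gpu_layers controls how many transformer layers get offloaded to VRAM;
# layers that don't fit stay in system RAM and run on the CPU, which is
# where the "proportionally worse" slowdown comes from.
llm = Llama(
    model_path="./models/some-7b-model.Q4_K_M.gguf",  # placeholder path
    n_gpu_layers=20,  # raise until VRAM runs out; -1 offloads everything
    n_ctx=2048,       # context window; the KV cache also eats VRAM
)

out = llm("Explain unified memory in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```

On unified-memory machines (Apple Silicon, AMD APUs) the split mostly goes away, since the GPU and CPU share the same pool of memory.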