this post was submitted on 02 Jul 2025
285 points (100.0% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

[–] [email protected] 5 points 5 days ago (1 children)

Local LLMs run pretty well on a 12GB RTX 2060. They're pretty cheap, if a bit rare now.
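
A rough way to see what fits in 12 GB, assuming a 4-bit quant and roughly 20% extra for KV cache and runtime overhead (these are ballpark assumptions, not measurements):

```python
# Back-of-envelope VRAM check for a quantized local model.
# Assumptions: 4-bit quantization, plus ~20% overhead for KV cache,
# CUDA context, and activations.

def fits_in_vram(params_billion: float, vram_gb: float,
                 bits_per_weight: float = 4.0, overhead: float = 1.2) -> bool:
    weights_gb = params_billion * bits_per_weight / 8  # 1B params at 4 bits ~ 0.5 GB
    return weights_gb * overhead <= vram_gb

for size in (7, 13, 24, 70):
    print(f"{size}B 4-bit quant fits in 12 GB: {fits_in_vram(size, 12)}")
```

By this estimate a 7B or 13B 4-bit quant fits comfortably in 12 GB, while 24B and up spills over.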

[–] [email protected] 1 points 4 days ago (1 children)

So 12GB is what you need?

Asking because my 4GB card clearly doesn't cut it 🙍🏼‍♀️

[–] [email protected] 2 points 4 days ago (1 children)

A 4GB card can run smol models; bigger ones require an nvidia and lots of system RAM, and performance gets proportionally worse the more of the model spills out of VRAM into system RAM.
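
That balance can be sketched with rough arithmetic: token generation is largely memory-bandwidth-bound, so whatever share of the weights sits in system RAM drags the whole run down. The bandwidth figures below are assumed ballpark numbers for a 2060-class card and dual-channel DDR4, not benchmarks:

```python
# Rough token-rate estimate for a partially offloaded model.
# Every generated token needs a full pass over the weights, so the
# slow (system RAM) portion quickly dominates the time per token.

VRAM_BW_GBS = 336   # assumed GDDR6 bandwidth, RTX 2060-class
DRAM_BW_GBS = 40    # assumed dual-channel DDR4 bandwidth

def tokens_per_second(model_gb: float, vram_fraction: float) -> float:
    in_vram = model_gb * vram_fraction
    in_dram = model_gb - in_vram
    seconds_per_token = in_vram / VRAM_BW_GBS + in_dram / DRAM_BW_GBS
    return 1 / seconds_per_token

model_gb = 8  # ~13B model at 4-bit
for frac in (1.0, 0.75, 0.5, 0.25):
    print(f"{frac:.0%} of weights in VRAM -> ~{tokens_per_second(model_gb, frac):.1f} tok/s")
```

Even with three quarters of the model still on the GPU, the estimated token rate drops to roughly a third of the all-in-VRAM case.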

[–] [email protected] 4 points 4 days ago

require an nvidia

Big models work great on MacBooks, AMD GPUs, or AMD APUs with unified memory
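
On unified-memory machines there's no VRAM/DRAM split to juggle: the whole pool holds the weights, so a ~40 GB 4-bit 70B quant fits on a 48 GB+ machine. A minimal llama-cpp-python sketch, assuming the package is installed with Metal or ROCm support and that the GGUF path below is a placeholder:

```python
# Minimal llama-cpp-python sketch for a unified-memory machine.
# Assumptions: llama-cpp-python built with Metal (Apple) or ROCm (AMD)
# support, and a 4-bit GGUF file on disk -- the path is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="models/llama-3-70b-instruct.Q4_K_M.gguf",  # hypothetical file
    n_gpu_layers=-1,  # offload every layer; unified memory holds the whole model
    n_ctx=4096,
)

out = llm("Explain unified memory in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```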