this post was submitted on 12 Feb 2025
59 points (100.0% liked)
LocalLLaMA
you are viewing a single comment's thread
view the rest of the comments
32GB of VRAM at a consumer price would certainly help. I'm a bit concerned about the memory bandwidth, though; it seems well below what modern Nvidia cards offer. But if it's priced competitively, this could be a good choice for a lot of AI tasks at home, especially LLM inference.
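
For context on why the bandwidth matters: single-batch LLM decoding is memory-bound, since generating each token streams the full model weights from VRAM, so tokens/s is roughly bandwidth divided by model size. Here's a back-of-envelope sketch; the 456 GB/s figure for the new card is an assumed number for illustration, while the 4090 figure is its published spec:

```python
# Back-of-envelope: single-batch LLM decode is memory-bandwidth-bound,
# because each generated token reads the full set of weights from VRAM.

def max_tokens_per_s(bandwidth_gb_s: float, model_size_gb: float) -> float:
    """Rough upper bound on decode speed for a bandwidth-bound GPU."""
    return bandwidth_gb_s / model_size_gb

model_gb = 18.0  # e.g. a ~32B-parameter model quantized to ~4 bits/weight

for name, bw_gb_s in [
    ("hypothetical 32GB card @ 456 GB/s", 456.0),  # assumed figure
    ("RTX 4090 @ 1008 GB/s", 1008.0),              # published spec
]:
    print(f"{name}: <= {max_tokens_per_s(bw_gb_s, model_gb):.0f} tokens/s")
```

Under those assumptions the lower-bandwidth card tops out around 25 tokens/s on a model that fills most of its VRAM, versus roughly double that on a 4090; still quite usable for home inference if the price is right.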