this post was submitted on 18 Feb 2025
20 points (100.0% liked)
LocalLLaMA
2748 readers
11 users here now
Welcome to LocalLLama! This is a community to discuss local large language models such as LLama, Deepseek, Mistral, and Qwen.
Get support from the community! Ask questions, share prompts, discuss benchmarks, get hyped at the latest and greatest model releases! Enjoy talking about our awesome hobby.
As ambassadors of the self-hosting machine learning community, we strive to support each other and share our enthusiasm in a positive, constructive way.
founded 2 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
I guess if I get REALLY bored, I might do a fresh install and load up legacy drivers just to see what the performance is like with the old cards. It would be interesting to see how they stack up to the Vega APU.
I'm not going to actually use these cards, just trying them out for the heck of it.
I think you may be able to use a podman container and pass the GPU through. That would certainly be easier than reinstalling.
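If you want to try the container route, here's a rough sketch of what GPU passthrough with podman can look like for an AMD card or APU via ROCm. I'm assuming Ollama's ROCm image as the runtime just for illustration; the volume name and port are placeholders, so adjust everything for your own setup.

```bash
# Pass the AMD GPU devices into the container (ROCm):
#   /dev/kfd is the ROCm compute interface
#   /dev/dri holds the render nodes
podman run -d \
  --device /dev/kfd \
  --device /dev/dri \
  --security-opt seccomp=unconfined \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  --name ollama \
  docker.io/ollama/ollama:rocm
```

Older GPUs may need something like the `HSA_OVERRIDE_GFX_VERSION` environment variable before ROCm will recognize them, and really old cards may not be supported by current ROCm at all, so no guarantees this works with legacy hardware.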