this post was submitted on 12 Feb 2025
59 points (100.0% liked)

LocalLLaMA


Welcome to LocalLLaMA! Here we discuss running and developing machine learning models at home. Let's explore cutting-edge open-source neural network technology together.

Get support from the community! Ask questions, share prompts, discuss benchmarks, and get hyped about the latest and greatest model releases! Enjoy talking about our awesome hobby.

As ambassadors of the self-hosting machine learning community, we strive to support each other and share our enthusiasm in a positive, constructive way.


One might question why an RX 9070 card would need so much memory, but increased capacity can serve purposes beyond gaming, such as running Large Language Models (LLMs) for AI workloads. It's also worth noting that RX 9070 cards will use 20 Gbps GDDR6 memory, much slower than the RTX 50 series with its 28-30 Gbps GDDR7 variants. So while capacity may increase, bandwidth likely won't.
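For LLM inference that bandwidth gap matters directly: token generation is memory-bound, since producing each token effectively re-reads the whole model from VRAM. A back-of-envelope sketch (the 256-bit bus widths and model size are assumptions for illustration, not confirmed specs):

```python
# Back-of-envelope: token generation is memory-bound, so the
# tokens/s ceiling is roughly bandwidth divided by model size.
# The 256-bit bus widths below are assumptions for illustration.
def bandwidth_gb_s(pin_speed_gbps: float, bus_bits: int) -> float:
    return pin_speed_gbps * bus_bits / 8  # GB/s

rx_9070 = bandwidth_gb_s(20, 256)  # 640 GB/s (20 Gbps GDDR6)
rtx_50  = bandwidth_gb_s(30, 256)  # 960 GB/s (30 Gbps GDDR7 variant)

model_gb = 4.7  # e.g. an ~8B model quantized to 4 bits (illustrative)
print(f"RX 9070 ceiling: ~{rx_9070 / model_gb:.0f} tok/s")
print(f"RTX 50  ceiling: ~{rtx_50 / model_gb:.0f} tok/s")
```

Doubling the capacity lets you fit bigger models, but the tokens-per-second ceiling stays put.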

[email protected] 12 points 1 month ago

One thing that might also require more memory in the future is doing both at the same time: keeping an LLM loaded while gaming.
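A rough budget makes the squeeze concrete (every number below is an illustrative guess, not a measurement):

```python
# Rough VRAM budget for gaming with an LLM resident; all figures
# here are illustrative assumptions, not measurements.
game_vram  = 10.0  # a modern title at 1440p/4K can take ~8-12 GB
model_vram = 4.7   # ~7-8B model, 4-bit quantized
kv_cache   = 1.0   # grows with context length
print(f"combined: ~{game_vram + model_vram + kv_cache:.1f} GB")  # ~15.7 GB
```

That's already uncomfortable on a 16 GB card once the game spikes, which is where extra capacity would pay off.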

[email protected] 2 points 1 month ago

There's a Skyrim mod that can optionally use a local LLM to chat with NPCs. You can also add speech-to-text and TTS to speak with the NPCs directly.

However, there's too much lag for it to feel like a proper experience.
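For anyone curious where the lag comes from, here's a minimal sketch of that round trip with per-stage timing. It assumes openai-whisper and pyttsx3 plus a llama.cpp server on localhost:8080; that's a stand-in stack, not necessarily what the mod actually uses.

```python
# Sketch of the STT -> LLM -> TTS round trip with per-stage timing.
# Assumes openai-whisper and pyttsx3 are installed and a llama.cpp
# server is listening on localhost:8080 (a stand-in stack, not
# necessarily what the mod actually uses).
import time
import requests
import whisper   # pip install openai-whisper
import pyttsx3   # pip install pyttsx3

stt = whisper.load_model("base")
tts = pyttsx3.init()

def npc_reply(wav_path: str) -> None:
    t0 = time.perf_counter()
    text = stt.transcribe(wav_path)["text"]       # player speech -> text
    t1 = time.perf_counter()
    reply = requests.post(
        "http://localhost:8080/completion",
        json={"prompt": f"The guard replies to: {text}", "n_predict": 64},
    ).json()["content"]                           # text -> NPC response
    t2 = time.perf_counter()
    tts.say(reply)                                # response -> NPC voice
    tts.runAndWait()
    t3 = time.perf_counter()
    print(f"STT {t1-t0:.1f}s | LLM {t2-t1:.1f}s | TTS {t3-t2:.1f}s")
```

Each stage adds seconds on consumer hardware, and they run back to back, which is exactly where the "proper experience" falls apart.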

[email protected] 3 points 1 month ago

I wasn't thinking so much about use in games as about other software on the system that keeps running while a game is in use. The game can coordinate its own resource usage, but independent software has a harder time with that.
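One thing such software could do is check free VRAM before loading and back off when a game is hogging the card. A sketch using pynvml (NVIDIA only; AMD would need something like rocm-smi, and the model-size and headroom numbers are assumptions):

```python
# Sketch: check free VRAM before loading a model and back off when a
# game is hogging the card. Uses pynvml (NVIDIA only; AMD would need
# e.g. rocm-smi). MODEL_GB and HEADROOM are illustrative assumptions.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)
free_gb = pynvml.nvmlDeviceGetMemoryInfo(handle).free / 1e9
pynvml.nvmlShutdown()

MODEL_GB = 4.7  # assumed footprint of the quantized model
HEADROOM = 1.0  # slack so the game doesn't start stuttering

if free_gb > MODEL_GB + HEADROOM:
    print(f"{free_gb:.1f} GB free: load the model fully on GPU")
else:
    print(f"{free_gb:.1f} GB free: offload fewer layers or run on CPU")
```

With more VRAM to spare, that kind of polling-and-yielding logic becomes much less fragile.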