this post was submitted on 06 Sep 2023
LocalLLaMA
Welcome to LocalLLaMA! Here we discuss running and developing machine learning models at home. Let's explore cutting-edge open-source neural network technology together.
Get support from the community! Ask questions, share prompts, discuss benchmarks, and get hyped about the latest and greatest model releases! Enjoy talking about our awesome hobby.
As ambassadors of the self-hosting machine learning community, we strive to support each other and share our enthusiasm in a positive, constructive way.
You might also want to have a look at Koboldcpp or llama.cpp performance with ROCm. LLM inference seems to be constrained mainly by memory bandwidth anyway, not by raw compute performance.
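To make the bandwidth argument concrete, here is a minimal back-of-envelope sketch: during generation, each new token requires streaming roughly the full set of weights through the memory bus once, so bandwidth puts a hard ceiling on tokens per second. The model size and bandwidth figures below are illustrative assumptions, not measured numbers.

```python
# Back-of-envelope: upper bound on generation speed from memory bandwidth alone.
# Producing one token reads (roughly) every weight once, so
# tokens/s <= memory bandwidth / model size.

def max_tokens_per_second(model_size_gb: float, bandwidth_gb_s: float) -> float:
    """Rough upper bound on tokens/s, ignoring compute, cache, and overhead."""
    return bandwidth_gb_s / model_size_gb

# Illustrative numbers: a 7B model quantized to ~4 bits is roughly 3.8 GB,
# and an RX 6800 has about 512 GB/s of memory bandwidth.
print(f"{max_tokens_per_second(3.8, 512):.0f} tokens/s (theoretical ceiling)")
```

Real-world throughput lands well below this ceiling, but the estimate shows why a GPU with more compute but the same memory bandwidth barely moves the needle for single-batch inference.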
Will do. Ty