
LocalLLaMA


Welcome to LocalLLaMA! Here we discuss running and developing machine learning models at home. Let's explore cutting-edge open-source neural network technology together.

Get support from the community! Ask questions, share prompts, discuss benchmarks, and get hyped about the latest and greatest model releases! Enjoy talking about our awesome hobby.

As ambassadors of the self-hosting machine learning community, we strive to support each other and share our enthusiasm in a positive, constructive way.


But in all fairness, it's really llama.cpp that supports AMD.

Now looking forward to the Vulkan support!
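For context, llama.cpp's AMD support comes from its HIP (ROCm) backend, which is enabled at build time. A minimal sketch of how such a build might look, assuming a recent llama.cpp checkout; the option names have shifted between releases, and gfx1030 is just an example GPU target, not a recommendation:

    # HIP (ROCm) backend, option names as used around this period:
    cmake -B build -DLLAMA_HIPBLAS=ON -DAMDGPU_TARGETS=gfx1030
    cmake --build build --config Release

    # Vulkan backend, once available in the tool you are using:
    cmake -B build -DLLAMA_VULKAN=ON
    cmake --build build --config Release

The appeal of the Vulkan backend is that it runs on a stock graphics driver and does not need the ROCm stack installed at all.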

[–] [email protected] 5 points 1 year ago (1 children)

That's cool. I've just recently gotten hold of an interesting Ampere system and it's got an AMD card in it. I must give it a spin.

[–] [email protected] 1 points 1 year ago (1 children)

I was sadly stymied by the fact that the ROCm driver install is very much x86-only.

[–] [email protected] 2 points 1 year ago

It's improving very fast. Give it a little time.