this post was submitted on 06 Sep 2023
26 points (93.3% liked)

LocalLLaMA


Welcome to LocalLLaMA! Here we discuss running and developing machine learning models at home. Let's explore cutting-edge open-source neural network technology together.

Get support from the community! Ask questions, share prompts, discuss benchmarks, get hyped at the latest and greatest model releases! Enjoy talking about our awesome hobby.

As ambassadors of the self-hosting machine learning community, we strive to support each other and share our enthusiasm in a positive, constructive way.

founded 2 years ago
How usable are AMD GPUs? (lemmy.dbzer0.com)
submitted 2 years ago* (last edited 2 years ago) by [email protected] to c/[email protected]
 

Heyho, I'm currently on an RTX 3070 but want to upgrade to an RX 7900 XT.

I see that AMD installers exist, but is it all smooth sailing? How well do AMD cards compare to Nvidia in terms of performance?

I'd mainly use oobabooga but would also love to try some other backends.

Anyone here with one of the newer AMD cards that could talk about their experience?

EDIT: To clear things up a little bit: I am on Linux, and I'd say I am quite experienced with it. I know how to handle a card swap, and I know where to get my drivers from. I know about the gaming performance difference between Nvidia and AMD; those are the main reasons I want to switch to AMD. Now I just want to hear from someone who ALSO runs Linux + AMD what their experience with Oobabooga and Automatic1111 is when using ROCm, for example.

[–] [email protected] 3 points 2 years ago (1 children)

Oops. I wasn't looking at the community, just my main feed. OK, so from what I understand, the AMD installer is a bit of a pain on Linux. If you're on Windows it's probably a different story.

[–] [email protected] 1 points 2 years ago (1 children)

I am on Linux, but I can live with a painful install. I wanted to hear whether it performs on par with Nvidia.

[–] [email protected] 1 points 2 years ago* (last edited 2 years ago) (2 children)

Again, apologies for the confusion. I had thought my initial comment was on a gaming community. Here are Puget Systems' benchmarks, and they don't look great: https://www.pugetsystems.com/labs/articles/stable-diffusion-performance-nvidia-geforce-vs-amd-radeon/#Automatic_1111

"Although this is our first look at Stable Diffusion performance, what is most striking is the disparity in performance between various implementations of Stable Diffusion: up to 11 times the iterations per second for some GPUs. NVIDIA offered the highest performance on Automatic 1111, while AMD had the best results on SHARK, and the highest-end GPU on their respective implementations had relatively similar performance."

[–] [email protected] 2 points 2 years ago (2 children)

Sorry, not trying to come at you, just providing a bit of fact-checking. In that link they tested on Windows, which means they'd have to be using DirectML, which is super slow. Did Linus Tech Tips do this? Anyway, the cool kids use ROCm on Linux. Much, much faster.
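For anyone landing on this thread later, the usual ROCm-on-Linux route is to install the ROCm build of PyTorch and point Automatic1111 or Oobabooga at it. A minimal sketch (the ROCm version in the index URL and the RDNA3 override are assumptions based on what was common at the time; check pytorch.org for the current recommended URL):

```shell
# Install a ROCm build of PyTorch (ROCm 5.6 wheel index shown as an
# example; the current URL is listed on https://pytorch.org)
pip install torch torchvision --index-url https://download.pytorch.org/whl/rocm5.6

# RDNA3 cards like the 7900 XT are often reported to need this override
# so ROCm targets the right GPU architecture (assumption, not official docs)
export HSA_OVERRIDE_GFX_VERSION=11.0.0

# Sanity check: ROCm builds of PyTorch reuse the torch.cuda API,
# so this should print True on a working install
python -c "import torch; print(torch.cuda.is_available())"
```

The nice part is that because the ROCm build reuses the `torch.cuda` namespace, most frontends written against CUDA (including Automatic1111 and Oobabooga) run without code changes.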

[–] [email protected] 1 points 2 years ago

Haha, you're not; I definitely stumbled into this. These guys mainly build edit systems for post companies, so they stick to Windows. Good to know about ROCm; got something to read up on.

[–] [email protected] 1 points 2 years ago

Yeah, that was what I was worried about after reading the article; I've heard about the different backends...

Do you have AMD + Linux + Automatic1111 / Oobabooga? Can you give me some real-life feedback? :D

[–] [email protected] 1 points 2 years ago

No worries.

Interesting article. Never heard of SHARK before; seems interesting, then.