I figured most datacenter customers wouldn't be running PCIe cards; they'd be running OAM for higher density.
LocalLLaMA
Welcome to LocalLLaMA! Here we discuss running and developing machine learning models at home. Let's explore cutting-edge open-source neural network technology together.
Get support from the community! Ask questions, share prompts, discuss benchmarks, get hyped about the latest and greatest model releases! Enjoy talking about our awesome hobby.
As ambassadors of the self-hosting machine learning community, we strive to support each other and share our enthusiasm in a positive, constructive way.
Rules:
Rule 1 - No harassment or personal character attacks of community members, i.e. no name-calling, no generalizing entire groups of people that make up our community, no baseless personal insults.
Rule 2 - No comparing artificial intelligence/machine learning models to cryptocurrency, i.e. no comparing the usefulness of models to that of NFTs, no claiming the resource usage required to train a model is anything close to that of maintaining a blockchain or mining crypto, no implying it's just a fad/bubble that will leave people with nothing of value when it bursts.
Rule 3 - No comparing artificial intelligence/machine learning to simple text prediction algorithms, i.e. statements such as "LLMs are basically just simple text prediction like what your phone keyboard autocorrect uses, and they're still using the same algorithms from <over 10 years ago>."
Rule 4 - No implying that models are devoid of purpose or potential for enriching people's lives.
They are a scam.
But:
- Using gaming GPUs in datacenters is in breach of Nvidia's license for the GPU.
- Datacenter is all about batched LLM performance, as the VRAM pools are bigger than the models. In reality, you can get better parallel tokens/s on an H100 than on 2x RTX Pros or a few 5090s, especially with bigger models that take advantage of NVLink.
Well, I wouldn't call them a "scam". They're meant for a different use case. In a datacenter, you also have to pay for rack space and all the servers that accommodate the GPUs. You can either pay for 32 times as many servers full of Radeon 9060 XTs, or you buy H200 cards. Sure, you'll pay 3x as much for the cards themselves. But you'll save on the number of servers and everything that comes with them: hardware cost, space, electricity, air-con, maintenance... Less interconnect makes everything way faster...
Of course, at home different rules apply. And it depends a bit on how many cards you want to run, what kind of workload you have, whether you're fine with AMD or you need CUDA...
Yeah, I should have specified "at home" when saying it's a scam; I honestly doubt the companies buying thousands of B200s for datacenters are even looking at the price tags lmao.
Anyway, the end goal is to run something like Qwen3-235B at FP8. With some very rough napkin math, ~300 GB of VRAM with the cheapest option, the 9060 XT, comes down to €7,126 for 18 cards, which is very affordable. But of course, the fact that this is theoretically possible doesn't mean it will actually work in practice, which is what I'm curious about.
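Laying that napkin math out explicitly (the per-card price, 16 GB per card, and the ~1.2x overhead factor for KV cache/activations are all my own assumptions):

```python
import math

# Rough capacity/cost estimate; every number here is an assumption, not a measurement.
params_billion = 235          # Qwen3-235B parameter count, in billions
bytes_per_param = 1           # FP8 weights -> ~1 byte per parameter
overhead = 1.2                # rough allowance for KV cache, activations, fragmentation
vram_per_card_gb = 16         # assumed Radeon 9060 XT capacity
price_per_card_eur = 396      # assumed street price per card

needed_gb = params_billion * bytes_per_param * overhead   # ~282 GB
cards = math.ceil(needed_gb / vram_per_card_gb)           # -> 18 cards
print(f"~{needed_gb:.0f} GB -> {cards} cards -> ~€{cards * price_per_card_eur}")
```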
The inference engine I'm using, vLLM, supports ROCm, so CUDA shouldn't be strictly required.
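A minimal sketch of what that could look like on the vLLM side (the checkpoint name and parallelism value are placeholders of mine, and whether this behaves well on ROCm across that many consumer cards is exactly the open question):

```python
from vllm import LLM, SamplingParams

# Hypothetical single-box sketch: shard the model across local GPUs with tensor parallelism.
# Note: vLLM typically requires the attention head count to be divisible by the TP size,
# so 18-way TP across 18 cards likely won't work as-is; spreading over several hosts
# would also need pipeline parallelism / a multi-node launcher, which I'm skipping here.
llm = LLM(
    model="Qwen/Qwen3-235B-A22B-FP8",  # assumed FP8 checkpoint
    tensor_parallel_size=8,            # illustrative value, not 18
)
outputs = llm.generate(["Hello"], SamplingParams(max_tokens=32))
print(outputs[0].outputs[0].text)
```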
I think there are some posts out there (on the internet / Reddit / ...) with people building crazy rigs with old 3090s or something. I don't have any experience with that. If I were to run such a large model, I'd use a quantized version and rent a cloud server for that.
And a single computer can only fit so many GPUs. I don't know the exact number; let's say it's 4. So you'd need to buy 5 computers to fit your 18 cards. So add a few thousand dollars, plus a fast network/interconnect between them.
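Very rough numbers for those extra hosts (GPUs per box and per-box cost are pure guesses on my part):

```python
import math

# Add host machines to the GPU-only estimate above; per-box figures are guesses.
cards = 18
gpus_per_host = 4               # assumed cards per box
cost_per_host_eur = 1000        # assumed cost of a cheap multi-slot build
gpu_cost_eur = cards * 396      # GPU-only figure from the earlier estimate, ~€7,128

hosts = math.ceil(cards / gpus_per_host)             # -> 5 boxes
total_eur = gpu_cost_eur + hosts * cost_per_host_eur
print(f"{hosts} hosts -> ~€{total_eur} before fast networking")
```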
I can't make any statement about performance. I'd imagine such a scenario might work for MoE models with an appropriate setup, and for everything else performance would be abysmal. But that's only speculation; we'd need to find people who have actually done this.
Edit: Alternatively, buy an Apple Mac Studio with 512GB of unified RAM. They're fast as well (probably way faster than your idea?) and maybe cheaper. It seems an M3 Ultra Mac Studio with 512GB costs around $10,000; with half that much RAM (256GB), it's only about $7,100.