this post was submitted on 31 Mar 2024
17 points (100.0% liked)
LocalLLaMA
2849 readers
18 users here now
Welcome to LocalLLaMA! Here we discuss running and developing machine learning models at home. Let's explore cutting-edge open-source neural network technology together.
Get support from the community! Ask questions, share prompts, discuss benchmarks, and get hyped about the latest and greatest model releases! Enjoy talking about our awesome hobby.
As ambassadors of the self-hosting machine learning community, we strive to support each other and share our enthusiasm in a positive, constructive way.
founded 2 years ago
you are viewing a single comment's thread
view the rest of the comments
If you try to run a model on a GPU with far more memory than the host system has, you're probably going to run into the problem that nobody anticipated that setup: I'm not sure many execution frameworks can load weights straight from disk into GPU RAM without staging them through host memory first. Also, storage speed for loading the model might be an issue on an SoC that boots off e.g. an SD card.
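For what it's worth, the usual workaround is to stream the weights through host RAM in bounded chunks rather than all at once. Here's a minimal sketch of that idea; the file name `weights.bin` and the raw-fp16 layout are hypothetical assumptions, not any real model format:

```python
# Minimal sketch: stream a large weights file from disk to the GPU in
# fixed-size chunks, so host RAM only ever holds one chunk at a time.
import numpy as np
import torch

CHUNK_ELEMS = 64 * 1024 * 1024  # ~128 MB of fp16 per copy

def stream_to_gpu(path: str, num_elems: int) -> torch.Tensor:
    # np.memmap maps the file into virtual memory; pages are only
    # read from disk as they are actually touched.
    src = np.memmap(path, dtype=np.float16, mode="r", shape=(num_elems,))
    dst = torch.empty(num_elems, dtype=torch.float16, device="cuda")
    for start in range(0, num_elems, CHUNK_ELEMS):
        end = min(start + CHUNK_ELEMS, num_elems)
        chunk = np.array(src[start:end])               # copy one chunk into host RAM
        dst[start:end].copy_(torch.from_numpy(chunk))  # host -> GPU transfer
    return dst

# Hypothetical usage: ~14 GB of fp16 weights onto a big-VRAM card.
gpu_weights = stream_to_gpu("weights.bin", num_elems=7 * 10**9)
```

The full model never sits in host memory this way; peak host usage is roughly one chunk plus the page cache. It also shows why storage speed matters: every byte still has to come off the disk, so an SD-card-class device becomes the bottleneck.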
An eGPU dock should do CUDA just as well as an internal GPU, as far as I know; you would just need the NVIDIA drivers installed on the host.
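If it helps, here's a quick sanity check that the driver and CUDA runtime actually see the card once the dock is connected (the same check works for an internal GPU):

```python
# Quick check that the NVIDIA driver / CUDA runtime can see the (e)GPU.
import torch

if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        print(f"cuda:{i}: {props.name}, {props.total_memory / 2**30:.1f} GiB")
else:
    print("No CUDA device visible -- check `nvidia-smi` and the driver install.")
```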