this post was submitted on 20 Mar 2025

cross-posted from: https://lemmy.world/post/27088416

This is an update to a previous post found at https://lemmy.world/post/27013201


Ollama uses AMD's ROCm library, and many AMD GPUs that are not listed as compatible still work well with it if you force a supported LLVM target.

The original Ollama documentation is wrong here: the override below cannot be set for individual GPUs, only for all or none, as shown at github.com/ollama/ollama/issues/8473
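
If you are not sure which LLVM target (gfx version) your card reports, rocminfo from the ROCm tools will print it; a quick check, assuming rocminfo is installed:

rocminfo | grep gfx

Cards reporting targets such as gfx1031 or gfx1032, for example, typically work with the 10.3.0 override used below, which maps them to the supported gfx1030 target.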

AMD GPU issue fix

  1. Check that your GPU is not already listed as compatible at github.com/ollama/ollama/blob/main/docs/gpu.md#linux-support
  2. Edit the Ollama service with a systemd override. This opens the text editor set in the $SYSTEMD_EDITOR environment variable.
sudo systemctl edit ollama.service
  3. Add the following, then save and exit. You can try different versions as shown at github.com/ollama/ollama/blob/main/docs/gpu.md#overrides-on-linux
[Service]
Environment="HSA_OVERRIDE_GFX_VERSION=10.3.0"
  4. Restart the Ollama service.
sudo systemctl restart ollama
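
To confirm the override took effect, you can check that systemd passed the variable through and look at the Ollama startup logs for the detected GPU (exact log wording varies between versions):

systemctl show ollama --property=Environment
journalctl -u ollama --since "10 minutes ago" | grep -i gfx
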
top 7 comments
[–] [email protected] 3 points 18 hours ago (1 children)

I would run it in a Podman container with the GPU passed through
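
For reference, a minimal sketch of that approach, assuming Ollama's ROCm container image (ollama/ollama:rocm) and the standard ROCm device nodes; the same HSA override can be passed through as an environment variable:

podman run -d --device /dev/kfd --device /dev/dri \
  -e HSA_OVERRIDE_GFX_VERSION=10.3.0 \
  -v ollama:/root/.ollama -p 11434:11434 \
  --name ollama docker.io/ollama/ollama:rocm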

[–] [email protected] 4 points 16 hours ago (2 children)

Why not throw that into a VM with VFIO passthrough, plug the GPU in via an external dock, and, since we're already abstracting shit away for unnecessary complexity and incompatibility, do all of that on Windows?

[–] [email protected] 2 points 14 hours ago (1 children)

Because that is way more complicated?

It is really easy to run ollama in a container.

[–] [email protected] 1 points 8 hours ago (1 children)

Really easy to start running it.

Then everything goes wrong, from configuration to logs to CUDA. And the worst fucking debugging ever.

[–] [email protected] 1 points 2 hours ago (1 children)

On Linux you can download Alpaca. I think it is CPU only but it is simpler.

[–] [email protected] 1 points 2 hours ago

Ollama is simple too, I meant that containers make everything a nightmare to maintain.

[–] [email protected] 2 points 16 hours ago

Nested VMs stay performant about three levels deep, so do that as well.