nocteb

joined 10 months ago
[–] [email protected] 3 points 2 months ago* (last edited 2 months ago)

Look into setting up the "continue" plugin in VS Code. It supports an Ollama backend and can even do embeddings if set up correctly. That means it will try to select relevant files itself based on your question, which helps keep the prompt size down. Here is a link to get started; you might need to choose smaller models for your card.

https://ollama.com/blog/continue-code-assistant
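For reference, a minimal sketch of what the Continue `config.json` might look like when pointed at a local Ollama instance. The model names here (`llama3`, `nomic-embed-text`) are just examples; swap in whatever smaller models fit your VRAM:

```json
{
  "models": [
    {
      "title": "Llama 3 (local)",
      "provider": "ollama",
      "model": "llama3"
    }
  ],
  "embeddingsProvider": {
    "provider": "ollama",
    "model": "nomic-embed-text"
  }
}
```

The `embeddingsProvider` entry is what enables the codebase indexing, so Continue can pull in relevant files on its own instead of you pasting them into the prompt.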

[–] [email protected] 2 points 2 months ago (1 children)

Have you tried to play with one without thumbs?

[–] [email protected] 26 points 2 months ago

It's also way easier to just stop digging up coal instead of inefficiently trying to get the exhaust from burning it partially back underground.

[–] [email protected] 3 points 3 months ago

Voting out fascists tends to not work well.

[–] [email protected] 13 points 5 months ago

You're holding it wrong.

[–] [email protected] 9 points 5 months ago

Who needs all that science anyway.

[–] [email protected] 26 points 5 months ago (1 children)

Text to metal

[–] [email protected] 3 points 9 months ago

That would make heads recursive.