This post was submitted on 25 Feb 2025
13 points (100.0% liked)

Free Open-Source Artificial Intelligence

Hello, I'm currently using codename goose as an AI client to proofread and help me with coding. I have it set up with Google's Gemini, but I find myself quickly running out of tokens on large files. I was wondering whether there's an easy way to self-host an AI with similar capabilities that still has access to read and write my files. I've tried both Ollama and Jan, but neither has access to my files. Any recommendations?
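To make it concrete, this is roughly the loop I'd like to run locally. A minimal sketch against Ollama's HTTP generate endpoint (the model name is just an example, and nothing here is goose-specific):

```python
import requests

# Rough sketch: ask a locally hosted Ollama model to proofread a file and
# write its suggestions next to the original. Assumes `ollama serve` is
# running on the default port and that some model (qwen2.5-coder:7b is
# only a placeholder) has already been pulled.
OLLAMA_URL = "http://localhost:11434/api/generate"
MODEL = "qwen2.5-coder:7b"

def proofread(path: str) -> None:
    with open(path, encoding="utf-8") as f:
        text = f.read()

    resp = requests.post(
        OLLAMA_URL,
        json={
            "model": MODEL,
            "prompt": "Proofread this file and list corrections:\n\n" + text,
            "stream": False,  # return one JSON object instead of a token stream
        },
        timeout=300,
    )
    resp.raise_for_status()

    # Non-streaming responses carry the full completion in the "response" field.
    with open(path + ".review.txt", "w", encoding="utf-8") as out:
        out.write(resp.json()["response"])

if __name__ == "__main__":
    proofread("notes.md")
```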

top 7 comments
[–] nocteb@feddit.org 3 points 1 month ago* (last edited 1 month ago)

Look into setting up the "Continue" plugin in VS Code. It supports an Ollama backend and can even do embeddings if set up correctly. That means it will try to select the relevant files itself based on your question, which helps with prompt size. Here is a link to get started; you might need to choose smaller models for your card.

https://ollama.com/blog/continue-code-assistant
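If it helps to see the idea, here is a stripped-down sketch of what embedding-based file selection amounts to. This is not how Continue implements it internally, just an illustration; it assumes Ollama is on its default port and that an embedding model such as nomic-embed-text has been pulled:

```python
import math
import pathlib
import requests

EMBED_URL = "http://localhost:11434/api/embeddings"
EMBED_MODEL = "nomic-embed-text"  # example embedding model

def embed(text: str) -> list[float]:
    # Ollama's embeddings endpoint returns {"embedding": [...]}.
    resp = requests.post(EMBED_URL, json={"model": EMBED_MODEL, "prompt": text}, timeout=120)
    resp.raise_for_status()
    return resp.json()["embedding"]

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def top_files(question: str, root: str = ".", k: int = 3) -> list[str]:
    # Embed the question once, embed each file (truncated to keep within the
    # embedding model's context), and keep the k most similar files. Only those
    # files then need to be pasted into the prompt, which keeps it small.
    q_vec = embed(question)
    scored = [
        (cosine(q_vec, embed(p.read_text(encoding="utf-8")[:8000])), str(p))
        for p in pathlib.Path(root).rglob("*.py")
    ]
    return [path for _, path in sorted(scored, reverse=True)[:k]]

if __name__ == "__main__":
    print(top_files("Where do we parse the config file?"))
```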

[–] JASN_DE@feddit.org 1 points 1 month ago (1 children)

with similar capabilities

What's your budget?

[–] youreusingitwrong@programming.dev 2 points 1 month ago (1 children)

Zero; as I said, I'd prefer to self-host.

[–] JASN_DE@feddit.org 3 points 1 month ago (1 children)

What hardware do you have available then?

[–] youreusingitwrong@programming.dev 1 points 1 month ago (1 children)

Just a 1080, though it handles 7B models just fine; it could probably also work with a 14B.

[–] webghost0101@sopuli.xyz 2 points 1 month ago (1 children)

In all honesty, I doubt a 7B model will give you very coherent or useful results, and a 14B won't either.

I can run DeepSeek 30B on a 4070 Ti Super and I am really not impressed. I can run larger models, but they're too slow; 14B is the optimal speed/size balance for me.

I am used to Claude Opus (Pro), though, which is one of the best.

You are 100% allowed to prove me wrong. In fact, I hope you do and build something small and brilliant, but I personally recommend adjusting expectations and upgrading that card.

Do you think a 24GB card like the 7900 XTX could run Mistral Small? TBH, that card is nowhere to be found right now.
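Back-of-envelope sketch for that VRAM question. The ~0.55 bytes per parameter for a roughly 4-bit quant and the couple of GB for context/overhead are rule-of-thumb assumptions, and Mistral Small's parameter count (low twenties of billions) is approximate:

```python
def vram_estimate_gb(params_b: float, bytes_per_param: float = 0.55, overhead_gb: float = 2.0) -> float:
    # Weights at a ~4-bit quant plus a rough allowance for KV cache and runtime overhead.
    return params_b * bytes_per_param + overhead_gb

# Rule-of-thumb numbers only; real usage depends on the quant and context length.
for name, size_b in [("7B", 7), ("14B", 14), ("Mistral Small (~22B)", 22)]:
    print(f"{name}: ~{vram_estimate_gb(size_b):.1f} GB")
```

By that rough math, a 22B-class model at a 4-bit quant should sit comfortably inside 24 GB.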