this post was submitted on 15 Mar 2025

LocalLLaMA

I don't care much about mathematical tasks, and code intelligence is only a minor preference. What I most want is overall comprehension and intelligence (for RAG and large-context handling). Anyway, what I'm looking for is a benchmark that covers a wide variety of models and is kept up to date.

top 4 comments
[–] [email protected] 3 points 6 days ago* (last edited 6 days ago) (1 children)

The average across different benchmarks can be thought of as a kind of 'average intelligence', though in reality it's more of a gradient and a vibe than a single number.
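For what it's worth, that kind of "average" is usually just an unweighted mean over per-benchmark scores. A minimal sketch in Python (the model names and scores below are made up for illustration, not real results):

```python
# Hypothetical benchmark scores for two imaginary models.
scores = {
    "model-a": {"MMLU": 62.1, "GSM8K": 48.3, "HumanEval": 35.4},
    "model-b": {"MMLU": 70.5, "GSM8K": 55.0, "HumanEval": 41.2},
}

def average_score(benchmarks: dict[str, float]) -> float:
    """Unweighted mean across benchmark scores."""
    return sum(benchmarks.values()) / len(benchmarks)

averages = {name: round(average_score(b), 1) for name, b in scores.items()}
print(averages)  # → {'model-a': 48.6, 'model-b': 55.6}
```

Leaderboards typically weight or normalize the individual benchmarks, so their averages won't match a plain mean like this.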

Many models are "benchmaxxed": trained on the exact kinds of questions the tests ask, which often makes benchmark results a poor predictor of real-world use. Treat them as general indicators, but don't take them too seriously.

All model families differ in ways you only really understand by spending time with them. Don't forget to set the right chat template and sampler settings for each model. The Open LLM Leaderboard is a good place to start.
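To illustrate why the chat template matters: each model is trained on a specific turn format, and using the wrong one degrades output. A minimal sketch of formatting a prompt by hand in the ChatML style (used by Qwen-family models, among others); inference engines normally apply this for you from the model's metadata:

```python
def format_chatml(system: str, user: str) -> str:
    """Wrap a system + user turn in ChatML markers and cue the assistant turn."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = format_chatml("You are a helpful assistant.", "Summarize this article.")
print(prompt)
```

Other families use entirely different markers (Llama and Mistral each have their own formats), which is why mixing up templates between models gives strange results.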

[–] [email protected] 3 points 6 days ago (1 children)

I use Page Assist with Ollama

[–] [email protected] 2 points 5 days ago* (last edited 5 days ago) (1 children)

Cool, Page Assist looks neat, I'll have to check it out sometime. My LLM engine is kobold.cpp, and I just use Open WebUI in the browser to connect to it.
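For anyone curious how that hookup works: kobold.cpp serves an OpenAI-compatible API, which is what front-ends like Open WebUI talk to. A minimal sketch of the request (the port 5001 default and the placeholder model name are assumptions; check your own launch settings):

```python
import json
import urllib.request

# OpenAI-style chat completion payload; kobold.cpp serves whatever model it
# has loaded, so the "model" field is mostly a placeholder.
payload = {
    "model": "local",  # placeholder name, an assumption
    "messages": [{"role": "user", "content": "Summarize: ..."}],
    "max_tokens": 256,
    "temperature": 0.7,
}

req = urllib.request.Request(
    "http://localhost:5001/v1/chat/completions",  # assumed default port
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# Uncomment to actually send (requires a running kobold.cpp instance):
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

The same payload shape works against Ollama's OpenAI-compatible endpoint too, just on a different port.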

Sorry, I don't really have good suggestions beyond trying some of the more popular 1-4B models at a very high quant, if not full fp8, and seeing which works best for your use case.

Llama 4B, Mistral 4B, Phi-3-mini, TinyLlama 1.5B, Qwen2 1.5B, etc. I assume you want a model with a large context size and good comprehension to summarize YouTube transcripts and webpage articles? At least, that's what the add-on you mentioned suggested was its purpose.

So look for models with those strengths over ones that specialize in a narrow slice of domain knowledge.

[–] [email protected] 2 points 4 days ago

I checked out most of them from the list, but 1B models are generally unusable for RAG.