this post was submitted on 08 Dec 2023
13 points (100.0% liked)

LocalLLaMA


Welcome to LocalLLaMA! This is a community to discuss local large language models such as LLaMA, DeepSeek, Mistral, and Qwen.

Get support from the community! Ask questions, share prompts, discuss benchmarks, and get hyped about the latest and greatest model releases! Enjoy talking about our awesome hobby.

As ambassadors of the self-hosting machine learning community, we strive to support each other and share our enthusiasm in a positive, constructive way.


Early speculation is that it's an MoE (mixture of experts) of 8 7B models, so maybe not as earth-shattering as their last release, but highly intriguing. Will update with more info as it comes out.
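
For context, here is a minimal sketch of what an "MoE of 8 7B models" roughly means: a small router sends each token to only a couple of the 8 expert feed-forward blocks, so per-token compute stays far below running all experts. All names, sizes, and shapes below are illustrative, not taken from any released model.

```python
# Minimal sketch of top-2 routing over 8 experts (illustrative only).
import torch
import torch.nn as nn

class TinyMoELayer(nn.Module):
    def __init__(self, d_model=64, d_ff=256, n_experts=8, top_k=2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)  # gating network
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])
        self.top_k = top_k

    def forward(self, x):                                # x: (tokens, d_model)
        scores = self.router(x)                          # (tokens, n_experts)
        weights, idx = scores.topk(self.top_k, dim=-1)   # pick top-k experts per token
        weights = weights.softmax(dim=-1)                # normalize the selected scores
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e                    # tokens routed to expert e in slot k
                if mask.any():
                    out[mask] += weights[mask, k, None] * expert(x[mask])
        return out

# Only top_k experts run per token, so compute is far below using all 8
# experts, even though total parameter count is roughly 8x one expert.
x = torch.randn(10, 64)
print(TinyMoELayer()(x).shape)  # torch.Size([10, 64])
```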

top 2 comments
Mixel@feddit.de 8 points 1 year ago

Honestly, it's such a good idea to share models via P2P; it saves so much bandwidth. Of course there should still be a direct download link (DDL) for preservation, but still.

The only concern I had was, my god, that is a lot of faith to put in this random Twitter account; hope they never get hacked lol. But otherwise yes, it's a wonderful idea, and it would be a good feature for Hugging Face to add to speed up downloads/uploads.
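
On the preservation and trust points above, here is a minimal sketch (not any official tooling) of checking a P2P-downloaded weights file against a checksum published over a trusted channel, so the torrent copy can be trusted as much as a direct download. The file name and hash are hypothetical, for illustration only.

```python
# Verify a downloaded model file against a published SHA-256 checksum.
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "0123abcd..."  # hypothetical hash published by the model author
actual = sha256_of(Path("model-00001-of-00002.safetensors"))  # hypothetical file name
print("OK" if actual == expected else "MISMATCH - do not load this file")
```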