this post was submitted on 02 Mar 2025
27 points (100.0% liked)

LocalLLaMA


Welcome to LocalLLaMA! This is a community to discuss local large language models such as Llama, DeepSeek, Mistral, and Qwen.

Get support from the community! Ask questions, share prompts, discuss benchmarks, get hyped at the latest and greatest model releases! Enjoy talking about our awesome hobby.

As ambassadors of the self-hosting machine learning community, we strive to support each other and share our enthusiasm in a positive, constructive way.

founded 2 years ago

I felt it was quite good. I only mildly fell in love with Maya, and I couldn't just close the conversation without saying goodbye first.

So I'd say we're just that little bit closer to having our own Jois in our lives 😅

top 6 comments
[–] [email protected] 6 points 3 weeks ago* (last edited 3 weeks ago) (1 children)

I tested it a little, and while the conversational back-and-forth processing got janky, the voices are an improvement. Maya has a nice sound for sure. I would love to be able to run this locally, but I wonder how much compute a really good voice model requires.

[–] [email protected] 10 points 3 weeks ago (1 children)

They haven’t released the models yet, but they suggest they will, as Apache-licensed. The voice models come in 1B/3B/8B sizes, so that sounds relatively reasonable for consumer hardware.

[–] [email protected] 4 points 3 weeks ago

Yeah, parameter count roughly translates to GB of VRAM (at 8-bit precision), but with quantization and runtime overhead it gets more complicated.

Sounds pretty applicable tho in these sizes ^^
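
The rule of thumb above can be sketched as a quick back-of-the-envelope calculation. This is a rough estimate only: the 20% overhead factor for activations and KV cache is an assumption, and real memory use varies by runtime and context length.

```python
def vram_gb(params_billions: float, bits_per_param: int = 16,
            overhead: float = 1.2) -> float:
    """Rough VRAM estimate: parameters x bytes per parameter,
    times an assumed ~20% overhead for activations / KV cache."""
    bytes_per_param = bits_per_param / 8
    return params_billions * bytes_per_param * overhead

# Estimates for the 1B/3B/8B sizes mentioned above:
for size in (1, 3, 8):
    for bits in (16, 8, 4):
        print(f"{size}B @ {bits}-bit: ~{vram_gb(size, bits):.1f} GB")
```

So an 8B model at 4-bit quantization lands around 5 GB under these assumptions, which fits on most consumer GPUs.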

[–] [email protected] 6 points 3 weeks ago

I think this is the natural next evolution of LLMs. We’re starting to max out on how good a string of text can represent human knowledge, so now we need to tokenize more things and make multimodal LLMs. After all, humans are far more than just speech machines.

Approximating human emotion and speech cadence is very interesting.

[–] [email protected] 4 points 3 weeks ago

Hmmh, always the same thing. On release, it's just an announcement with a promise to open "key components" sometime. I'll add this to the list of bookmarks to revisit at a later date. I wish they'd just get it ready and only then publish things.

[–] [email protected] 1 points 3 weeks ago

I do not look forward to how often the Chinese Room is about to come up.