DavidGarcia

joined 2 years ago
[–] [email protected] 28 points 2 weeks ago

LaTeX exists to make your text look more professional, not to make you more productive, duh.

pig lipstick

[–] [email protected] 2 points 2 weeks ago

ah I see, that makes sense somewhere probably

[–] [email protected] 6 points 2 weeks ago

relationship with parents improves since I am finally passionate about something and applying myself

yeah right, lies, fabrications

your parents are supposed to castigate you for wasting your time on something you actually enjoy doing

that's what real and good parents do

[–] [email protected] 4 points 2 weeks ago (1 children)

ok rude. you're creepy, miss author of this article

[–] [email protected] 8 points 2 weeks ago (3 children)

okay so the opposite faces of a cubic die always add up to 7, right? Does that mean the other side of that 9 has a -2?

or is it not a cube and actually has >= 9 faces?

[–] [email protected] 22 points 2 weeks ago (6 children)

anyone who doesn't understand lemmy after using reddit must be a real pea brain

[–] [email protected] 1 points 3 weeks ago

that one makes me so uncomfortable, similar to Made in Abyss, worst horror show ever

[–] [email protected] 12 points 3 weeks ago (7 children)

didn't the show say elves only live a few thousand years? I was under the impression she's like 1000-2000 years old and will die in another 1000-2000 or something

[–] [email protected] 25 points 3 weeks ago (3 children)

Investors poured completely insane amounts of money into the endless money pit that is ClosedAI. Then they realized betting everything on one horse was really stupid, since ClosedAI has zero competitive advantage.

now they try to get as much loot off the sinking ship as possible lmao

they'll probably exit scam soon

[–] [email protected] 2 points 4 weeks ago

Q4 will give you like 98% of the quality of Q8 at like twice the speed, plus much longer context lengths.
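
For a sense of scale, here's a rough back-of-the-envelope VRAM estimate (the bits-per-weight numbers are approximate averages for the llama.cpp quant formats, not exact figures):

```python
# Rule of thumb: weight memory (GB) ≈ params (billions) * bits per weight / 8.
# Approximate averages: Q4_K_M ≈ 4.85 bpw, Q8_0 ≈ 8.5 bpw.
def weight_gb(params_b: float, bits_per_weight: float) -> float:
    return params_b * bits_per_weight / 8

print(f"32B @ Q4_K_M: ~{weight_gb(32, 4.85):.0f} GB")  # ~19 GB
print(f"32B @ Q8_0:   ~{weight_gb(32, 8.5):.0f} GB")   # ~34 GB
```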

If you don't need the full context length, you can try loading the model with a shorter context window; the KV cache takes less VRAM, meaning you can load more layers on the GPU, meaning it will be faster.
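
Here's a minimal sketch of that trade-off with llama-cpp-python; the model filename is a placeholder, and n_gpu_layers is whatever fits your card:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

llm = Llama(
    model_path="qwen2.5-32b-instruct-q4_k_m.gguf",  # placeholder filename
    n_ctx=8192,        # shorter context window -> smaller KV cache
    n_gpu_layers=40,   # spend the saved VRAM on more GPU layers
)
print(llm("Say hi.", max_tokens=16)["choices"][0]["text"])
```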

And you can usually configure your inference engine to keep the model loaded at all times, so you're not losing so much time when you first start the model up.

Ollama attempts to dynamically pick the right context length for your request, but in my experience that just results in really inconsistent and long times to first token.

The nice thing about vLLM is that your model is always loaded, so you don't have to worry about that. But then again, it needs much more VRAM.
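
For what it's worth, Ollama can be pinned down per request through its HTTP API: keep_alive: -1 keeps the model in memory indefinitely, and options.num_ctx fixes the context length instead of letting Ollama guess. A sketch (the model tag is just an example):

```python
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "qwen2.5:32b",        # example model tag
        "prompt": "Hello",
        "stream": False,
        "keep_alive": -1,              # keep the model loaded indefinitely
        "options": {"num_ctx": 8192},  # pin a fixed context length
    },
)
print(resp.json()["response"])
```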

[–] [email protected] 5 points 1 month ago (2 children)

In my experience anything similar to qwen-2.5:32B comes closest to gpt-4o. I think it should run on your setup. The 14B model is alright too, but definitely inferior. Mistral Small 3 also seems really good. Anything smaller is usually really dumb, and I doubt it would work for you.

You could probably run some larger 70B models at a snail's pace too.

Try the DeepSeek R1 Qwen-32B distill, something like deepseek-r1:32b-qwen-distill-q4_K_M (name on ollama), or some fine-tune of it. It'll be by far the smartest model you can run.

There are various fine-tunes that remove some of the censorship (ablated/abliterated) or are optimized for RP, which might do better for your use case. But I personally haven't used them, so I can't promise anything.
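
If you want to kick the tires, a minimal sketch with the ollama Python client (assuming that tag is still on the registry and the Ollama server is running):

```python
import ollama  # pip install ollama

reply = ollama.chat(
    model="deepseek-r1:32b-qwen-distill-q4_K_M",
    messages=[{"role": "user", "content": "Summarize quantization in one sentence."}],
)
print(reply["message"]["content"])
```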

 
 
25
submitted 2 years ago* (last edited 2 years ago) by [email protected] to c/[email protected]
 

Here is the same mushroom from my last post but 3 days later and photographed from the other side.

It feels like a rock with a rubbery coating.

It seems to be growing what looks like roots?

It doesn't seem like it's dying off, rather it seems to be growing stronger.

It's growing under a pear tree, maybe it's feeding off of dead roots?

I'm thinking maybe a Ganoderma?

Here is a closeup and one from the same angle as my last post:

Edit: Here it is 3 days ago:

 
 

Does anyone have any idea what this is?

I thought it was a moldy pear (it's under a pear and apple tree, next to some sage and valerian), but it's hardish and attached to the ground.

 

Infinity had a multireddit feature that let you create your own multireddits. I think it was client-side, because I could use it without an account? Will this feature ever come back?

I know there are no multicommunities on Lemmy yet, but it would be cool if we could do it client-side.
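
Until then, a client could fake a multi by merging feeds itself. A rough sketch against the standard Lemmy HTTP API (the instance and community names are made up):

```python
import requests

INSTANCE = "https://lemmy.example"                 # hypothetical instance
COMMUNITIES = ["asklemmy", "[email protected]"]  # hypothetical "multi"

def fetch_posts(community: str, limit: int = 20) -> list[dict]:
    r = requests.get(
        f"{INSTANCE}/api/v3/post/list",
        params={"community_name": community, "sort": "New", "limit": limit},
        timeout=10,
    )
    r.raise_for_status()
    return r.json()["posts"]

# Merge the feeds client-side and sort newest-first, multireddit-style.
merged = [p for c in COMMUNITIES for p in fetch_posts(c)]
merged.sort(key=lambda p: p["post"]["published"], reverse=True)
for p in merged[:20]:
    print(p["post"]["name"])
```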

 

Any time I try to apply filters from the home screen, it only filters Subscribed. Is there a way to filter on All or Local? I tried switching the tab order, but that didn't work.

Thanks for the port btw.
