Sims

joined 2 years ago
[–] Sims@lemmy.ml 2 points 3 hours ago

It can be a bit difficult with these 'what if we remove this fundamental force' questions, because the force is so fundamental that removing it undermines any further reasoning about the situation, but:

Assuming the bonds in a body just 'disappeared' by magic: instant decompression would happen at the molecular level.

There would be no puddle, or even visible dust. All the molecules in our body - mostly H and O - would instantly be broken down into individual atoms, in effect turning into a compressed gas, and I'd guess that the lighter elements would 'boil off' so fast that our whole body of compressed gas would explode rather violently.
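A rough back-of-envelope supports the 'rather violently' part. Assuming roughly 7×10²⁷ atoms in a 70 kg body and a body volume of about 0.07 m³ (both ballpark figures), and treating the freed atoms as an ideal gas at body temperature:

```latex
% Ideal-gas pressure of a body's atoms confined to the body's original volume
% Ballpark inputs: N ~ 7e27 atoms, V ~ 0.07 m^3, T ~ 310 K, k_B = 1.38e-23 J/K
P = \frac{N k_B T}{V}
  \approx \frac{7\times10^{27}\cdot 1.38\times10^{-23}\cdot 310}{0.07}
  \approx 4\times10^{8}\,\text{Pa} \approx 4000\,\text{atm}
```

Thousands of atmospheres of overpressure with nothing holding it in - so yes, a violent expansion rather than a puddle.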

[–] Sims@lemmy.ml 2 points 2 days ago* (last edited 2 days ago)

Been enjoying Linux for ~25 years, but have never been happy with how it handles low-memory situations. Swapping has always killed the system, though it has improved a little. It's been a while since I've messed with it - I've buckled up and am using more RAM now - but AFAIR you can play with:

(0. reduce the number of running programs, and optimize them for less memory, yada yada)

  1. use a better OOM (out-of-memory) manager that activates sooner and more gracefully (e.g. earlyoom or systemd-oomd). Search your distro's repository for one.
  2. use zram as a more intelligent buffer and to deduplicate identical (zero-filled) pages. It can lightly compress less-used memory pages and use a backing partition for storing incompressible pages. You spend a little CPU to minimize disk swap, and when swapping is needed, only what can't be compressed goes out to disk.
  3. play with the sysctl vm settings like swappiness and such, but be aware that there's SO much misinformation out there, so stick to the official kernel docs. For instance, you can adapt the system to swap more often but in much smaller chunks, so you avoid spending 5 minutes to hours regaining control - the system may get 'sluggish', but you keep control (see the sysctl sketch after this list).
  4. use cgroups to divide your resources, so firefox/chrome (or compilers/memory hogs) can only use X amount before their memory has to swap out (if they don't adapt to low-memory conditions automatically). That leaves you a system that can still react to your input (while ff/chrome would freeze). Not perfect, though (see the cgroup sketch after this list).
  5. when gaming, activate a low-overhead mode where unnecessary services etc. are disabled. I think there's a library/command that helps with that (and raises priority etc.), but I forgot its name.

EDIT: 6. when NOT gaming, add some of your VRAM as swap space. It's much faster than your SSD. Search GitHub or your repository for 'vram cache' or something like that. It works via OpenCL, so everyone with dedicated VRAM can use it as a super-fast cache. Perhaps others can remember the name/link?
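To make point 3 a bit more concrete: all the sysctl vm knobs live under /proc/sys/vm, so you can experiment without installing anything. A minimal Python sketch (needs root; the two values are just examples of the 'swap more often, in smaller chunks' idea, not recommendations):

```python
#!/usr/bin/env python3
"""Read and tweak a couple of /proc/sys/vm knobs (the sysctl vm.* settings). Run as root."""
from pathlib import Path

VM = Path("/proc/sys/vm")

# Example values only -- read the kernel docs (Documentation/admin-guide/sysctl/vm.rst)
# before settling on anything. Higher swappiness = swap anonymous pages more eagerly,
# page-cluster 0 = swap in single pages instead of larger clusters.
EXPERIMENTS = {
    "swappiness": "100",
    "page-cluster": "0",
}

def current(name: str) -> str:
    return (VM / name).read_text().strip()

def set_knob(name: str, value: str) -> None:
    print(f"vm.{name}: {current(name)} -> {value}")
    (VM / name).write_text(value)  # takes effect immediately, but is not persistent

if __name__ == "__main__":
    for knob, value in EXPERIMENTS.items():
        set_knob(knob, value)
```

Whatever you end up liking can be made permanent with a drop-in file under /etc/sysctl.d/.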
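And for point 4, with cgroup v2 you can cap a memory hog by hand just by writing to files under /sys/fs/cgroup. A rough sketch (run as root; the 'browser' group name and the 3G/4G limits are made-up examples, and it assumes the memory controller is enabled for child cgroups):

```python
#!/usr/bin/env python3
"""Move an existing process into a memory-limited cgroup (cgroup v2). Run as root."""
import sys
from pathlib import Path

CGROUP_ROOT = Path("/sys/fs/cgroup")

def limit_process(pid: int, name: str = "browser", high: str = "3G", hard: str = "4G") -> None:
    cg = CGROUP_ROOT / name
    cg.mkdir(exist_ok=True)
    # memory.high starts throttling/reclaiming the group above this point...
    (cg / "memory.high").write_text(high)
    # ...and memory.max is the hard cap, enforced with reclaim and the OOM killer inside the group.
    (cg / "memory.max").write_text(hard)
    # Writing the pid into cgroup.procs moves that process into the group
    # (anything it forks afterwards starts in the group too).
    (cg / "cgroup.procs").write_text(str(pid))

if __name__ == "__main__":
    limit_process(int(sys.argv[1]))  # usage: sudo ./memcap.py <pid-of-firefox>
```

On a systemd system it's usually tidier to let systemd do this, e.g. launching the browser via systemd-run with a MemoryMax= property, but the files above are what that boils down to underneath.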

Something like that anyway, others will know more about each point.

Also, perhaps ask an AI to create a small interface for you to fiddle with the vm settings and cgroups in an automated/permanent way? Just a quick thought. Good luck.

[–] Sims@lemmy.ml 3 points 1 week ago (1 children)

Agree. I also shift between them. As a bare minimum, I use a thinking model to 'open up' the conversation and then often continue with a normal model, but it certainly depends on the topic.

Long ago we got RouteLLM, I think, which routed a request depending on its content, but the concept never got traction for some reason. Now it seems that closedai and other big names are putting some attention on it. Great to see DeepHermes and other open players out in front of the pack.

I don't think it will take long before agentic frameworks handle the activation of different 'modes' of thinking depending on content/context, goals etc. It would be great if a model could be triggered into several modes in a standard way.
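The routing part itself is almost trivial to sketch; the hard part is the classifier. A toy Python sketch where the model names and the keyword heuristic are pure placeholders (a real router like RouteLLM uses a trained router model rather than keywords):

```python
# Toy sketch of content-based routing between a 'thinking' and a 'normal' model.
# The model names and the keyword heuristic below are placeholders -- in practice
# you'd use a small classifier model (or the router's own scoring) to pick the mode.

REASONING_HINTS = ("prove", "derive", "step by step", "debug", "why does")

def pick_model(prompt: str) -> str:
    needs_reasoning = any(hint in prompt.lower() for hint in REASONING_HINTS)
    return "local/thinking-model" if needs_reasoning else "local/fast-chat-model"

def route(prompt: str) -> str:
    model = pick_model(prompt)
    # A real router would call your inference client here (llama.cpp server,
    # ollama, an OpenAI-compatible endpoint, ...) with the chosen model.
    return f"[{model}] <- {prompt}"

if __name__ == "__main__":
    print(route("Why does my build OOM? Walk through it step by step."))
    print(route("Summarize this article in two sentences."))
```

The same hook is where an agent framework could flip a single model into a different 'mode' (system prompt, sampling settings, reasoning on/off) instead of swapping models entirely.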

[–] Sims@lemmy.ml 2 points 8 months ago

There's a cheap Zbtlink OpenWrt WiFi 6 3000 Mbps router, the 'Z8101AX-D', on AliExpress for around $50. (https://www.aliexpress.com/w/wholesale-zbtlink-openwrt.html?spm=a2g0o.productlist.search.0)

I don't know for how long, and I haven't tried the product, but maybe some here have tried it?

[–] Sims@lemmy.ml 22 points 9 months ago (1 children)

Seems the only 'dark side' was that he was caught..

[–] Sims@lemmy.ml 4 points 9 months ago (1 children)

A layman's opinion on the challenge: waves lose energy, and the exact placement of the antennas will matter. I don't know what the mechanism is called, but we don't place wind turbines right next to each other. That is, AFAIK, because each turbine takes some of the energy out of a larger chunk of the wind in a 'bubble' around it, so we space them at the optimal distance for the efficiency of that mechanism. If I'm right, the effect here will probably be minimal. Anyway, just a stab at an interesting thought..

[–] Sims@lemmy.ml 17 points 9 months ago

Jeebus, man. This is probably the ugliest car since the Homer Simpson design. The concept is even dumber, and it's hard for me not to look down on buyers/owners of such ugly fanboi trash-ware..

[–] Sims@lemmy.ml 12 points 9 months ago

There are, IMHO, no stupid questions regarding personal cyber-security. There are only things we don't know yet.

[–] Sims@lemmy.ml 3 points 9 months ago (1 children)

I'm old. Once upon a time, 'screen savers' were used for ..saving screens, I swear it's the truth :-) I would've bet money on screensavers disappearing when CRT monitors did, but that certainly did not happen. It exploded. I kind of expect that someone by now has created a screensaver ..plugin for another screensaver..

Not picking on you, just suddenly feeling old. I tried searching for 2024's all-time most insane screensavers but only found this 13-year-old one from Vsauce: iv.melmac.space/watch?v=zwX95UaKCRg

..but I'm curious what has happened in those 13 years, so if any lurkers know better search-fu, please add..

Sharing is caring <3

[–] Sims@lemmy.ml 12 points 10 months ago (7 children)

I read a comment somewhere that Stremio uploads like a normal client. Just a comment, of course, but it should be easy to check for a network-savvy reader. It may be that the plugin does it, dunno.

 

I am planning my first AI-lab setup and was wondering how many tokens different AI workflows/agent networks eat up on an average day. For instance, talking to an AI all day, having devlin running 24/7, or whatever local agent workflow is running.

Of course, model inference speed and the type of workflow influence most of these networks, so perhaps it's easier to define the number of tokens per project/result?

So I was curious what typical AI workflows lemmies here run, and how many tokens that roughly implies on average, or at a project-level scale? At the moment I don't even dare to guess.
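For what it's worth, a crude upper bound for a single agent (assuming it generates continuously at, say, 20 tokens/s - a made-up but plausible consumer-GPU figure):

```latex
% Tokens generated by a single agent running flat-out for 24 hours at 20 tok/s
20\ \tfrac{\text{tok}}{\text{s}} \times 86\,400\ \tfrac{\text{s}}{\text{day}}
  \approx 1.7\times10^{6}\ \text{tokens/day}
```

Real workflows will differ a lot from that once prompt/context tokens, tool calls and idle time are factored in, which is exactly why numbers from actual setups would be interesting.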

Thanks..
