borokov

joined 5 months ago
[–] [email protected] 12 points 2 days ago

Fool me once, shame on you. Fool me twice, shame on me.

[–] [email protected] 1 points 2 days ago

Never forget that deep inside LLM training data, there is the entire tweet history of pussyslayer69.

[–] [email protected] 29 points 3 days ago (2 children)

GPT started answering correctly around GPT-3.5. But smaller LLMs generally respond that rabbit eggs are around 5 to 8 cm and smaller than cow eggs 🤣

[–] [email protected] 70 points 3 days ago (13 children)

To test the stupidity of AI chatbots, I ask them the size of a rabbit egg, and whether it is bigger or smaller than a cow egg.

[–] [email protected] 28 points 1 week ago (1 children)

In the early 2000s, computers were ugly grey boxes with noisy fans and a hard drive that gave the impression a cockroach colony was trying to escape your case. I wanted to build a silent computer to watch DivX movies from my bed, but as a broke teen, I only had access to discarded hardware I could find here and there.

I dismantled a power supply, stuck the MOSFETs onto a big motherfucking heatsink, and I had a silent power supply. I put another huge industrial heatsink on the CPU (I think it was an AMD K6 at 500 MHz) and had fanless cooling. That left the hard drive.

Live CDs/USBs weren't common at that time. I discovered a live CD distro (I think it was Knoppix) that could run entirely from RAM.

I removed the hard drive, booted the live distro, then replaced the CD with my DivX, and voilà.

Having a fanless, hard-drive-less computer was pure science fiction for me and my friends at the time.

[–] [email protected] 2 points 2 weeks ago

I gave ollama.nvim a try, but I'm not convinced (not by the plugin, but by using an LLM directly in the IDE). For security reasons, I cannot send code to public LLMs, so I either have to use my company's internal LLM (GPT-4o), which only has a front end, no API, or I have to use a local LLM through Ollama, for example.

I tried several models, but they are too slow or too dumb. In the end, when I need help, I copy/paste code into the LLM portal front end.
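For anyone curious about the Ollama route: the local server exposes an HTTP API, so you can script queries instead of copy/pasting. A minimal sketch, assuming a running Ollama instance on its default port and an example model name ("codellama" is just a placeholder, use whatever you've pulled):

```python
import json

def build_request(model: str, prompt: str) -> bytes:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    payload = {
        "model": model,
        "prompt": prompt,
        "stream": False,  # single JSON response instead of a token stream
    }
    return json.dumps(payload).encode("utf-8")

body = build_request("codellama", "Explain what this function does: ...")

# To actually send it (requires a running Ollama instance):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:11434/api/generate",
#     data=body,
#     headers={"Content-Type": "application/json"},
# )
# print(json.loads(urllib.request.urlopen(req).read())["response"])
print(json.loads(body)["model"])
```

Whether it's worth it still depends on the model being fast and smart enough, of course.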

[–] [email protected] 2 points 2 weeks ago

Make a computer for kids by installing PrimTux or another kid-oriented distro.

[–] [email protected] 3 points 3 weeks ago

Dunning-Kruger effect.

Lots of people now think they can be developers because they made a shitty, half-working game using vibe coding.

Would you trust a surgeon who relies on ChatGPT? So why would you trust an LLM to develop programs? You know that airplanes, nuclear power plants, and a LOT of critical infrastructure rely on software, right?

[–] [email protected] 1 points 4 weeks ago

Neovim, because it's much nicer and more user-friendly than Vim.

[–] [email protected] 23 points 4 weeks ago

Why the fuck would I drive a watch ?

[–] [email protected] 17 points 1 month ago

The point of self-hosting is not to get people to visit your server. The point of self-hosting is to have control over your infrastructure. It's like renting versus buying a home.

When you buy a home, you don't complain that no one wants to sleep in your home 😆
