IsoKiero

joined 2 years ago
[–] [email protected] 2 points 2 weeks ago (3 children)

How much RAM does your system have? ZFS is pretty hungry for memory, and if you don't have enough it'll have a pretty significant impact on performance. My Proxmox had 7x4TB drives in a ZFS pool, and with 32 gigs of RAM there was practically nothing left for the VMs under heavy I/O load.
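
If RAM is the limit, one thing worth trying (just a sketch; the 8 GiB value below is an example, not a recommendation) is capping the ZFS ARC so it leaves room for the VMs:

# /etc/modprobe.d/zfs.conf — limit the ARC to 8 GiB (value in bytes), then rebuild the initramfs and reboot
options zfs zfs_arc_max=8589934592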

I switched the whole setup to software RAID, but it's not officially supported by Proxmox and thus managing it is not quite trivial.

[–] [email protected] 5 points 2 weeks ago

Robber's roast (rosvopaisti) in Finland. I suppose other countries have something similar, but it's a piece of meat cooked in a ground oven. First dig a small hole, line it with rocks and keep a bonfire going in the hole for a couple of hours. Then scrape the coals out, put the meat, wrapped in parchment paper, wet newspapers and foil, into the hole, fill it with sand and set up a new bonfire on top of the sand. Throw in onions, garlic, carrots and whatever you like to accompany/season the meat while you're at it. Things like potatoes or sweet potatoes don't really work as they just turn into mush, at least unless you wrap them individually, but the process isn't consistent enough; just cook whatever sides you want separately.

With the meat, include pieces of fat on top of it and season however you like. It's traditionally made with lamb, but I prefer beef (or moose if it's available). Pork works just fine too. The whole process takes 10-12 hours, so it's not for your Wednesday dinner, but it's very much worth the effort.

When the weather is good and you do it right, the meat just breaks down and you'll almost need a spoon to eat it. Absolutely delicious. And as you have a bonfire going all day you can cook sausages on a stick and have a 'few' beers while feeding the fire. It's an experience, with absolutely delicious food at the end.

Just be careful that you don't pass out from all the beer while cooking and miss the fun part.

[–] [email protected] 13 points 2 weeks ago

Advertisers, stock prices, investors and other stuff like that. As in "losing money".

[–] [email protected] 5 points 3 weeks ago

I mostly use Battlestar Galactica ship names for my own hardware, but it's been mixed with boring '.mydomain.foo' names as well. I should rename a bunch of stuff and include it all in my DNS.

[–] [email protected] 23 points 3 weeks ago

But then you'd need access (and most likely licensing) to the original footage AND you'd need to find someone who even knows what 'film' is in the first place, with the equipment and skills to use it. Not too difficult if you're in the business, I'd assume, but instead you can just have an intern throw it at AI in an afternoon and call it a day.

And besides, now they have the option to market it as an 'AI remaster', which I suppose sounds fancy to someone wearing a suit. Who cares about the consumer experience anyways.

[–] [email protected] 1 points 1 month ago

The exchanged mails between the IMAP host and the MTA need a unique identifier to organize contents of the DB, and this would not be possible or automatic if you switched the upstream MTA.

It sure is possible. I've copied maildirs over different software, different servers, local copies back to the server and so on. Also, if you just rely on your own IMAP server the upstream doesn't matter, as fetchmail (or whatever you choose to use) communicates between hosts on their preferred protocols anyway.

Obviously there's a tradeoff since now you're responsible for your backups and maintaining your server, but it can sit nicely on your private LAN with access only locally or via VPN, without direct access to the internet. And you don't need an MTA to run an IMAP server in the first place.
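
As a rough sketch of what that looks like in practice (hostnames, usernames and the delivery path below are placeholders, adjust to your setup), a minimal ~/.fetchmailrc that hands mail to Dovecot's local delivery agent is only a few lines:

# ~/.fetchmailrc — pull mail from the upstream mailbox into the local Dovecot
poll pop.example.com protocol pop3
  user "upstream-user" password "secret" ssl
  # hand each message to Dovecot's LDA for delivery into the local maildir
  mda "/usr/lib/dovecot/dovecot-lda -d localuser"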

[–] [email protected] 6 points 1 month ago (2 children)

You can run Dovecot (or any IMAP server) wherever you want and use fetchmail to pull data from the POP server into it. There's plenty of discussion and instructions around the web so I won't copy'n'paste them here; search for 'fetchmail dovecot' or something similar.

[–] [email protected] 7 points 1 month ago

I wonder how much money that accounts for

Let's throw some numbers around. A quick Google search says that federal workers make $35/hour on average and that there are about 3 million of them. Let's be generous to Elmo and say that only half of the workers received and/or reacted to the email, and that they only spent an hour responding, attending meetings with colleagues and so on about the case.

So, 1.5 million hours * 35 dollars per hour equals a cool 52.5 million. Adjust the numbers however you like; 50-200 million is a big pile of cash, but on the US government scale that's not much more than a rounding error. Pretty hefty bill anyways from a single email sent by a guy whose authorization on anything is pretty much just 'trust me bro'.

[–] [email protected] 3 points 1 month ago

I meant that the technology itself is reliable. And you can do self-hosting just fine too, I've been doing it since 2010 or so, but running a local smarthost which sends messages via a reputable SMTP provider works just fine as well. Or even interacting directly with the SMTP provider from all the applications you're running.

[–] [email protected] 20 points 1 month ago (1 children)

"Giving" as in "US military paid them and Elon took credit for it".

[–] [email protected] 18 points 1 month ago (2 children)

Elections are suspended because no one has time to “run an election”.

And also (if I've understood correctly) it's straight up impossible under Ukrainian law to run elections while martial law is active. They'll hold democratic elections eventually and choose the next president, but until then Zelenskyi is the sitting president. Things run just like their laws mandate.

In some other country there's a memecoin agency with considerable power operating without any legal oversight; maybe look into that first...

[–] [email protected] 1 points 1 month ago

Advertising spam.

 

So, as the topic says, I'm going to set up a self-hosted email service for myself, family and friends. I know that this is a controversial topic around here, but trust me when I say I know what I'm getting into. I've had a small hosting business for years and I've had my share of issues with Microsoft and others; I know how to set things up and keep them running and so on.

However, on the business side we used both a commercial solution and a dirt-cheap service with just IMAPS/SMTPS and Roundcube webmail. The commercial one (Kerio Connect, a neat piece of software, check it out if you need one) is something I don't want to pay for anymore (even if their pricing is pretty decent, it's still money out of my pocket).

I know for sure I can rely on the bog-standard Postfix+Dovecot+SpamAssassin combo, and it will work just fine for plain email. However, I'd really like to have calendar and contacts in the mix as well, and as I've only worked with a commercial solution for the last few years I'm not up to speed on what the newest toys can offer.

I'm not that strict on anything, but the thing needs to run on Linux and it must support the most basic standards: messages stored in Maildir format (simplifies migration to another platform if things change), support for Sieve (or another commonly supported protocol), and contacts/calendar need to work with pretty much anything (Android, iOS, Linux, Windows, Mac...) without extra software on the client end (*DAV excluded, those are fine in my books). And obviously the thing needs to work with IMAPS, SMTPS, DKIM and other necessities, but that should be implied anyways.

I know that things like Zimbra, SOGo and iRedMail exist, but as mentioned, it's been a while since I've played with things like that, so what are your recommendations for a setup like this today?

14
submitted 4 months ago* (last edited 4 months ago) by [email protected] to c/[email protected]
 

I'm not the first by any means, and my factory is a spaghetti mess with a crapload of problems all around the place. Especially in the last phase I just threw something together, manually fed manufacturers and hand-held the factory so that it could finish the last few parts.

I didn't (yet) get all of the achievements (and I don't know if I'll ever get that last one), but it's completed. Also I've got a few things in the MAM unopened, didn't even bother to check portals, and there's some other content left to explore. With family and all the other things going on in life I mostly had a few hours here and there over weekends to play, so it took some time, but it was a fun experience.

I think if I get back to that save file I'll need a fresh start somewhere else on the map and build things with better planning, eventually replacing the current spaghetti. I started from the plains in the south and plenty of the map is still pretty much untouched, apart from random power lines and radar towers dotted around.

I did play in early access too, but stopped that save once the devs announced a feature freeze before the 1.0 release a while ago. So I had some idea of what I was going for, and it really did help with planning things for the future at least a bit, but eventually pretty much everything I built was too small and cramped.

Either way, as mentioned, it was fun. If any of the devs/team are reading, thank you for your work. I don't have all that much spare time to play games, and this was always fun to start up and either just wander around gathering stuff, improve existing things or build new ones.

Opinions on the end game: In my opinion the story part of the game, especially at the end, was a bit disappointing and left a lot of questions unanswered, but it was still fun and even rewarding to activate the space elevator that last time.

 

Hello,

I haven't found much on the internet about an issue with loading a game on Update 8. I've run into this so frequently that it's pretty much a feature of my installation: after loading I'm at the hub with just a xeno zapper in my inventory.

Loading an autosave works as expected, but obviously takes me back a few minutes. Has anyone experienced the same?

 

I'm not quite sure if electronics fit in with the community, but maybe some of you could point me in the right direction with ESPHome and an IR transmitter to control the mini-split heat pump in my garage.

The thing is the cheapest one I could find (I should've paid more, but that's another story). It's rebranded cheap Chinese crap, and while the vendor advertised that you could control it over WiFi, I didn't find any information beyond 'use SmartApp to remote control' (or whatever that software was called), which is nowhere to be found, and I don't want to let that thing onto the internet anyways.

So, IR to the rescue. I had an 'infrared remote control module' (like this) around, and with an Arduino Uno I could capture IR codes from the remote without issues.

But transmitting those back out seems to be a bit more challenging. I believe I got the configuration in place, and I even attempted to control our other heat pump with the IR Remote Climate component, which should be supported out of the box.
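
Roughly, the kind of config I mean is something like this (the pin and the climate platform below are just examples, not my exact setup):

# remote_transmitter drives the IR LED; 50% carrier duty is what ESPHome suggests for IR LEDs
remote_transmitter:
  pin: GPIO4              # example pin, use whatever actually drives the LED / MOSFET gate
  carrier_duty_percent: 50%

climate:
  - platform: coolix      # example platform, pick the one matching the heat pump's IR protocol
    name: "Garage heat pump"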

I tried to power the IR LED straight from a NodeMCU pin (most likely a bad idea) and via an IRFZ44N MOSFET (massive overkill, but it's what I had around) from the 3.3V rail. The circuit itself seems to work, and if I replace the IR LED with a regular one it's very clear that the LED lights up when it should.

However, judging by the amount of IR light I can see through a cellphone camera, it feels like either the IR LED is faulty (very much a possibility, what can you expect from a 1€ kit) or I'm driving it wrong somehow.

Any ideas on what's wrong?

 

I think that installation was originally 18.04 and I installed it when it was released, so a while ago anyways. I've been upgrading it as new versions roll out, and with the latest upgrade and snapd software it has become more and more annoying to keep the operating system happy and out of my way so I can do whatever I need to do on the computer.

Snap updates have been annoying, and they randomly (and temporarily) broke stuff while some update process was running in the background, but as a whole reinstallation is a pain in the rear, I've just swallowed the annoyance and kept the thing running.

But today, when I had planned to spend the day on paperwork and other "administrative" things I've been pushing off due to life being busy, I booted the computer and the primary monitor was dead, the secondary had a resolution of something like 1024x768, the NVIDIA drivers were absent, and usability in general just wasn't there.

After a couple of swear words I thought OK, I'll fix this, I'll install all the updates and make the system happy again. But no. That's not going to happen, at least not very easily.

I'm running LUKS encryption and thus I have a separate /boot partition, 700MB of it. I don't remember if the installer recommended that or if I just threw some reasonable-sounding amount at the installer. No matter where it originally came from, it should be enough (the other Ubuntu I'm writing this on has 157MB stored on /boot). I removed older kernels, but the installer still claims that I need at least 480MB (or something like that) of free space on /boot, while the single kernel image, initrd and whatever crap it includes consumes 280MB (or so). So apt just fails on upgrade as it can't generate a new initrd or whatever it tries to do.

So I grabbed my Ventoy drive, downloaded the latest Mint ISO onto it, and instead of doing the productive things I had planned I'll spend a couple of hours reinstalling the whole system. It'll be quite a while before I install Ubuntu on anything.

And it's not just this one broken update; like I mentioned, I've had a lot of issues with this setup and at least the majority of them were caused by Ubuntu and its package management. This was just the tipping point to finally leave that abusive relationship with my tool and set it up so that I can actually use it instead of figuring out what's broken now and next.

5
submitted 1 year ago* (last edited 1 year ago) by [email protected] to c/[email protected]
 

Maybe this hivemind can help out with debugging a Z-Wave network. I recently installed two new devices on the network (currently up to 15 devices), which has two repeaters, light switches, wall plugs, a thermostat and a couple of battery-operated motion sensors.

Before the latest addition everything worked almost smoothly; every now and then a motion sensor message didn't go through, but it was rare enough that I didn't pay too much attention to it, as I have plenty of other things to do besides tinkering with the occasional hiccup in home automation.

However, for the last 48 hours (or so) the system has become unreliable enough that I need to do something about it. I tried to debug the messages a bit, but I'm not too familiar with what to look for; these messages are frequent and they seem to be a symptom of an issue:

Dropping message with invalid payload

[Node 020] received S2 nonce without an active transaction, not sure what to do with it

Failed to execute controller command after 1/3 attempts. Scheduling next try in 100 ms.

Especially the 'invalid payload' message appears constantly in the logs. I'd guess that one of the devices is malfunctioning, but another option is that there's somehow a loop in the network (I did attempt to reconfigure the whole thing, which didn't change much) or that my RaZberry 7 Pro is faulty.

Could someone give me a hint on how to proceed and verify which might be the case?

Edit: I'm running Home Assistant OS on a Raspberry Pi 3.

 

I've been trying to get a bar graph of Nord Pool electricity prices, but for some reason the graph style won't change no matter how I try to configure it.

I'm running Home Assistant OS (or whatever that was called) on a Raspberry Pi 3:

  • Home Assistant 2023.10.1
  • Supervisor 2023.10.0
  • Operating System 10.5
  • Frontend 20231005.0 - latest

Currently my configuration for the card is like this:

type: custom:mini-graph-card
name: Pörssisähkö
entities:
  - entity: sensor.nordpool
    name: Pörssisähkö
    group-by: hour
    color: '#00ff00'
    show:
      graph: bar

But no matter how I change that, the graph doesn't change, and other options, like line graph with/without fill, don't work as expected either. Granted, I'm not that familiar with YAML nor Home Assistant itself, but this is something I'd expect to "just work" as the configuration for mini-graph-card is quite simple. It displays the correct data from the sensor, but only as a line graph.

Is this something a recent update broke, or am I doing something wrong? I can't see anything immediately wrong in any of the logs or in the JavaScript console.
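
For reference, a variant with group_by and show at the card level (which is where the mini-graph-card README appears to place them) would look like this; I haven't verified whether that placement is the relevant difference:

type: custom:mini-graph-card
name: Pörssisähkö
group_by: hour
show:
  graph: bar
entities:
  - entity: sensor.nordpool
    name: Pörssisähkö
    color: '#00ff00'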

 

cross-posted from: https://derp.foo/post/250090

There is a discussion on Hacker News, but feel free to comment here as well.

 

I've noticed for a week or so now that Sopuli has somewhat frequent (almost daily) issues with gateway timeouts. I guess the "web facing" process crashes for some reason, but this got me wondering whether the issue is a known problem with Lemmy or something else, and more importantly, is there something we could help with?

 

For a couple of hours now, browsing New/All, the feed has been filled with spambots advertising porn on very shady domains. Lemmy doesn't seem to have an option to report users, so I've just been blocking them (there aren't THAT many to block after all), but that got me thinking about what the best way to report spam accounts would be and how reporting posts actually works.

If I report a post on Sopuli, does the notification go to the admins of the spammer's home instance, since I don't think Sopuli admins can do much about individual users on another host? And are there any tools (existing or planned) which would help with situations like this?

 

This question has already been around a couple of times, but I haven't found an option which would allow multiple users and multiple OSes (Linux and Windows mostly; mobile support, both Android and iOS, would be nice at least for viewing) to conveniently share the same storage.

This has been an issue on my network for quite some time, and now that I've rebuilt my home server, installed TrueNAS in a VM and am currently organizing my collections over there with Shotwell, the question has become acute again.

digiKam seems promising for everything other than organizing the actual files (which I can live with; either Shotwell or a shell script can sort them by EXIF dates), but I haven't tried it with Windows yet, and my Kubuntu desktop seems to only have a snap package of it, without support for an external SQL database.
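
As an example of the shell-script route (exiftool based; the path is a placeholder), sorting a dump of photos into year/month folders by their EXIF date can be done with a one-liner like:

# move photos into YEAR/YEAR-MONTH folders based on their EXIF DateTimeOriginal
exiftool '-Directory<DateTimeOriginal' -d '%Y/%Y-%m' -r /path/to/photos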

On "editing" part it would be pretty much sufficient to tag photos/folders to contain different events, locations and stuff like that, but it would be nice to have access to actual file in case some actual editing needs to be done, but I suppose SMB-share on truenas will accomplish that close enough.

Another need-to-have feature is managing RAW and JPG versions of the same image, at least somehow. Even removing the JPGs and leaving only the RAW images would be sufficient.

And finally, I'd really like to have the actual files lying around on a network share (or somewhere) so that they're easy to back up and copy to an external Nextcloud for sharing, and so that I have more flexibility in the future in case something better comes up or my environment changes.
