gomp

joined 2 years ago
[–] [email protected] 4 points 2 weeks ago

Is git bisect what you are looking for?
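
The workflow is roughly this (from memory - see man git-bisect for the details):

git bisect start
git bisect bad                 # the commit you're on is broken
git bisect good v1.2.3         # a commit/tag you know was fine
# git now checks out commits in between; test each one and mark it:
git bisect good    # or: git bisect bad
# ...repeat until git names the culprit, then:
git bisect reset
# (with a test script you can automate the whole thing: git bisect run ./test.sh)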

[–] [email protected] 1 points 4 weeks ago

I too experimented with k3s, but then abandoned the idea of using it after I realized the proper way to run postgres on it was (IIUC) to use bitnami's helm chart. I like to have some level of understanding of how my homelab and its config work, and that humongous amount of unreadable templates was not appealing in the least.

As for containers, I am not really looking for service isolation (IIUC until ##368565 lands, all virtualisation.oci-containers containers basically run as root and I'm fine with that*)... I just want to be able to run different (usually more recent, but in nixos one also can't easily "pin" an older version of a package if the need arises **) versions of services than those packaged in nixos. Also, not all services I want to run are available as nixos packages, and even fewer have modules.

* I know what risk I'm running (more or less): nothing in my homelab is accessible from outside my lan and, even if the container host was somehow pwned, that machine can't really do much harm (the important stuff is on a separate one).

** I guess I could import an older version of nixpkgs in my flake, but that requires way too much editing just to pin a package (time I'd rather spend solving the actual issue).
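
(For reference, the kind of editing I mean - an extra flake input pinned to an older nixpkgs, plus wiring it through to every place the pinned package is used; the rev and the service/package names below are just placeholders, and this assumes inputs is passed to modules via specialArgs:)

inputs.nixpkgs-old.url = "github:NixOS/nixpkgs/nixos-24.05"; # or a specific rev

# ...and then, in whatever module uses the package:
{ inputs, pkgs, ... }: {
  services.someservice.package =
    inputs.nixpkgs-old.legacyPackages.${pkgs.system}.someservice;
}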

[–] [email protected] 2 points 1 month ago (1 children)

Thanks, that was really helpful!

In my case, the system did not have a default route - I've updated the post with details.

 

I'm rebuilding my home server in nixos.

Rather than configuring the various services natively in nixos, I decided to run containers via virtualisation.oci-containers whenever possible, mostly to be able to independently update the system and the various services.

Everything is going smoothly, but whenever I (for whatever reason) do nixos-rebuild boot and reboot after adding a container instead of nixos-rebuild switch, I run into this issue where podman isn't able to resolve the host (below you see the docker hub host, but it also happened with ghcr.io):

podman-apprise-start[1352]: Trying to pull docker.io/caronc/apprise:1.1.8...
podman-apprise-start[1352]: Pulling image //caronc/apprise:1.1.8 inside systemd: setting pull timeout to 5m0s
podman-apprise-start[1352]: Error: initializing source docker://caronc/apprise:1.1.8: pinging container registry registry-1.docker.io: Get "https://registry-1.docker.io/v2/": dial tcp: lookup registry-1.docker.io: no such host

I thought that my podman-* services were missing a dependency on network-online.target and that they were started before the network was available, but that isn't the case:

# systemctl list-dependencies podman-apprise.service 
podman-apprise.service
● ├─system.slice
● ├─network-online.target
● │ └─systemd-networkd-wait-online.service
● └─sysinit.target
●   ├─dev-hugepages.mount
[...snip...]

Do you happen to know what the issue is?

PS: Manually running systemctl start podman-whatever once fixes the issue, of course, but I wonder if there's a more robust solution?


update:

After investigating based on balsoft's input below, the issue seems to be that systemd-networkd-wait-online doesn't behave as I expected.

Basically, systemd-networkd-wait-online waits for network interfaces to have a carrier (a working ethernet cable) and an IP address. This is what the systemd-networkd docs call the "degraded" state (no, it doesn't mean that something got worse than before... don't read too much into what "degraded" implies in English).

In my case, I have an interface that is setup via DHCP and that also has static IPs assigned:

$ cat /etc/systemd/network/00-lan1.network 
[Match]
Name=lan1

[Network]
DHCP=ipv4
IPv6AcceptRA=no
LinkLocalAddressing=no

[Address]
Address=192.168.10.10/24

[Address]
Address=192.168.10.99/24

If you are wondering, the reason I do this is that I want static IPs for my dns server and reverse proxy, but I also want my home server to use DHCP to fetch some network-wide configuration which, critically, includes the default route.

Back to the issue: IIUC, since the interface has a non-link-local address (which systemd-networkd confusingly calls a "routable" address), it is immediately considered "routable" (a state that is moar better than "degraded"), and so not only is it basically ignored by the default systemd-networkd-wait-online configuration (the check is satisfied straight away, before DHCP has handed out the default route), but even adding

[Link]
RequiredForOnline=routable

to /etc/systemd/network/00-lan1.network doesn't make a difference whatsoever.

For now, my stopgap solution is to explicitly set the default route for the "lan1" network:

[Network]
Gateway=192.168.10.1

this seems to solve the issue with podman and, while the system still considers itself "online" before being fully configured, it will suffice until I find a more elegant/robust solution (ping me in a while if you are interested).

refs:
systemd-networkd-wait-online man page
systemd-networkd docs on "RequiredForOnline"
networkctl man page

1
submitted 3 months ago* (last edited 3 months ago) by [email protected] to c/[email protected]
 

I remember a story where people asked about blobs included in Ventoy and there were no comments from the devs, leading to suspicion.

At the time it wasn't clear to me if there was any substance to the story or if it was the usual Internet exaggeration, so I resolved to ignore it for the time being and saved a reminder to look into it after a while.

Now my reminder fired off and I looked around, but couldn't find how the story ended... do you know?

 

I have two functions that are similar but can fail with different errors:

#[derive(Debug, thiserror::Error)]
enum MyError {
  #[error("error a")]
  MyErrorA,
  #[error("error b")]
  MyErrorB,
  #[error("bad value ({0})")]
  MyErrorCommon(String),
}

fn function_a() -> Result<String, MyError> {
  // can fail with MyErrorA or MyErrorCommon
  todo!()
}

fn function_b() -> Result<String, MyError> {
  // can fail with MyErrorB or MyErrorCommon
  todo!()
}

Is there an elegant (*) way I can express this?

If I split the error type into two separate types, is there a way to reuse the definition of MyErrorCommon?


(*) by "elegant" I mean something that improves the code - I'm sure one could define a few macros and solve that way, but I don't want to go there

edit: grammar (rust grammar)

 

I experimented with several ways to run my services:

  1. "regular" systemd services (services.glance = { ... };)
  2. nix containers (containers.glance = { ... };)
  3. podman containers (virtualisation.oci-containers.containers.glance = { ... })

and I must say I'm starting to appreciate the last option (the least nixos-y) more and more.

Specifically, I appreciate that:

  • I just have to learn the app/container configuration, instead of also backwards-translating from their config into the various nixos options (of course the .yaml or whatever configuration files are still generated from my nixos config, I just do that in a derivation instead of relying on a module doing it for me - see the sketch after this list)
  • Services are sometimes outdated in nixpkgs (even in unstable - and juggling packages between stable and unstable is yet another complication)
  • I feel like it's more secure (very arguable and also of very little consequence since everything is on my homelab... it's mainly for the warm fuzzies)
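
For reference, the kind of definition I mean, with the app's config generated from nix (simplified sketch - the image tag, paths and generated options are placeholders):

{ pkgs, ... }:
let
  # JSON is a subset of YAML, so a .yaml config file can be generated like this
  glanceConfig = pkgs.writeText "glance.yml" (builtins.toJSON {
    server.port = 8080;
  });
in {
  virtualisation.oci-containers.containers.glance = {
    image = "glanceapp/glance:latest"; # placeholder tag
    volumes = [ "${glanceConfig}:/app/config/glance.yml:ro" ];
    ports = [ "127.0.0.1:8080:8080" ];
  };
}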

Do you guys use one of the options above? Something different?

[–] [email protected] 1 points 4 months ago (3 children)

Given that it downloads random shit from the internet

You seem to trust the javascript ecosystem just as much as I do :)

Jokes aside, the repo has a lock file so it should actually be fine (time will tell)

[–] [email protected] 1 points 4 months ago (5 children)

Found the solution (I think): basically it works as expected once you add outputHashAlgo, outputHashMode and outputHash to your derivation (which turns it into a fixed-output derivation and therefore allows it network access).

documentation
article
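
Something along these lines (heavily simplified and from memory; the bun invocations mirror upstream's Makefile, and the version, paths and hash are placeholders to be filled in):

webui = pkgs.stdenv.mkDerivation {
  pname = "beszel-webui";
  version = "0.0.0";                  # placeholder
  src = beszel-src;                   # the upstream sources, fetched elsewhere
  nativeBuildInputs = [ pkgs.bun pkgs.cacert ];
  buildPhase = ''
    export HOME=$TMPDIR               # bun wants a writable home
    bun install --cwd ./site --frozen-lockfile
    bun run     --cwd ./site build
  '';
  installPhase = ''
    cp -r ./site/dist $out            # the output dir is a guess
  '';
  # declaring the output hash turns this into a fixed-output derivation,
  # which is what makes the network access during the build possible
  outputHashAlgo = "sha256";
  outputHashMode = "recursive";
  outputHash = pkgs.lib.fakeHash;     # replace with the real hash after the first build
};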

 

edit: for the solution, see my comment below

I'm trying to package a go application (beszel) that bundles a bunch of html stuff built with bun (think npm).

The html is generated by running bun install and bun run and then embedded in the go binary with //go:embed.

Being completely ignorant of the javascript ecosystem, my first idea was to just replicate what they do in the Makefile

postConfigure = ''
bun install --cwd ./site
bun run     --cwd ./site build
'' 

but, since bun install downloads dependencies from the net, that fails.

I guess the "clean" solution would be to look for buildNpmPackage or similar (assuming that exists) and let nix manage all the dependencies, but... it's some 800+ dependencies (at least, bun install ... --dry-run lists 800+ things) so that's a hard pass.

I then tried to look at how buildGoPackage handles the vendoring of dependencies, with the idea of replicating that (it downloads what's needed and then compares a hash of what was downloaded with a hash provided in the nix package definition), but... I can't for the life of me decipher how nixpkgs' pkgs/build-support/go/module.nix works.

Do you know how to implement this kind of vendoring in a nix derivation?

[–] [email protected] 1 points 5 months ago

In case anyone comes here with the same problem, the solution is:

attoparsec-aeson = haskellPackages.mkDerivation {
  ...
  postUnpack = ''
    mv source source-aeson
    cp -rL source-aeson/attoparsec-aeson source
    rm -fr source-aeson
  '';
  ...
};
 

Over the years I have accumulated a sizable music library (mostly flacs, adding up to a bit less than 1TB) that I now want to reorganize (ie. gradually process with Musicbrainz Picard).

Since the music lives on my NAS, flacs are relatively big and my network is 1Gbit, I installed in my computer an hdd I had lying around and replicated the whole library there; the idea being to work on the local files and then sync them to the NAS.

I set up Syncthing for replication and... everything works, in theory.

In practice, Syncthing loves to rescan the whole library (given how long it takes, it must be reading all the data and computing checksums rather than just checking the filesystem metadata - why on earth?), which means my under-powered NAS (Celeron N3150) does nothing but rescan the same files over and over.

Syncthing by default rescans directories every hour (again, why on earth?), but it still seems to rescan a whole lot even after I set rescanIntervalS to 90 days (maybe it rescans once regardless when restarted?).

Anyway, I am looking into alternatives.
Are there any you would recommend? (FOSS please)

Notes:

  • I know I could just schedule a periodic rsync from my PC to the NAS, but I would prefer a bidirectional solution if possible (rsync is gonna be the last resort)
  • I read about unison, but I also read that it's not great with big jobs and that it too scans a lot
  • The disks on my NAS go to sleep after 10 minutes of idle time and, if possible, I would prefer not waking them up all the time (which would most probably happen if I scheduled a periodic rsync job - the NAS has RAM to spare, but there's no guarantee it'll keep all the data rsync needs in cache)

[–] [email protected] 2 points 5 months ago

That's the thing you want to build (a single project may generate multiple executables - eg. a server and a client) so it won't help in this case but... I must say, I am impressed and really grateful that you went and looked that up for me! Thanks, mate!

[–] [email protected] 2 points 5 months ago

cabal2nix doesn't care about any source-repository-package in cabal.project (I think it doesn't even read that file?).

In my case, it generated a project that depended on the aeson from nixpkgs (which IIUC in turn comes from hackage) rather than the forked version.

[–] [email protected] 2 points 5 months ago (2 children)

I agree: flakes are great for development (and not just for that)!

Unfortunately I still need to build that third party project from source :)
Maybe I should look into disregarding the whole haskellPackages infrastructure and just build with cabal via a shell script... IDK if that would be accepted in nixpkgs though :/

 

edit: for the solution, see my comment below

I need/want to build aeson and its subproject attoparsec-aeson from source (it's a fork of the "official" aeson), but I'm stuck... can you help out?

The sources of attoparsec-aeson live in a subdirectory of the aeson ones, so I have the sources:

aeson-src = fetchFromGitHub {
  ...
};

and the "main" aeson library:

aeson = haskellPackages.mkDerivation {
  pname = "aeson";
  src = aeson-src;
  ...
};

When I get to attoparsec-aeson, however, I run into a wall: I tried to follow the documentation about sourceRoot:

attoparsec-aeson = haskellPackages.mkDerivation {
  pname = "attoparsec-aeson";
  src = aeson-src;
  sourceRoot = "./attoparsec-aeson"; # maybe this should be "${aeson-src}/attoparsec-aeson"?
                                     # (it doesn't work either way)
  ...
};

but I get

 error: function 'anonymous lambda' called with unexpected argument 'sourceRoot'

Did I fail to spot some major blunder (I am nowhere near an expert)? Does sourceRoot not apply to haskellPackages.mkDerivation? What should I do to make it work?

BTW:

IDK if this may cause issues, but the attoparsec-aeson sources include symlinks to files in the "main" aeson sources:

~/git-clone-of-attoparsec-sources $ tree attoparsec-aeson/
attoparsec-aeson/
├── src
│   └── Data
│       └── Aeson
│           ├── Internal
│           │   ├── ByteString.hs -> ../../../../../src/Data/Aeson/Internal/ByteString.hs
│           │   ├── Text.hs -> ../../../../../src/Data/Aeson/Internal/Text.hs
│           │   └── Word8.hs -> ../../../../../src/Data/Aeson/Internal/Word8.hs
│           ├── Parser
│           │   └── Internal.hs
│           └── Parser.hs
├── attoparsec-aeson.cabal
└── LICENSE
 

Lately I noticed that when I want to ssh to a server using a password I need to specify -o PubkeyAuthentication=no or I won't be asked for a password and the authentication will fail (well, for all I know, setting some other option may work too).

I use password authentication only once on freshly installed servers/vms, so it's not a huge deal, but... it still bothers me (mainly because I don't remember which option to set).

Do you guys have any idea what it may be?

client's ~/.ssh/config

Host 127.*.*.* 192.168.*.* 10.*.*.* 172.16.*.* 172.17.*.* 172.18.*.* 172.19.*.* 172.2?.*.* 172.30.*.* 172.31.*.*
  LogLevel quiet
  Stricthostkeychecking no
  Userknownhostsfile /dev/null

Host *
  ForwardAgent no
  AddKeysToAgent no
  Compression yes
  ServerAliveInterval 10
  ServerAliveCountMax 3
  HashKnownHosts no
  UserKnownHostsFile ~/.ssh/known_hosts
  ControlMaster no
  ControlPath ~/.ssh/master-%r@%n:%p
  ControlPersist no

server's /etc/ssh/sshd_config (it's from the nixos install iso)

AuthorizedPrincipalsFile none
Ciphers [email protected],[email protected],[email protected],aes256-ctr,aes192-ctr,aes128-ctr
GatewayPorts no
KbdInteractiveAuthentication yes
KexAlgorithms [email protected],curve25519-sha256,[email protected],diffie-hellman-group-exchange-sha256
LogLevel INFO
Macs [email protected],[email protected],[email protected]
PasswordAuthentication yes
PermitRootLogin yes
PrintMotd no
StrictModes yes
UseDns no
UsePAM yes
X11Forwarding no
Banner none
AddressFamily any
Port 22
Subsystem sftp /nix/store/78mv13w9mgh0s0rd7rnr6ff4d7a39bpd-openssh-9.7p1/libexec/sftp-server 
AuthorizedKeysFile %h/.ssh/authorized_keys /etc/ssh/authorized_keys.d/%u
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_ed25519_key

 

Solution:
hd-idle is the way to go (if you read their README, they explain that most drives don't support idle timers)

I've been looking into spinning down the drives of my NAS, as I use it infrequently and that brings power drain down from ~30W to ~17W.

Problem is, hdparm -S doesn't seem to do anything for these particular drives: if I set it and wait for the appropriate amount of time (eg. 5 seconds if set to 1) the drives are still reported as "active/idle" and power drain doesn't go down.

Both hdparm -y and hdparm -Y work fine, but I don't seem to be able to find settings for them in tlp (probably because they are commands rather than settings?).

Besides the caveats about disks living longer if they are kept spinning, are there reasons why I shouldn't set up a cron job (well, a systemd timer) that runs hdparm -Y every 10 minutes? (for example, could hdparm -y cause errors if run while the drive is being backed up?)

PS: According to hdparm's manpage, -y puts the drive into standby mode while -Y puts it into sleep mode. Considering that in my case power drain seems the same either way, should I prefer one or the other?
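
For reference, the timer I have in mind would be roughly this pair of units (the names, hdparm path and device are placeholders):

# spindown.service
[Service]
Type=oneshot
ExecStart=/run/current-system/sw/bin/hdparm -Y /dev/sda

# spindown.timer
[Timer]
OnCalendar=*:0/10

[Install]
WantedBy=timers.target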

 

(I'm just starting off with rust, so please be patient)

Is there an idiomatic way of writing the following as a one-liner, somehow informing rustc that it should keep the PathBuf around?

// nevermind the fully-qualified names
// they are there to clarify the code
// (that's what I hope at least)

let dir: std::path::PathBuf = std::env::current_dir().unwrap();
let dir: &std::path::Path   = dir.as_path();

// this won't do:
// let dir = std::env::current_dir().unwrap().as_path();

I do understand why rust complains that "temporary value dropped while borrowed" (I mean, the message says it all), but, since I don't really need the PathBuf for anything else, I was wondering if there's an idiomatic way to tell rust that it should extend its life until the end of the code block.
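
For context, the two-line version above compiles because the owning PathBuf stays alive in its own binding; anything that takes a &Path can then borrow from it (use_dir here is just a made-up example):

use std::path::{Path, PathBuf};

fn use_dir(p: &Path) {
  println!("{}", p.display());
}

fn main() {
  // the PathBuf owns the data, so it must outlive every &Path borrowed from it
  let dir: PathBuf = std::env::current_dir().unwrap();
  use_dir(dir.as_path()); // or simply use_dir(&dir), thanks to Deref<Target = Path>
}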

[–] [email protected] 9 points 6 months ago (1 children)

That, or git log --graph --pretty=oneline

(IDK why, when it comes to learning git, people seem willing to recommend anything except git itself)

 

I want to have my screen (the "dev" workspace) split into three "zones", plus a way to cycle through layouts:

  • on the left side, a tabbed group with all the text editors I start (ie. if I start a new one, it goes there in a new tab)
  • on the top-right, a tabbed group of however many terminals I feel like launching
  • on the bottom-right, my browsers (and possibly other stuff), in a group without tabs
  • a key combination to cycle between: all three "zones" visible, text editors on the left - terminal on the right, text editors on the left - browser on the right, fullscreen browser

So far I've been looking at hyprland (for no particular reason except the hype) and I don't think I can do the above with it (I am by no means an expert, so... maybe it can actually be done?).

Do you know of any WM where it would be possible? (possibly, one with automatic splitting a-la bspwm, that I would use for the other workspaces)

 

I've been looking around for a scripting language that:

  • has a cli interpreter
  • is a "general purpose" language (yes, awk is touring complete but no way I'm using that except for manipulating text)
  • allows writing in a functional style (ie. it has functions like map, fold, etc. and lets you pass functions around as arguments)
  • has a small disk footprint
  • has decent documentation (doesn't need to be great: I can figure out most things, but I don't want to have to look at the interpreter source code to do so)
  • has a simple/straightforward setup (ideally, it should be a single executable that I can just copy to a remote system, use to run a script and then delete)

Do you know of something that would fit the bill?


Here's a use case (the one I ran into today, but this is a recurring thing for me).

For my homelab I need (well, want) to generate a Luhn mod N check digit (it's for my provisioning scripts to generate syncthing device ids from their certificates).

I couldn't find ready-made utilities for this, and I might actually need a variation of the "official" algorithm (IIUC syncthing had a bug in their initial implementation and decided to run with it).

I don't have python (or even bash) available on all my systems, so my goto language for scripts is usually sh (yes, posix sh), which in all honesty is quite frustrating for manipulating data.
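
For concreteness, the textbook Luhn mod N check-character computation is something like this (quick untested sketch in rust, since that's what I've been playing with lately; syncthing's variant reportedly deviates from it):

// compute the Luhn mod N check character for `input`, whose characters
// must all come from `alphabet` (eg. the base32 alphabet for device ids)
fn luhn_mod_n(alphabet: &str, input: &str) -> Option<char> {
  let n = alphabet.chars().count();
  let mut factor = 2;
  let mut sum = 0;
  for c in input.chars().rev() {
    let code_point = alphabet.chars().position(|a| a == c)?;
    let addend = factor * code_point;
    factor = if factor == 2 { 1 } else { 2 };
    // add the base-n digits of addend (addend < 2n, so there are at most two)
    sum += addend / n + addend % n;
  }
  alphabet.chars().nth((n - sum % n) % n)
}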

[–] [email protected] 2 points 10 months ago (1 children)

There's the readDir builtin, but I expect nix might complain if you use that

[–] [email protected] 9 points 10 months ago* (last edited 10 months ago)

IDK about the issue itself but I must say the discourse post does a mighty good job at explaining the reasons the guy was NOT suspended for and a really terrible one at explaining what they actually did suspend him for, except pissing off "the moderation team, in consultation with members of the Foundation board".
