glizzyguzzler

joined 2 years ago
[–] [email protected] 2 points 3 weeks ago (1 children)

So, extra background: I was put off by Proxmox's weird steps to get ISOs onto the system via USB, so I was like "I am not touching the backup stuff" and just rolled my own (I treat the VMs/containers on my Proxmox server like individual servers and back them up accordingly; I do not back up the underlying Proxmox instance itself).

I see Proxmox has a pruning setting similar to Restic's, and it exports backup files like Incus does. So I'd say yes, Proxmox is a one-stop shop for backups, while with Incus you have to combine its container export options with Restic and wire that up in a cron job.
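
The cron side of that is tiny; a sketch like this, where the schedule and script path are just placeholders for whatever you actually use:

```
# /etc/cron.d/incus-backup -- hypothetical schedule and script path
# run the export + Restic script every night at 03:30
30 3 * * * root /usr/local/bin/incus-backup.sh
```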

Still hard to say what I'd definitively tell a newbie to go with. I found (and still find) the Proxmox UI daunting and difficult, while the Incus UI makes much more sense to me and is easier (it has an ISO-pulling system built in, for instance). But as you've pointed out, Proxmox gives you an easy way to have robust backups, which takes much more effort on the Incus side.

As backups are paramount, Proxmox for a total newbie. If someone is familiar with scripting, then Incus, because it needs scripted backups to be as robust as Proxmox's backups. @[email protected] this conclusion should help you choose Proxmox (most likely)!

[–] [email protected] 1 points 3 weeks ago (3 children)

https://linuxcontainers.org/incus/docs/main/howto/instances_backup/#instances-backup-export

A bit down from the snapshots section is the export section. What I do is export to a staging location, then back that up with Restic. I do not compress on export; instead I do it myself with the --rsyncable flag added to zstd (the flag applies to gzip too). With the rsyncable flag, incremental backups still work on the compressed archive, so it's space efficient despite being compressed. I don't worry about collating individual archives; instead I rely on Restic's built-in versioning to get a specific version of the VM/container if I ever need it.
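
Roughly, the moving parts look like this; names and paths are placeholders, and double-check the export flags against the docs linked above:

```
#!/bin/bash
# Sketch of the export -> compress -> Restic flow (paths and names are examples)
set -euo pipefail

INSTANCE="mycontainer"
STAGING="/backups/incus"

# Export the instance as an uncompressed tarball (compression handled below)
incus export "$INSTANCE" "$STAGING/$INSTANCE.tar" --compression none

# Compress with zstd's --rsyncable so Restic's incrementals stay space efficient
zstd --rsyncable -f "$STAGING/$INSTANCE.tar" -o "$STAGING/$INSTANCE.tar.zst"
rm "$STAGING/$INSTANCE.tar"

# Let Restic handle versioning/pruning of the staged archives
restic -r /mnt/backupdrive/restic-repo backup "$STAGING"
```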

Also, for a few of my containers I linked the real file system (big ole data drive) into the container, and I just snapshot the big ole data drive and send said snapshot using the BTRFS/ZFS methods, because that seemed easier. Those containers are easy enough to stand up on a whim and then just need said data hooked back up.

I also Restic the sent snapshot, since snapshots are write-static (read-only) and Restic can read from them at its leisure. Restic is the final backup orchestrator for all of my data. One Restic call == one "restic snapshot", so I call it monolithically, with one call covering several data sources.
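
For the data-drive flavor, a sketch of the BTRFS version (ZFS has equivalent snapshot/send commands; the paths here are examples, not my real layout):

```
# Read-only snapshot of the data subvolume (instant, write-safe)
btrfs subvolume snapshot -r /tank/data "/tank/snaps/data-$(date +%F)"

# Send it to the backup disk (add -p <previous-snap> for incremental sends)
btrfs send "/tank/snaps/data-$(date +%F)" | btrfs receive /mnt/backupdrive/snaps

# Restic then reads the static snapshot at its leisure
restic -r /mnt/backupdrive/restic-repo backup "/mnt/backupdrive/snaps/data-$(date +%F)"
```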

Hope that helps!

[–] [email protected] 1 points 3 weeks ago* (last edited 3 weeks ago) (5 children)

https://linuxcontainers.org/incus/docs/main/howto/instances_backup/#instances-snapshots

This describes the gist: it's all about snapshots! Incus loves BTRFS/ZFS.

There's no true need to stop everything, as far as I can tell.
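
A sketch of what I mean, live and without stopping anything (command names per the current Incus CLI, instance name made up; check `incus snapshot --help` if your version differs):

```
# One-off snapshot of a running instance
incus snapshot create mycontainer pre-upgrade

# Or let Incus snapshot on a schedule and expire old ones
incus config set mycontainer snapshots.schedule "@daily"
incus config set mycontainer snapshots.expiry "7d"
```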

Stopping everything really only matters for databases, and that applies to any backup system: a snapshot avoids backing up a database mid-write (guaranteed failure), but the snapshot could still land in the middle of a live multi-step database operation, leaving the files intact but the database in a cursed state. So for databases I make sure to either stop and back up (SQLite losers) or back up live (Gods' chosen Postgres), specifically so no very niche database failures occur, even though the snapshots themselves are instant/write-safe!!
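
Concretely, that special handling is something like this (service and database names are made up):

```
# SQLite: stop the app so nothing writes, then take a consistent copy
systemctl stop myapp
sqlite3 /srv/myapp/data.db ".backup '/backups/myapp-data.db'"
systemctl start myapp

# Postgres: a live logical dump is safe while the DB keeps running
pg_dump -U myuser mydb | gzip > /backups/mydb.sql.gz
```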

The recovery plan is: restore the snapshot, and if the database hit that 0.1% chance of being caught mid multi-step operation, I have the .gz dump to restore from.

[–] [email protected] 2 points 3 weeks ago* (last edited 3 weeks ago)

There is a larger community. I have Proxmox and Incus on two devices, and for the basics (LXC containers/VMs) Incus is way more straightforward. I'm ditching Proxmox at the next reinstall on the other device (that Proxmox install is the OS version). If you're doing regular stuff it's easy enough even with the reduced community; they've covered the basics well.

But again, the Proxmox community is larger. I started with it for that reason too.

[–] [email protected] 4 points 3 weeks ago (9 children)

Since you're not using Proxmox as an OS install, why not check out Incus? It accomplishes the same goals as Proxmox but is easier to use (for me at least). Make sure you install Incus' web UI, it makes it ez pz. Incus does VMs and containers just like Proxmox, but it isn't clustering-first, it's machine-first. It does do clustering, but the default UI starts from your single machine, so it makes more sense to me. The forums are very useful and questions get answered quickly, and since Incus is a fork of Canonical's LXD, the LXD answers expand the available pool too (for now, almost all commands are the same between Incus and LXD). I run the Incus stable release from the Zabbly package repo; I think the long-term release doesn't have the web UI yet (I could be wrong). Never have had a problem. When Debian 13 hits I'll switch to whatever is included there and should be set.

https://linuxcontainers.org/incus/docs/main/installing/#installing-from-package
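
For the web UI bit specifically, it was roughly this on my Zabbly install (package name and port as I remember them; double-check against the docs above):

```
# Install the web UI package from the Zabbly repo, then expose the API for the browser
sudo apt install incus-ui-canonical
incus config set core.https_address :8443
# Then browse to https://<host>:8443 and follow the certificate setup prompts
```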

I use incus for VMs and LXC containers. I also have Docker on the Debian system. Many types of containers for every purpose!

I installed Incus on a Debian system that I encrypted with LUKS. It unlocks after reboots with a USB drive; basically I use it like a YubiKey, but you could leave it plugged in so the system always reboots with no problem. There's also a network unlock option, but I didn't try to figure that out. Without the USB drive or network unlock, you'll have to enter the encryption key on every reboot.
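
If you want the USB-as-key trick, the usual Debian-flavored approach is roughly this (a sketch, not exactly my setup; device paths and file names are placeholders):

```
# Put a random keyfile on the USB stick and enroll it as an extra LUKS key
dd if=/dev/urandom of=/mnt/usbkey/luks.key bs=512 count=8
cryptsetup luksAddKey /dev/nvme0n1p3 /mnt/usbkey/luks.key

# /etc/crypttab: read the key from the USB device at boot via the passdev keyscript
#   root_crypt UUID=<luks-uuid> /dev/disk/by-uuid/<usb-uuid>:/luks.key:5 luks,keyscript=passdev

# Rebuild the initramfs so early boot knows about the keyscript
update-initramfs -u
```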

[–] [email protected] 4 points 1 month ago

Not a doctor, but based on research I’ve seent brain fog (in likely many cases) seems to be due to inflammation. https://www.autoimmuneinstitute.org/covid_timeline/brain-fog-likely-caused-by-brain-inflammation-its-not-just-all-in-their-head/

Have your friend try inflammation-reducing drugs like metformin. Metformin specifically; maybe there are others, I'm sadly not a doctor. Metformin is a magic drug that's not just for diabeetus.

It won’t be immediate, but maybe it could help your friend recover. Idk if cranking yourself will break through when it’s a blocking mechanism causing the problem.

[–] [email protected] 0 points 1 month ago (1 children)

It's wild, we're just completely talking past each other at this point! I don't think I've ever gotten to a point where I'm like "it's blue" and someone's like "it's gold" so clearly. And I know enough to know what I'm talking about and that I'm not wrong (unis are not getting tons of grants to see "if AI can think"; no one but fart-sniffing AI bros would fund that (see: OP's requested source is from an AI company about their own model), and research funding goes towards making useful things, not towards whether ChatGPT is really going through it like the rest of us), but you are very confident in yourself as well. Your mention of information theory leads me to believe you've got a degree in the computer science field. The basis of machine learning is not in computer science but in stats (math), so I won't change my understanding based on your claims, since I don't think you deeply know the basis, just the application. The focus on using the "right words" as a gotcha bolsters that vibe. I know you won't change your thoughts based on my input, so we're at the age-old internet stalemate! Anyway, just wanted you to know why I decided not to entertain what you've been saying; I'm sure I'm in the same boat from your perspective ;)

[–] [email protected] 1 points 1 month ago

You can, but the stuff that’s really useful (very competent code completion) needs gigantic context lengths that even rich peeps with $2k GPUs can’t do. And that’s ignoring the training power and hardware costs to get the models.

Techbros chasing VC funding are pushing LLMs to the physical limit of what humanity can provide power- and hardware-wise. Way less hype and letting them come to market organically in 5-10 years would give the LLMs a lot more power efficiency at the current context and depth limits. But that ain't this timeline; we just got VC money looking to buy nuclear plants and fascists trying to subdue the US for the techbro oligarchs, womp womp

[–] [email protected] 1 points 1 month ago

No, they're right. The "research" is biased by the company that sells the product and wants to hype it. Many layers don't make it think or reason, but they're glad to put "think" and "reason" in quotes that they hope peeps will forget were there.

[–] [email protected] 2 points 1 month ago (1 children)

So close, LLMs work via matrix multiplication, which is well understood by many meat bags and matrix math can’t think. If a meat bag can’t do matrix math, that’s ok, because the meat bag doesn’t work via matrix multiplication. lol imagine forgetting how to do matrix multiplication and disappearing into a singularity or something

[–] [email protected] 1 points 1 month ago

They do not, and I, a simple skin-bag of chemicals (mostly water tho) do say

[–] [email protected] 1 points 1 month ago (3 children)

I was channeling the Interstellar docking computer (“improper contact” in such a sassy voice) ;)

There is a distinction between data and an action you perform on data (matrix maths, codec algorithm, etc.). It’s literally completely different.

An audio codec (not a pipeline) is just actually doing math - just like the workings of an LLM. There’s plenty of work to be done after the audio codec decodes the m4a to get to tunes in your ears. Same for an LLM, sandwiching those matrix multiplications that make the magic happen are layers that crunch the prompts and assemble the tokens you see it spit out.

LLMs can't think; that's just the fact of how they work. The problem is that AI companies are happy to describe them in terms that make you think they can think, to sell their product! I literally cannot be wrong that LLMs cannot think or reason; there's no room for debate, it was settled long ago. AI companies will string LLMs together and let them chew for a while to try to make them catch when they're dropping bullshit. It's still not thinking and reasoning, though. They can be useful tools, but LLMs are just tools, not sentient or verging on sentient.

 
1
rule (files.catbox.moe)

I have a bridge device set up with systemd, br0, that replaces my primary ethernet eth0. With the br0 bridge device, Incus is able to create containers/VMs that have unique MAC addresses that are then assigned IP addresses by my DHCP server. (sudo incus profile device add <profileName> eth0 nic nictype=bridged parent=br0) Additionally, the containers/VMs can directly contact the host, unlike with MACVLAN.
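
For reference, a systemd-networkd version of that bridge looks roughly like this (a sketch; interface names are mine, yours will differ):

```
# Bridge device
cat > /etc/systemd/network/10-br0.netdev <<'EOF'
[NetDev]
Name=br0
Kind=bridge
EOF

# Enslave the physical NIC to the bridge
cat > /etc/systemd/network/20-eth0.network <<'EOF'
[Match]
Name=eth0

[Network]
Bridge=br0
EOF

# The bridge itself carries the host's IP (via DHCP here)
cat > /etc/systemd/network/30-br0.network <<'EOF'
[Match]
Name=br0

[Network]
DHCP=yes
EOF

systemctl restart systemd-networkd
```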

With Docker, I can't see a way to get the same feature-set with their options. I have MACVLAN working, but it is even shoddier than the Incus implementation as it can't do DHCP without a poorly-maintained plugin. And the host cannot contact the container due to the MACVLAN method (precludes running a container like a DNS server that the host server would want to rely on).
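
For reference, the MACVLAN setup I'm comparing against looks roughly like this (subnet, parent, and names are placeholders), with Docker's IPAM handing out the address instead of my DHCP server:

```
# LAN-attached macvlan network; IPs come from Docker's IPAM, not my DHCP server
docker network create -d macvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=eth0 lan-macvlan

# Containers then get a static LAN address, but the host still can't reach them
docker run -d --name web --network lan-macvlan --ip 192.168.1.50 nginx
```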

Is there a way I've missed with the bridge driver to specify a specific parent device? Can I make another bridge device off of br0 and bind to that one so it behaves host-like? Searching really fell apart when I got to this point.

Also, if someone knows how to match Incus' networking capability with Podman, I would love to hear that. I'm eyeing a move to Podman Quadlets (with Debian 13) once I've got myself well-versed with Docker (and its vast support infrastructure to learn from).

Hoping someone has solved this and wants to share their powers. I can always put a Docker/podman inside of an Incus container, but I'd like to avoid onioning if possible.

1
butts rule (files.catbox.moe)

1
rule (files.catbox.moe)

1
tithe rule (files.catbox.moe)

1
rule (files.catbox.moe)

Context is:

  • I was luckily banned from the fallen onehundredninetysix for vehemently rejecting the orchestrated hoodwinking

  • luckily banned because I'd have posted Boston's sloppiest there like three times before it properly made it to the people's onehundredninetysix

  • I use the default web UI which is aggressively broken on my old phone like the pleb I am

 
1
hiberuletion (files.catbox.moe)

1
rule (files.catbox.moe)

1
rule (lemmy.blahaj.zone)
 