Turn off computer, boot from previous day's image, wipe current day's image, continue using computer.
That's all well and good, but many of these Windows machines were headless or used by extremely non-technical people - think tills at your supermarket or airport check-in desks. Worse, some of these installations were running in the cloud, so console access would have been tricky.
The cloud systems would have been a problem. But any local system, a non-technical user could have fixed easily, because their IT department could simply tell them: turn on your computer, and when it gets to this screen with these words, press the down arrow key one time and press Enter, and your computer will boot normally.
You wildly overestimate the average person's willingness to do that.
Their willingness to do it would primarily come from the fact that they have a job to do, and if their co-workers are doing their jobs because they followed the instructions and they are not, then the boss is going to take a hard look at them.
This relies on the assumption that everyone else, or at least a significant portion, in the office managed to do it.
I'm not talking about whether or not they're actually physically capable of it, of course they are. I'm talking about how people immediately shut down and pretend they can't follow simple directions the second something relates to a computer.
Mmmm. Fair point
Yeah but there’s also always one guy in the group (me) who knows what they’re doing and could just spend an hour doing it for everyone else.
You clearly haven't worked a help desk if you think even those simple instructions are something every end user is capable of or willing to do without issue.
I guess I had really good colleagues. I was the network administrator for a small not-for-profit organization and the only time people came to me with computer problems was when they had tried the things that they knew worked first. If the obvious answers did not fix the problem, then they would bring it to my attention.
…until the CrowdStrike agent updated, and you wind up dead in the water again.
The whole point of CrowdStrike is to be able to detect and prevent security vulnerabilities, including zero-days. As such, they can release updates multiple times per day. Rebooting in a known-safe state is great, but unless you follow that up with disabling the agent from redownloading the sensor configuration update again, you’re just going to wind up in a BSOD loop.
A better architectural solution would have been to have Windows drivers run in Ring 1, giving the kernel the ability to isolate those that are misbehaving. But that risks a small decrease in performance, and Microsoft didn’t want that, so we’re stuck with a Ring 0/Ring 3 only architecture in Windows that can cause issues like this.
That assumes the file is not stored on a writable section of the filesystem and treated as application data, and thus wouldn't survive a rollback. Which it likely would be.
Wouldn't help (on its own), you'd still get auto-updated to the broken version.
If I'm correct, wasn't a fix found and deployed within several hours? So the next auto-update likely would not have had the same issue.
I’m familiar enough with Linux but never used an immutable distro. I recognize the technical difference between what you describe and “go delete a specific file in safe mode”. But how about the more generic statement? Is this much different from “boot in a special way and go fix the problem”? Is it any easier or more difficult than what people had to do on Windows?
Primarily it's different because you would not have had to boot into any safe mode. You would have just booted from the last good image from like a day ago and deleted the current image and kept using the computer.
What’s the user experience like there? Are you prompted to do it if the system fails to boot “happily”?
Honestly, I'm actually not sure as I never had the system break that badly while I was using it.
lol thanks for the answer. This is the really relevant bit isn’t it? My Linux machines have also never died this badly before. But I’ve seen windows do it a number of times before this whole fiasco.
I don't think any of the major distros do it currently (some are working towards it tho), but there are ways; the primary/only one I know of is systemd-boot. It boots one of the entries (usually "Unified Kernel Images") that is marked as "good", or one that still has "tries left" (whichever is newer). An entry with "tries left" gets that count decremented when a boot is unsuccessful; when it reaches 0 it is marked as "bad", and if it boots successfully it gets marked as "good".
So this system basically just requires restarting the machine after an unsuccessful boot, if that isn't already done automatically.
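As a rough illustration of that counting scheme (file names made up here; the real entries live on the ESP and are renamed by the boot loader and by systemd-bless-boot), the counter is encoded in the entry's file name:

```shell
#!/bin/sh
set -eu
d=$(mktemp -d)   # stands in for the ESP's loader/entries/ directory

# A fresh entry with 3 tries left:
touch "$d/linux-6.9+3.conf"

# Before the first boot attempt, the loader renames the entry,
# decrementing "tries left" and recording "tries done":
mv "$d/linux-6.9+3.conf" "$d/linux-6.9+2-1.conf"

# After a successful boot, systemd-bless-boot strips the counter,
# marking the entry "good":
mv "$d/linux-6.9+2-1.conf" "$d/linux-6.9.conf"

ls "$d"
```

If the counter instead runs out (ending at something like `linux-6.9+0-3.conf`), the entry is considered "bad" and the loader falls back to the newest "good" one.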
Would still need to be on site.
True
Realistically, immutability wouldn't have made a difference. Definition updates like this are generally not considered part of the provisioned OS (since they change somewhere around hourly) and would go into /var
or the like, which is mutable persistent state on nearly every otherwise immutable OS. Snapshots like Timeshift are more likely to help.
It's a huge reason why I use BTRFS snapshots. I'm a bit more lax about what gets snapshotted on my desktop, but on a server, everything should live in a snapshot. If an update goes bad, revert to the last snapshot (and snapshots are cheap, so run one with every change and delete older ones).
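A minimal sketch of that "snapshot on every change, prune the old ones" routine, with plain directories standing in for subvolumes so it runs anywhere; on a real btrfs system the two marked lines would be `btrfs subvolume snapshot -r` and `btrfs subvolume delete`, and the names/timestamps here are illustrative:

```shell
#!/bin/sh
set -eu
snapdir=$(mktemp -d)   # stands in for /.snapshots
keep=3                 # keep only the newest 3 snapshots

# One snapshot per change:
for stamp in 01 02 03 04 05; do
    mkdir "$snapdir/root-$stamp"   # real: btrfs subvolume snapshot -r / "$snapdir/root-$stamp"
done

# Prune: list oldest-first and drop everything beyond the newest $keep.
total=$(ls -1 "$snapdir" | wc -l)
drop=$((total - keep))
if [ "$drop" -gt 0 ]; then
    ls -1 "$snapdir" | sort | head -n "$drop" | while read -r old; do
        rmdir "$snapdir/$old"      # real: btrfs subvolume delete "$snapdir/$old"
    done
fi

ls -1 "$snapdir" | sort
```

Reverting is then just booting from (or setting the default subvolume to) the last snapshot that predates the bad update.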
Anything that’s updated with the OS can be rolled back. Now, Windows is Windows, so CrowdStrike handles things its own way. But I bet if Canonical or Red Hat were to make their own versions of CrowdStrike, they would push updates through the regular package repos, allowing them to be rolled back.
Laypeople would be even less able to fix it.
Immutable, not really a difference. Bad updates can still break the OS.
A/B root, however, would be much easier to fix, but it would still be a manual process.
Aren't most immutable Linux distros AB, almost by definition? If it's immutable, you can't update the system because it's immutable. If you make it mutable for updates, it's no longer immutable.
The process should be:
1. Boot from A
2. Install new version to B
3. Reboot into B
4. If unstable, go to 1
5. If stable, repeat from 1, but with A and B swapped
That's how immutable systems work. The main alternative is a PXE system, and in that case you fix the image in one place and power cycle all your machines.
If you're mounting your immutable system as mutable for updates, congratulations, you have the worst of immutable and mutable systems and you deserve everything bad that happens because of it.
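The A/B loop above can be sketched like this, with two directories standing in for the A and B partitions and a pretend health check; the slot names and version strings are made up for illustration:

```shell
#!/bin/sh
set -eu
root=$(mktemp -d)
mkdir "$root/slot_a" "$root/slot_b"
echo "v1" > "$root/slot_a/os-release"   # step 1: booted from A, known good
active=slot_a
standby=slot_b

# Step 2: install the new version into the inactive slot; A is untouched.
echo "v2" > "$root/$standby/os-release"

# Step 3: "reboot" into the standby slot and run a health check.
boot_ok=true   # pretend the new version came up fine

# Steps 4/5: if stable, swap roles so B is now active;
# if unstable, roles stay as they were and A boots again.
if "$boot_ok"; then
    tmp=$active; active=$standby; standby=$tmp
fi

cat "$root/$active/os-release"
```

The key property is that the running system never modifies itself: a bad update only ever lands in the inactive slot, so the last good slot is always one reboot away.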
idk if it would be manual; isn't the point of A/B root to roll back if it doesn't boot properly afterwards?
Honestly, if you're managing kernel and userspace remotely, it's your own fault if you don't netboot. Or maybe Microsoft's. I don't know what the netboot situation looks like in Windows land.
If the sensor was using eBPF (as any modern sensor on Linux should) then the faulty update would have made the sensor crash, but the system would still be stable. But CrowdStrike has a long history of using stupid forms of integration, so I wouldn't put it past them to also load a kernel module that fucks things up unless it's blacklisted in the bootloader. Fortunately that kind of recovery is, if not routine, at least well documented and standardized.
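For the kernel-module case, that documented recovery is typically a one-off edit of the kernel command line from the boot menu; the module name below is hypothetical, since the actual module name isn't given here:

```text
# At the GRUB menu, press 'e' on the boot entry and append
# module_blacklist= to the line starting with "linux":
linux /vmlinuz-6.9.0 root=UUID=... ro module_blacklist=hypothetical_sensor_mod
```

The `module_blacklist=` kernel parameter keeps the listed module from loading for that boot only, so the system comes up clean without any persistent configuration change.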
I did hear that one of their newer versions does use eBPF, but I haven't even remotely looked into it.
They do have a bpf sensor. It's still shite, managing to periodically peg a CPU core on an idle system. They just lifted and shifted their legacy code into the bpf sensor, they don't actually make good use of eBPF capabilities.
You mean like NixOS?
It wouldn't technically stop anything, it would just make your life Hell on Earth if you tried to add that self-updating ring-0 proprietary software to your servers.
But I guess what you are looking for is immutable infrastructure? That one would stop the problem.
In the best case it could automatically reboot into working configuration.
And download the update again
No we are having some fun!
NixOS wouldn't have had any issues; it maintains state information based on configuration, and you can choose to load an older generation from the bootloader. For other immutable distros it depends on how they work.
NixOS still lets Discord and Steam download their core files independently of the configuration. These get stored in the user's home dir but are effectively not part of the immutability promise. I believe that the CrowdStrike problem was caused by a file updated in a similar manner, so it would have been an issue on any distro. That is one big problem with a driver relying on files outside the package manager's control. At least with Steam and Discord they cannot take your whole system down.
My understanding is that the main problem here is that the machines became effectively unbootable. This wouldn't happen in NixOS because, if set up properly, all core system files are handled by NixOS itself. That being said, obviously it depends on how a user manages their system.
Ideally, yes, all core files would be handled by NixOS. Except I doubt that is how CrowdStrike would work on NixOS if it existed there.
CrowdStrike downloads and manages its own definition file that gets updated multiple times per day. It is this file that was malformed, causing the driver to break. It needs to be updated more regularly than other packages, so it would very likely not be managed by the Nix package manager but instead treated as application data, outside its scope.
This is how updates to Steam and Discord are handled on NixOS. Only the core updater is packaged and the rest of the application is self-managed. So there is precedent for this behaviour on NixOS (although these won't break your system if a bad update happens, as the files are in your user dir).
None. You'd still have to be on site for every machine.