this post was submitted on 19 Jul 2024
640 points (100.0% liked)

Technology


IT administrators are struggling to deal with the ongoing fallout from the faulty CrowdStrike update. One spoke to The Register to share what it is like at the coalface.

Speaking on condition of anonymity, the administrator, who is responsible for a fleet of devices, many of which are used within warehouses, told us: "It is very disturbing that a single AV update can take down more machines than a global denial of service attack. I know some businesses that have hundreds of machines down. For me, it was about 25 percent of our PCs and 10 percent of servers."

He isn't alone. One administrator on Reddit reported that 40 percent of their servers were affected, and that 70 percent of client computers, roughly 1,000 endpoints, were stuck in a boot loop.

Sadly, for our administrator, things are less than ideal.

Another Redditor posted: "They sent us a patch but it required we boot into safe mode.

"We can't boot into safe mode because our BitLocker keys are stored inside of a service that we can't login to because our AD is down.

50 comments
[–] [email protected] 13 points 11 months ago

80% of our machines were hit. We were working until past 9pm on Friday night, running around putting in BitLocker keys and running the fix. Our organization made it worse by hiding the BitLocker keys from local administrators.

Also gotta say... the way the boot sequence works, combined with the nonsense with RAID/NVMe drivers on some machines, really made it painful.
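That first point, recovery keys locked away where the people doing the recovery can't reach them, came up repeatedly during the incident. As a hedged illustration only: one way to avoid it is to export the recovery protectors for critical volumes ahead of time to somewhere reachable when AD and the key-escrow service are down. The sketch below wraps the built-in manage-bde tool from Python; the drive list, output path, and the idea of a plain-text dump are illustrative assumptions (in practice you would store this in a sealed, offline location, not on the machine itself).

```python
# Sketch: dump BitLocker recovery protectors for local volumes to a file
# so they are reachable even if AD or the key-escrow service is down.
# Uses the built-in manage-bde tool; run from an elevated prompt.
# Drive list and output path are illustrative assumptions.
import subprocess
from pathlib import Path

DRIVES = ["C:"]                                  # adjust to the volumes you protect
OUTPUT = Path("bitlocker-recovery-keys.txt")     # store somewhere safe and offline

def dump_recovery_info() -> None:
    with OUTPUT.open("w") as out:
        for drive in DRIVES:
            # "manage-bde -protectors -get <drive>" prints the key protectors,
            # including the numerical recovery password, for that volume.
            result = subprocess.run(
                ["manage-bde", "-protectors", "-get", drive],
                capture_output=True, text=True, check=False,
            )
            out.write(f"=== {drive} ===\n{result.stdout}\n")

if __name__ == "__main__":
    dump_recovery_info()
    print(f"Wrote recovery protector info to {OUTPUT.resolve()}")
```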

[–] [email protected] 11 points 11 months ago

Yet again: Switch to Linux.

[–] [email protected] 8 points 11 months ago

I got super lucky. Got paid for my car just before the dealership systems went down, and got my return flight two days before this shit started.

[–] [email protected] 6 points 11 months ago (1 children)

If it only impacts a percentage of your machines, then either there was a problem in the deployment strategy or the solution wasn't worthwhile to begin with.

[–] [email protected] 9 points 11 months ago (2 children)

... So your point was that it would have been better if everything went down?

There are plenty of reasons why deployments are done in stages, and I'm guessing that after today strategies will change to apply updates in groups to avoid everything going down at once.

Also, dear God, stop using Windows as a server, or even as a client for that matter. If you're paying actual money to get this shit, then the results are on you.
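The staged-rollout idea in the comment above is essentially ring-based deployment: push an update to a small canary group first and widen only after it proves stable. A minimal sketch of how hosts might be assigned to rings deterministically is below; the ring names, percentages, and hostnames are invented for illustration and are not from the article or CrowdStrike's tooling.

```python
# Sketch: deterministic assignment of hosts to rollout rings so an update
# reaches a small canary group first and the whole fleet only later.
# Ring names/sizes and hostnames are illustrative, not from the article.
import hashlib

RINGS = [
    ("canary", 0.01),    # ~1% of the fleet gets the update first
    ("early", 0.10),     # next ~10%
    ("broad", 1.00),     # everyone else
]

def ring_for_host(hostname: str) -> str:
    """Hash the hostname into [0, 1) and map it to the first ring whose cutoff covers it."""
    digest = hashlib.sha256(hostname.encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64
    for name, cutoff in RINGS:
        if bucket < cutoff:
            return name
    return RINGS[-1][0]

if __name__ == "__main__":
    for host in ["wh-pos-001", "wh-pos-002", "dc-sql-01", "hq-laptop-17"]:
        print(host, "->", ring_for_host(host))
```

Updates would then be promoted from canary to early to broad only after a soak period in each ring, which caps the blast radius of a bad push.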

[–] [email protected] 4 points 11 months ago

No.

My main point was that CrowdStrike has always been lazy man's garbage.

[–] [email protected] 5 points 11 months ago (4 children)

Why the fuck does an antivirus need a kernel driver?

[–] [email protected] 4 points 11 months ago

Just a thought from experience: be wary of any critical product from, or taking a job at, a company run by an accountant. CrowdStrike CEO... accountant!

Accounting firms are an obvious exception.
