Tesla is just getting exactly what they paid for!
Nope. I don’t talk about myself like that.
Please, enlighten me: how would you remotely service a few thousand BitLocker-locked machines that won’t boot far enough to get an internet connection?
Intel AMT.
Nah. The person you responded to asked a facetious question. You started being pedantic. Everyone knows what it means when someone says something is bricked.
Nah, it’s fair enough. I’m not trying to start an argument about any of this. But ya gotta talk in the terms the insurance people talk in (because that’s the language your C-suite understands). If you say DR… and didn’t actually DR… that can cause some auditing problems later. I unfortunately (or fortunately… I dunno) hold a C-suite position in a few companies. DR is a nasty word. Just like “security incident” is a VERY nasty phrase.
So it isn’t whether you’re using Azure, it’s whether you’re using CrowdStrike (Azure related or not)
No. The Azure platform is using CrowdStrike on its hypervisors. So simply using Azure could be enough to hurt you in this case, even if your Azure host isn’t running CrowdStrike itself. But yes, otherwise it’s a mix of Windows + CrowdStrike.
AND even then you can reflash the BIOS. It’s time-consuming and costly, but you can.
Then nothing can be bricked, because on paper you can desolder the ROM chip and put another one in its place.
If you want to be stupidly pedantic about shit, then nothing is anything.
You can absolutely start writing garbage to the BIOS and brick the mobo firmware.
Yes, but the Azure platform itself was using it. So many of those systems were down overnight (and there are probably still stragglers). The guy you responded to specifically called out Azure-based services.
Yeah, I can only imagine trying to walk someone through an offsite system that got BitLockered because you need to get into Safe Mode. Reimaging from scratch might just be a faster process, assuming your infrastructure is set up to do it automatically over the network.
The stuff I copied into the end of my comment is direct from CrowdStrike.
Eh. This particular issue is making machines bluescreen.
Virtualized assets? If there’s a will, there’s a way. Physical assets with REALLY nice KVMs… you can probably mount an ISO to boot into and remove the stupid definitions causing this shit. Everything else? Yeah… you probably need to be there physically to fix it.
But I will note that many companies by policy don’t allow USB insertion… virtually or not. Which will make this considerably harder across the board. I agree that I think the majority could be fixed remotely. I don’t think the “other” categories are only 1%… I think there’s many more systems that probably required physical intervention. And more importantly… it doesn’t matter if it’s 100% or 0.0001%… If that one system is the one that makes the company money… % population doesn’t matter.
https://finance.yahoo.com/quote/CRWD/
Not enough… only down 8.9% and it even rebounded overnight…
I think we’re defining disaster differently. This is a disaster.
I’ve not read a single DR document that says “research potential options”. DR stuff tends to go into play AFTER you’ve done the research that states the system is unrecoverable. You shouldn’t be rolling DR plans in this case at all, as it’s recoverable.
I imagine CrowdStrike pulled the update
I also would imagine that they’d test updates before rolling them out. But we’re here… I honestly don’t know though. None of the systems under my control use it.
Literally none of the programs you outlined here are Windows-exclusive. If you let people lecture you on this shit, that’s on you.
If you want to run around without EDR/XDR software, be my guest.
I don’t think anyone is saying that… But picking programs that your company has visibility into is a good idea. We use Wazuh. I get to control when updates are rolled out. It’s not a massive shit show when the vendor rolls out the update globally without sufficient internal testing. I can stagger the rollout as I see fit.
I mean - this is just a giant test of disaster recovery plans.
Anyone who starts DR operations due to this did 0 research into the issue. For those running into the news here…
CrowdStrike Blue Screen solution
CrowdStrike blue screen of death error occurred after an update. The CrowdStrike team recommends that you follow these methods to fix the error and restore your Windows computer to normal usage.
Rename the CrowdStrike folder
Delete the “C-00000291*.sys” file in the CrowdStrike directory
Disable CSAgent service using the Registry Editor
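For what it’s worth, the whole workaround really boils down to removing that one channel file. Here’s a rough Python sketch of just that step, purely illustrative: it assumes the default CrowdStrike install path and admin rights, and that you’re booted into Safe Mode or a recovery environment where you can actually run something; in practice most people will just delete the file by hand.

```python
# Illustrative sketch only, not an official CrowdStrike tool.
# Deletes the problematic channel file(s) matching C-00000291*.sys
# from the default CrowdStrike driver directory. Run as Administrator
# from Safe Mode or a recovery environment.
import glob
import os

DRIVER_DIR = r"C:\Windows\System32\drivers\CrowdStrike"  # default install path

for path in glob.glob(os.path.join(DRIVER_DIR, "C-00000291*.sys")):
    print(f"Removing {path}")
    os.remove(path)
```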
No need to roll full backups… as they’ll likely just try to update again anyway and BSOD again. Caching servers are a bitch…
LMFAO, Blizzard still hasn’t fucking learned? I walked away from them years ago. But I thought the point of WoW Classic was for them to just leave it the fuck alone. The game was already at its peak. Stop fucking with it.
eBay and decommissions. I got really lucky on my SSDs; those were all from a decommission. The company was going to pay an ITAD for destruction. I picked it all up and wiped it on site. The rest is relatively cheap hardware, Supermicros and such… but with enough of them you can build a resilient cluster.
A lot of my stuff is eBay… I did recently purchase a new rack, probably the only “new” item in my setup. The old one had issues… and I didn’t want to deal with thrifting broken racks anymore. And I needed a taller 45U rack rather than a standard 42U rack… Also, the extra depth means I can accommodate the 60-bay server in the future if it comes to that.
But things like 40 Gbps networking… eBay. The Proxmox servers are decommissioned units. The TrueNAS server was eBay. Switches were eBay… Oh! The firewalls… that was a new purchase. I am stupid lucky to live somewhere with 8 Gbps fiber, and I needed real horsepower to push that with IDS/IPS enabled, so that box was bought new from Supermicro. The SAS spinning-rust drives I picked up on Reddit homelabsales or something like that a while back. PDUs were eBay… UPSes were eBay… expansion batteries were Craigslist. Most cables were new from FS.
Previous versions of my rack were government liquidation/auctions. My dad has a lot of that equipment now. I found one auction for $1,400 that was basically a whole rack’s worth of shit… most of it pretty usable 12th- and 13th-gen Dells. And another auction for $600 that had a Dell M1000e with some 4 TB of DDR4 RAM…
But you can do a lot of this shit with a cluster of little N100 boxes if you really wanted. I just happened to get my hands on enterprise level equipment… So I joined the Romans…
Agreed, I don’t blame the publishers for this. It’s clearly working on some amount of the population that makes it worthwhile when they do the spreadsheets. The only beta game I’ve purchased recently lets you self-host servers, and I was happy enough with the state it was in even if it had been dropped and died altogether. I refuse to purchase just about anything else that is still in “beta” or “early access”. I remember when “Beta” meant “download this game and play it… If you like it you can buy it next month”.
It’s that population that actively makes games worse for all of us as publishers can choose to just be lazy. I was stupid happy when BG3 got the praise it got on launch. That’s what it used to be… that’s how it should be.
Easier to search than many “official” channels…