The article has been updated with the root cause: CrowdStrike. The reason is simple: Azure runs tons of Windows systems protected by CrowdStrike Falcon, and CrowdStrike released a bad version that is causing boot loops on Windows computers, including Windows VM servers.
At a shallow glance at very limited data that wasn't collected for this purpose, it looks like we've got a tier 1 failure, maybe Chicago westbound.
I don’t have enough topical information or expertise to have a discussion about causality or truth. This is not the right venue and I’d just be an observer to whatever conversation was taking place there.
Microsoft, Azure, and CrowdStrike have all stated the root cause at this point. Furthermore, this tells me most Falcon sensor installs are configured badly; we also use CrowdStrike and have ours set to “latest version - 1” to ensure this exact thing doesn’t happen.
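(For anyone unfamiliar with what “latest version - 1” means in practice, here's a minimal sketch of the idea. The function name and build numbers below are made up for illustration, and in Falcon this is normally set through a sensor update policy in the console, not code; the point is simply that you never auto-deploy the newest sensor build, so a bad release has time to blow up somewhere else before it reaches you.)

    # Hypothetical sketch only - not the Falcon API. Illustrates "latest
    # version - 1" (N-1) pinning: always run one build behind the newest release.
    from typing import Sequence

    def pick_n_minus_one(available_builds: Sequence[str]) -> str:
        """Return the second-newest build from a list of version strings.

        Version strings are assumed to be dotted integers, e.g. "7.15.18513".
        Falls back to the only build if just one exists.
        """
        ordered = sorted(available_builds, key=lambda v: [int(p) for p in v.split(".")])
        return ordered[-2] if len(ordered) > 1 else ordered[-1]

    if __name__ == "__main__":
        builds = ["7.15.18511", "7.16.18605", "7.15.18513"]   # made-up build numbers
        print(pick_n_minus_one(builds))  # -> 7.15.18513, i.e. newest minus one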
Cool. But routers don’t run MS, and neither does my organization sitting on either side of the connection. So right now I don’t give a flying fuck about what some assholes did to Windows or the root cause. I want my westbound throughput back on primary so I can keep my hoppers full. Right now it looks like some other assholes fucked up tier 1.
There aren’t any backbone outages being discussed right now. Many servers that run MANY services are on Windows and use CrowdStrike: flights, banks, entertainment (some of Netflix, for example).
The overall result: it looks like a backbone outage, but isn’t.
Thank you.
But, fuck. That means we screwed up the primary design, or someone broke the contract.
Gotta work today.