This is a laughably bad take.
You do realize sysadmins were fixing the Windows issue and not just waiting on Microsoft and CrowdStrike - right? They just had to delete a file.
Oh! So that’s why the outage took so long to recover from! “Just deleting a file” takes that long!
I’m glad you said it!
Uh, yes. Physically touching thousands of computers to boot them into safe mode and delete a file is time-consuming. It turns out physically touching thousands of machines is time-consuming anywhere, especially when it’s all of them at once.
Which is why your take is laughably bad. Stick to the tech and not the zealotry next time, and maybe don’t get your tech news from CNN.
You have no idea what you’re talking about.
The fix is to boot into safe or recovery mode, delete a file, reboot. That’s it.
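Concretely, the per-machine step really was tiny. A rough sketch of it (assuming the publicly reported CrowdStrike channel-file name pattern; in practice this was done by hand or with a one-line del command from the recovery console, not a script):

```python
import glob
import os

# Directory named in CrowdStrike's public remediation guidance;
# the machine must already be booted into safe/recovery mode.
DRIVER_DIR = r"C:\Windows\System32\drivers\CrowdStrike"

# The faulty channel file was widely reported to match this pattern.
for path in glob.glob(os.path.join(DRIVER_DIR, "C-00000291*.sys")):
    print(f"deleting {path}")
    os.remove(path)

# Then reboot normally.
```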
The reason it takes so long is that millions of PCs are affected, most of which are normally administered remotely.
So sysadmins have to drive to multiple places, while their usual workloads wait.
On top of that, you need the disk-encryption (BitLocker) recovery key for each PC just to boot it into safe mode.
Those are often stored centrally on a server - which may also be encrypted and affected.
Or on an Azure file share, which had an outage at the same time.
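To show why that dependency bites: in many shops the BitLocker recovery passwords are escrowed to Active Directory, so you need a reachable, healthy domain controller before you can unlock even the first machine. A hedged sketch of pulling a machine's recovery password over LDAP (server, account, computer name, and base DN are placeholders; assumes the ldap3 package and that keys are escrowed as msFVE-RecoveryInformation objects):

```python
from ldap3 import Server, Connection, SUBTREE

# Placeholder connection details - a real environment has its own
# domain controller, service account, and base DN.
server = Server("dc01.example.com", use_ssl=True)
conn = Connection(server, user="EXAMPLE\\helpdesk", password="***", auto_bind=True)

computer = "WORKSTATION-042"  # hypothetical machine name

# BitLocker recovery passwords escrowed to AD are stored as
# msFVE-RecoveryInformation objects beneath each computer object.
conn.search(
    search_base="DC=example,DC=com",
    search_filter="(objectClass=msFVE-RecoveryInformation)",
    search_scope=SUBTREE,
    attributes=["msFVE-RecoveryPassword"],
)

for entry in conn.entries:
    # Keep only the entries that sit under this computer's object.
    if f"CN={computer}," in entry.entry_dn:
        print(entry["msFVE-RecoveryPassword"])
```

And every machine you visit needs its own 48-digit recovery password read out or typed in at the console before you can even get to the file.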
Maybe some of the recovery keys are missing. Then you have to reinstall the PC and re-configure every application that was running on it.
And when all of that is over, the admins have to get back on top of all the tasks that were sidelined, which may take weeks.