I am using unattended-upgrades across multiple servers. I would like updates to be rolled out gradually, either randomly or to a subset of test/staging machines first. Is there a way to do that?
An obvious option is to set some machines to update on Monday and the others to update on Wednesday, but that only gives me weekly updates…
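For reference, that per-day staggering would just be a drop-in override on the systemd timer that triggers unattended-upgrades, roughly like this on a “Monday” machine (the time is arbitrary):

```sh
# Run the unattended-upgrades pass only on Mondays on this machine
# (drop-in override; the 06:00 time is just an example).
mkdir -p /etc/systemd/system/apt-daily-upgrade.timer.d
cat > /etc/systemd/system/apt-daily-upgrade.timer.d/override.conf <<'EOF'
[Timer]
OnCalendar=
OnCalendar=Mon *-*-* 06:00:00
EOF
systemctl daemon-reload
```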
The goal, of course, is to avoid a CrowdStrike-like situation on my Ubuntu machines.
My suggestion is to use a systems management tool like Foreman. It has a “content views” mechanism that can do more or less what you want. There are other tools along the same lines, such as Uyuni. Of course, those tools have a lot of features, so they might be overkill for your case, but many of those features will probably end up being useful anyway if you have that many hosts.
With the way Debian/Ubuntu APT repos are set up, if you take a copy of `/dists/$DISTRO_VERSION` as downloaded from a mirror at any given moment and serve it to a particular server, that's going to end up with `apt update && apt upgrade` installing those identical versions, provided that the actual package files in `/pool` are still available. You can set up caching proxies for that.
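On the client side, each machine then just points its APT sources at its tier's frozen copy of the indexes instead of at the upstream mirror. A rough sketch, with a made-up internal hostname and using bookworm as the example release (how `/pool` requests reach the caching proxy is a separate wiring question):

```sh
# Point a tier1 machine at the frozen tier1 indexes (hostname and layout are
# made up for the example); the -security suite would get a matching line.
cat > /etc/apt/sources.list.d/tiered.list <<'EOF'
deb http://apt-mirror.internal/tier1 bookworm main
deb http://apt-mirror.internal/tier1 bookworm-updates main
EOF
```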
I remember my DIY hodgepodge from a decade ago ultimately just being a daily cronjob that pulls in the current distro (let's say `bookworm`) and its associated `-updates` and `-security` repos from an upstream rsync-capable mirror; then, after checking a killswitch and making sure things aren't currently on fire, it does `rsync -rva tier2 tier3; rsync -rva tier1 tier2; rsync -rva upstream/bookworm tier1`. Machines are configured to pull and update from tier1 (first 20%) / tier2 (second 20%) / tier3 (rest) appropriately on a regular basis. The files in `/pool` were served by apt-cacher-ng, but I don't know if that's still the cool option nowadays (you will need some kind of local caching for those, as old files may disappear without notice).
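For concreteness, a minimal sketch of what that daily promotion job could look like. The base directory, upstream mirror, and killswitch path are all placeholders, and only the indexes are tiered here (package files come through the caching proxy); the `-security` suite lives in a separate upstream repository and would get the same treatment:

```sh
#!/bin/sh
# Daily tier promotion -- a sketch only; paths and the upstream mirror are placeholders.
set -eu

BASE=/srv/apt-tiers                         # tier1/tier2/tier3 served over HTTP
UPSTREAM=rsync://mirror.example.org/debian  # any rsync-capable mirror
DIST=bookworm

# Killswitch: touch this file to freeze all tiers where they are.
if [ -e "$BASE/STOP" ]; then
    echo "killswitch present, skipping today's promotion" >&2
    exit 0
fi
# ... any "is anything currently on fire?" checks would go here ...

mkdir -p "$BASE"/tier1/dists "$BASE"/tier2/dists "$BASE"/tier3/dists

# Promote yesterday's snapshots down the tiers before pulling anything new,
# so tier3 lags tier2, which lags tier1.
rsync -a --delete "$BASE/tier2/dists/" "$BASE/tier3/dists/"
rsync -a --delete "$BASE/tier1/dists/" "$BASE/tier2/dists/"

# Pull fresh indexes for the release and its -updates suite into tier1.
for suite in "$DIST" "$DIST-updates"; do
    rsync -a --delete "$UPSTREAM/dists/$suite/" "$BASE/tier1/dists/$suite/"
done
```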
Thanks, that sounds like the ideal setup. This solves my problem, and I need an APT mirror anyway.

I am probably going to end up with a cronjob similar to yours. Hopefully I can figure out a smart way to share the `pool` to avoid downloading 3 copies from upstream.
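Maybe the simplest option is to keep a single shared `pool` on disk (or let the caching proxy be the only thing that fetches package files) and only snapshot `dists` per tier, something like this (layout made up, and the web server would need to follow symlinks):

```sh
# One shared pool/ with each tier carrying only its own frozen dists/;
# the daily promotion job then stays dists-only.
mkdir -p /srv/apt-tiers/pool
for tier in tier1 tier2 tier3; do
    mkdir -p "/srv/apt-tiers/$tier"
    ln -sfn ../pool "/srv/apt-tiers/$tier/pool"
done
```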