I have been thinking about self-hosting my personal photos on my Linux server. After the recent backdoor was detected I'm more hesitant to do so, especially because I'm no security expert and don't have the time or knowledge to audit my server. All I've done so far is disable password logins and change the SSH port. I'm wondering whether there are more backdoors out there, and whether, if new ones appear, I'd even be able to respond in time. I'd appreciate your thoughts on this as an ordinary user.
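(For context, those two changes amount to roughly these lines in /etc/ssh/sshd_config; the port number here is just an example, not my real one:)

```
# /etc/ssh/sshd_config (excerpt) - what I've changed so far
Port 2222                   # non-default SSH port (example value)
PasswordAuthentication no   # key-based logins only
```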
Check the source or pay someone to do it.
If you're using closed source software, it's best to assume it has backdoors, and there's no way to check.
Security is not a wall. It is a maze.
Ah shit we are back to “Ken Thompson Compiler Hack” again
For those unfamiliar: "Reflections on Trusting Trust" by Ken Thompson
I do IT security for a living. It is quite complicated but not unrealistic for you to DIY.
Do a risk assessment first off: how important is your data, to you and to a hostile someone else? One output of the risk assessment might be fixing up backups first. Think about which data might be attractive to someone else and what you do not want to lose. Your photos are probably irreplaceable, and your password spreadsheet should probably be a KeePass database. This is personal stuff; work out what is important.
After you’ve thought about what is important, then you start to look at technologies.
Decide how you need to access your data when off site. I'll give you a clue: VPN, always, until you feel proficient enough to expose your services directly to the internet. IPsec or OpenVPN or whatevs.
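As a rough sketch of what "VPN only" means in practice, assuming ufw as the firewall, OpenVPN on its default port, and a photo app on 443 (the subnet and ports here are just examples):

```
# Nothing reachable from the internet except the VPN endpoint itself
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 1194/udp                                     # OpenVPN
sudo ufw allow from 10.8.0.0/24 to any port 443 proto tcp   # photo app, VPN clients only
sudo ufw allow from 10.8.0.0/24 to any port 22 proto tcp    # SSH only over the tunnel too
sudo ufw enable
```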
After sorting all that out, why not look into monitoring?
Fun fact: you can use Let's Encrypt certs in an internal environment. All you need is a domain.
Just be aware that it's an information leak (all your internal DNS names will be public).
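The certificate transparency logs are public and searchable, e.g. via crt.sh. A quick way to see what a domain leaks (example.com is a placeholder; assumes curl and jq are installed):

```
# Every hostname that has ever appeared in a cert for the domain
curl -s 'https://crt.sh/?q=%25.example.com&output=json' | jq -r '.[].name_value' | sort -u
```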
Not if you set up an internal DNS.
How would that prevent this? To avoid cert errors, you must give the DNS name to Let's Encrypt, and Let's Encrypt will add it to their public CT log.
Sorry, I thought you were referring to IP leakage. Apologies.
…which shouldn’t be an issue in any way. For extra obscurity (and convenience) you can use wildcard certs, too.
Are wildcard certs supported by LE yet?
Have been for a long time. You just have to use the DNS validation. But you should do that (and it’s easy) if you want to manage “internal” domains anyway.
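For a one-off, the manual DNS challenge with certbot looks roughly like this; you paste the TXT record it asks for into your DNS, and in practice you'd use a DNS plugin so renewals can run unattended:

```
# Wildcard + apex cert via the DNS-01 challenge (interactive)
certbot certonly --manual --preferred-challenges dns \
    -d 'example.com' -d '*.example.com'
```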
Oh, yeah, idk. Giving API access to a system to modify DNS is too risky. Or is there some provider you recommend with a granular API that only gives the keys permission to modify TXT and .well-known records (e.g. so it can't change SPF TXT records or, of course, any A records, etc.)?
What you can (and absolutely should) do is DNS delegation. On your main domain you delegate the `_acme-challenge.` subdomains with NS records to a DNS server that will do cert generation (and cert generation only). You probably want to run Bind there, since it has decent and fast remote access for changing records and there are existing solutions built around it. You can still split it with separate keys into different zones (I would suggest one key per certificate, and splitting certificates by where/how they will be used). You don't even need to allow remote access beyond the DNS responses if you don't want to, and that server doesn't have anything to do with anything else in your infrastructure.
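To make that concrete, the delegation in the parent zone is just a couple of NS records pointing the challenge names at the dedicated box (all names and addresses here are illustrative):

```
; in the example.com zone: delegate only the ACME challenge names
_acme-challenge.photos.example.com.   IN  NS  acme.example.com.
_acme-challenge.vpn.example.com.      IN  NS  acme.example.com.
acme.example.com.                     IN  A   203.0.113.10    ; the Bind box that only serves these TXT records
```

The ACME client then only ever touches TXT records on that box (e.g. via nsupdate with per-zone TSIG keys), and your main zone never has to accept dynamic updates at all.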
Use software with an active community, don’t install things you don’t need, update regularly, and be thankful that you probably aren’t worth using a zero-day backdoor on. Your telecom provider, on the other hand, might be - but there’s not much you can do about that!
How do you know there isn’t a logic bug that spills server secrets through an uninitialized buffer? How do you know there isn’t an enterprise login token signing key that accidentally works for any account in-or-out of that enterprise (hard mode: logging costs more than your org makes all year)? How do you know that your processor doesn’t leak information across security contexts? How do you know that your NAS appliance doesn’t have a master login?
This was a really, really close one that was averted by two things. A total fucking nerd looked way too hard into a trivial performance problem and saw something a bit hinky. And, just as importantly, the systemd devs had no idea that anything was going on, but somebody got an itchy feeling about the size of systemd's dependencies and decided to clean them up. This completely blew up the attacker's timetable. Jia Tan had to ship too fast, with code that wasn't quite bulletproof (5.6.0 is what was detected; 5.6.1 would have gotten away with it).
In the coming weeks, you will know if this attacker recycled any techniques in other attacks. People have furiously ripped this attack apart, and are on the hunt for anything else like it out there. If Jia has other naughty projects out there and didn't make them 100% from scratch, everything is going to get burned.
I think the best assurance is - even spies have to obey certain realities about what they do. Developing this backdoor costs money and manpower (but we don’t care about the money, we can just print more lol). If you’re a spy, you want to know somebody else’s secrets. But what you really want, what makes those secrets really valuable, is if the other guy thinks that their secret is still a secret. You can use this tool too much, and at some point it’s going to “break”. It’s going to get caught in the act, or somebody is going to connect enough dots to realize that their software is acting wrong, or some other spying-operational failure. Unlike any other piece of software, this espionage software wears out. If you keep on using it until it “breaks”, you don’t just lose the ability to steal future secrets. Anybody that you already stole secrets from gets to find out that “their secrets are no longer secret”, too.
Anyways, I think that the “I know, and you don’t know that I know” aspect of espionage is one of those things that makes spooks, even when they have a God Exploit, be very cautious about where they use it. So, this isn’t the sort of thing that you’re likely to see.
What you will see is the “commercial” world of cyberattacks, which is just an endless deluge of cryptolockers until the end of time.
Afaik most phones are backdoored in ways that can be abused using tools like Pegasus, which caused a huge outcry in Hungary. I don't believe PCs are an exception. Intel ME is proprietary software inside the CPU, often considered a backdoor in Intel chips, and AMD is no exception either. It's even weirder that Intel produces chips with the ME disabled for governments only.
They are not produced for governments only: almost every consumer-grade CPU can have its ME disabled, or at least scuttled, thanks to efforts like me_cleaner!
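If you want to try it, the rough workflow is something like the following; the flags are from memory so check the me_cleaner docs, and depending on the board you may need an external SPI programmer rather than flashrom's internal one:

```
# Dump the firmware, neuter the ME, flash it back (paths are examples)
flashrom -p internal -r original_dump.bin
python me_cleaner.py -S -O cleaned.bin original_dump.bin   # -S: set the disable bit and strip most ME modules
flashrom -p internal -w cleaned.bin
```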
Reading the source code for everything running on your machine and then never updating is the only way to be absolutely 100% sure.
Even with that you will miss something
This is a sliver of one patch; there is a bug here that quietly disables a build-time check for a sandboxing feature that would get in the attack's way. Can you find it?
hint
It is one singular character. Everything else is fine.
Solution
It’s the dot on line 9
Maybe
Dot after include
Even if there are nation-state-level backdoors, your personal server is not a valuable enough target to risk exposing them. Just use common sense and unattended-upgrades, and don't worry too much about it.
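On Debian/Ubuntu, "common sense and unattended-upgrades" is about two commands (assuming the stock apt setup):

```
sudo apt install unattended-upgrades
sudo dpkg-reconfigure -plow unattended-upgrades   # answer "Yes" to enable automatic security updates

# which just writes this to /etc/apt/apt.conf.d/20auto-upgrades:
# APT::Periodic::Update-Package-Lists "1";
# APT::Periodic::Unattended-Upgrade "1";
```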
@mfat It's the old problem with bugs. To know that a piece of software has no bugs you would have to be able to count them, and if you could count them you would also be able to locate them and fix them. But you can't, so there's no way to know there isn't another undetected backdoor.
Of course being open source helps a lot, but there's no silver bullet.
If backdoors exist, they’re probably enough to get your data no matter where it’s stored, so self hosting should be fine. Just keep it up to date and set up regular automatic backups.
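One sketch of "regular automatic backups", assuming restic and an off-site machine reachable over SSH (hosts and paths here are made up):

```
# /etc/cron.d/photo-backup - nightly push to an off-site restic repository
30 2 * * * root restic -r sftp:backup@offsite.example.net:/srv/restic --password-file /root/.restic-pass backup /srv/photos
45 3 * * 0 root restic -r sftp:backup@offsite.example.net:/srv/restic --password-file /root/.restic-pass forget --keep-daily 7 --keep-weekly 8 --prune
```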
We don’t know. But if there were well known backdoors to mainstream security practices we might see more companies that depend on security shutting down, or at least shutting down their online activities. Banks, stock trading, crypto exchanges, other enterprises that handle money, where hacking would be lucrative.
There’s a concept of acceptable levels of risk. Companies are not going to shut down out of fear, or miss out on the business opportunities of online presence. There’s money to be made.
Even with things as serious as Spectre allowing full dumping of CPU and RAM contents simply by loading a website, I can't think of a single company that just said "well shit, better just die".
Serious, potentially business-ending security issues usually get a huge amount of effort put into mitigations and fixes when they're discovered. Mitigations are usually enough in the immediate "oh shit" phase. Defense in depth is standard practice.
I don't think you need to worry about backdoors with most of those. When it comes to anything hosted by large companies, worry more about unfixed security holes caused by an extreme emphasis on "stability", i.e. running old versions even when fixes have already been released.
There are several known instances of crypto exchanges getting hacked.
“We don’t” is the short answer. It’s unfortunate, but true.
You don't, and never will. I would recommend reading a lecture by Ken Thompson, the co-creator of Unix, for more details on this: https://www.cs.cmu.edu/~rdriley/487/papers/Thompson_1984_ReflectionsonTrustingTrust.pdf
Good question. I have been asking myself the same thing. In the case of SSH it is possible to use 2FA with a security key, which is something I'd like to put in my todo.txt.
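For anyone else interested: OpenSSH 8.2+ supports FIDO2 security keys natively, so a hardware-backed key is roughly this (a sketch; your key needs FIDO2 support):

```
# Private key only usable with the security key plugged in and tapped
ssh-keygen -t ed25519-sk -C "photos-server"
# then install the public half on the server as usual
ssh-copy-id -i ~/.ssh/id_ed25519_sk.pub user@server
```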