I support free and open source software (FOSS) like VLC, qBittorrent, LibreOffice, GIMP…
But why do people say that it’s as secure or more secure than closed source software?
From what I understand, closed source software doesn’t disclose its code.
If you want to see the source code of Photoshop, you actually need to work for Adobe. Otherwise, you need to be some kind of freaking reverse-engineering expert.
But open source software has its code available to the entire world on websites like GitHub or GitLab.
Isn’t that actually also helping hackers?
Per Eric S. Raymond, “given enough eyeballs, all bugs are shallow”.
Basically it’s not inherently more secure, but often it’s assumed that enough smart people have looked at it.
But yes, all software is going to have vulnerabilities.
You live in some Detroit-like hellscape where everyone everywhere 24/7 wants to kill and eat you and your family. You go shopping for a deadbolt for your front door, and encounter two locksmiths:
Locksmith #1 says “I have invented my own kind of lock. I haven’t told anyone how it works, the lock picking community doesn’t know shit about this lock. It is a carefully guarded secret, only I am allowed to know the secret recipe of how this lock works.”
Locksmith #2 says "Okay so the best lock we’ve got was designed in the 1980s, the design is well known, the blueprints are publicly available, the locksport and various bad guy communities have had these locks for decades, and the few attacks that they made work were fixed by the manufacturer so they don’t work anymore. Nobody has demonstrated a successful attack on the current revision of this lock in the last 16 years."
Which lock are you going to buy?
Or just, you know, move out of Detroit… ¯\_(ツ)_/¯
To keep that metaphor going, if you are online, you are in Detroit.
You’ve reminded me of global chat in every F2P game I’ve played
I hear the real estate in Flint is affordable.
Really? I hear it’s a steel.
Others have mentioned this, but to make sure all context is clear:
- FOSS software is not inherently more secure.
- New FOSS software is probably as secure as any closed source software, because it likely doesn’t have many eyes on it and hasn’t been audited.
- Mature FOSS software will likely have more CVEs reported against it than a closed source alternative, because there are more eyes on it.
- Because of bullet 3, mature FOSS software is typically more secure than closed source, as security holes are found and patched publicly.
- This does not mean a particular closed source tool is insecure; it means the community can’t prove it is secure.
- I like proof, so I choose FOSS.
- Most people agree, which is why most major server software is FOSS (or source available).
- However, that’s also because of the permissive licensing.
Also keep in mind that employees of companies that release closed source software are obligated to keep any gaping security vulnerabilities secret. This obligation usually comes with heavy legal ramifications that could be considered “life ruining” for many of us, e.g. loss of your job plus a lawsuit.
Often, none of the contributors to open source software are associated with each other and therefore have no obligation to keep discovered vulnerabilities a secret. In fact, I would assume that many contributors also actively use the software and have a personal interest in getting security vulnerabilities fixed.
Otherwise, you need to be some kind of freaking reverse-engineering expert.
Nah, often software is stupidly easy to breach. Often it’s an openly accessible database (like recently with the Tea app), or you can pull other people’s data from the webapp just by incrementing or decrementing the ID in your web request (that commonly happened with quite a number of digital contact tracing platforms used during Covid).
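To make the “change the ID in the request” trick concrete, here’s a minimal, made-up sketch of the flaw (often called an insecure direct object reference). The framework, routes, and data below are purely illustrative and not taken from any of the apps mentioned above:

```python
# Hypothetical sketch of an IDOR: record IDs are guessable, and the
# vulnerable route never checks who owns the record being requested.
from flask import Flask, abort, session

app = Flask(__name__)
app.secret_key = "dev-only-placeholder"

FAKE_DB = {
    1: {"owner": "alice", "note": "alice's private data"},
    2: {"owner": "bob", "note": "bob's private data"},
}

# Vulnerable: any user can walk /records/1, /records/2, ... just by
# incrementing the number in the URL.
@app.route("/records/<int:record_id>")
def get_record(record_id):
    record = FAKE_DB.get(record_id)
    if record is None:
        abort(404)
    return record["note"]

# Safer: only return the record if it belongs to the requesting user.
@app.route("/my-records/<int:record_id>")
def get_own_record(record_id):
    record = FAKE_DB.get(record_id)
    if record is None or record["owner"] != session.get("user"):
        abort(404)  # don't even reveal whether the record exists
    return record["note"]

if __name__ == "__main__":
    app.run()
```

Hiding the server code wouldn’t fix this: anyone poking at the URLs from outside can stumble onto it, which is the point being made here.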
Very often the closed source just obscures the screaming security issues.
And yeah, there are not enough people to thoroughly audit all the open source code. But there are more people doing that than you think. Another thing to keep in mind is that reporting a security problem with a piece of software or a service can get you in serious legal trouble depending on your jurisdiction, justified or not. Corporations won’t hesitate to sue you out of existence if they can hide the problems that way. With open source software you typically don’t have any problems like this, since collaboration and transparency are more baked into it.
There isn’t a clear divide between open source software and proprietary software anymore due to how complex modern applications are. Proprietary software is typically built on top of open source libraries: Python’s Django web framework, OpenSSL, xz-utils, etc. Basically there isn’t anything safe, and even if you wrote it yourself you could introduce bugs or supply-chain attacks from dependencies.
It’s because anyone can find and report vulnerabilities, while a closed source vendor could have some issue behind closed doors and not mention that data is at risk even if they knew.
The code being public helps with spotting issues or backdoors.
In practice, “security by obscurity” doesn’t really work. The code’s security should hinge on the quality of the code itself, not on the number of people who know it.
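That idea is basically Kerckhoffs’s principle: a system should stay secure even when everything about it except the key is public. As a rough sketch, assuming the third-party Python cryptography package purely for illustration, the scheme below is completely open and well studied, and the security rests entirely on keeping the key secret:

```python
# Rough illustration of Kerckhoffs's principle: the Fernet scheme (AES + HMAC)
# is publicly specified and heavily scrutinized; only the key must stay secret.
# Requires: pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # the only secret in the whole system
f = Fernet(key)

token = f.encrypt(b"attack at dawn")
print(token)                  # safe to show anyone: useless without the key
print(f.decrypt(token))       # b'attack at dawn'
```

The algorithm being public is what lets the whole world look for flaws in it, which is the same argument being made for open source in general.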
Security by obscurity doesn’t work on its own, but it can be a single pillar in a multi-faceted security strategy. In the case of FOSS vs closed source, the downsides of closed source (not having eyes on it, etc.) outweigh the upsides… but writing off security by obscurity (on top of other security) in all cases is the wrong approach to take.
It also provides some assurance that the service/project/company is doing what they say they are, instead of “trust us”.
Meta has deployed code so criminal that everyone who knew about it should be serving hard jail time (if we didn’t live in corporate dictatorships). If their code were public they couldn’t pull shit like this anywhere near as easily.
The code being public helps with spotting issues or backdoors.
A recent example of this is the lengths the TALOS group had to go to in order to reverse engineer Dell ControlVault, which impacts hundreds of models of Dell laptops. This blog post goes through all of the steps they had to take to reverse engineer things, and they note that, fortunately, there was some Linux support with publicly available shared objects with debug symbols, which helped them reverse engineer the ecosystem. Dell has all this source code and could have identified these issues much more easily themselves, but didn’t, and shipped an insecure product leaving customers vulnerable.
Yuup. “Security by obscurity” relies on the attacker not understanding how the software works. Problem is, hackers usually know how software works, so that barrier is almost non-existent.
Otherwise, you need to be some kind of freaking reverse-engineering expert.
And as it turns out, there is a ton of financial motivation for less than ethical people to develop those skills and use them to hack proprietary software. And there is some, but less, financial motivation for ethical people to do the same.
Zero day exploits, aka vulnerabilities that aren’t publicly known, offer hackers the ability to essentially rob people blind.
Open source code means you have the entire globe of developers collaborating to detect and repair those vulnerabilities. So while it’s not inherently more secure, it is in practice.
Exploiting four zero-day flaws in the systems,[8] Stuxnet functions by targeting machines using the Microsoft Windows operating system and networks, then seeking out Siemens Step7 software. Stuxnet reportedly compromised Iranian PLCs, collecting information on industrial systems and causing the fast-spinning centrifuges to tear themselves apart.[3] Stuxnet’s design and architecture are not domain-specific and it could be tailored as a platform for attacking modern SCADA and PLC systems (e.g., in factory assembly lines or power plants), most of which are in Europe, Japan and the United States.[9] Stuxnet reportedly destroyed almost one-fifth of Iran’s nuclear centrifuges.[10] Targeting industrial control systems, the worm infected over 200,000 computers and caused 1,000 machines to physically degrade.
The whole Stuxnet story is fascinating. A virus designed to spread to the whole Internet, and then activate inside a specific Iranian facility. Convinced me that we already live in a cyberpunk world.
“Open source code means you have the entire globe of developers collaborating to detect and repair those vulnerabilities.”
Heartbleed has entered the chat
Exactly. Open source means that, by design, more people are able to look at the code, and those interested in it have more incentive to make sure it works securely. You can be exploitative and try to keep your hack secret, but there’s also a chance that someone else will spot the same thing you saw and patch the code with a PR. Granted, it depends on how much the original developer cares about the code in the first place, whether they’ll accept or write a fix for the vulnerability someone brings up, but the software you listed are larger projects where lots of people have a vested interest in them working securely. For smaller projects or very niche software with fewer eyes and less interest, open source might not be the most secure.
On the closed source side, the people who are interested in looking for hacks are the ones much more motivated to actually exploit vulnerabilities for personal gain. White hat hackers, on the other hand, are fewer for closed source software, because not having the code openly available means they need more motivation (i.e. the company offering bounties/incentives because they care about security) to actually work out how the closed source software works.
Because more eyes spot more bugs, supposedly. I believe it, running closed source software is truly insane
If you want to see the source code of Photoshop, you actually need to work for Adobe. Otherwise, you need to be some kind of freaking reverse-engineering expert.
What you’re describing is known as “security through obscurity”, the practice of attempting to increase the security of a system by hiding the way the system works. This practice is highly discouraged, as it is known not to actually be effective at increasing the security of a system.
Security by obscurity alone is discouraged and not recommended by standards bodies. The National Institute of Standards and Technology (NIST) in the United States recommends against this practice: “System security should not depend on the secrecy of the implementation or its components.”
https://en.wikipedia.org/wiki/Security_through_obscurity#Criticism
Isn’t that actually also helping hackers?
No. By sharing the implementation details of the system, it helps those trying to keep it secure, allowing anyone to inspect the code, discover flaws, and contribute fixes.
Open-source software is not perfect and is susceptible to security flaws and vulnerabilities, but it is better and more secure than closed-source software in every way. Every risk that applies to open-source software also applies to closed-source software, but worse.
Exploits in a lot of closed source software are from really stupid/simple things they’d get ridiculed for if the code were open.
In other words, I think being open creates “pressure” for code to be presentable and auditable. That, and there’s tons of opportunity and incentive for dysfunction with closed source stuff, like sitting on known exploits.
…That being said, it isn’t universal. Is a lone hero dev maintaining some open library going to be more effective at security coverage than a huge commercial team? Probably not.
Does the software for nuclear bomb security need to be public? Probably not.
Isn’t that actually also helping hackers?
Evil hackers don’t need help and don’t want help.
On the other side, there have been cases where malicious programmers slipped bad code into open source software, and it got found out because that code is public, and it got fixed and reported publicly.
Shame on these hackers.
I don’t think you’ll get much shame out of hackers. Most are incorrigible trolls or politically motivated (and occasionally both); their moral centres are non-existent or aligned very differently.
You might get an apology from the odd individual who was on the cusp of growing up anyway, but everyone else is a dead loss.
It doesn’t literally mean that everyone who uses OSS will inspect the source code for vulnerabilities; most don’t even have the skill to do so.
It’s more secure because access to the source makes both exploiting it and patching it faster, and because nerds who do have the skills and find something unusual will delve into the code to debug it. The XZ Utils backdoor was found by one such nerd doing beta testing; it never even got distributed to general users.
It’s a telling sign that malicious actors nowadays are surreptitiously trying to compromise OSS through supply chain attacks instead of directly finding zero days. For example: StarDict sends X11 clipboard to remote servers
XZ is such a great example of how open source is more resilient, and of how much “core open source” projects need a foundation supporting them.