I’ve spent some time searching this question, but I have yet to find a satisfying answer. The majority of answers that I have seen state something along the lines of the following:
1. “It’s just good security practice.”
2. “You need it if you are running a server.”
3. “You need it if you don’t trust the other devices on the network.”
4. “You need it if you are not behind a NAT.”
5. “You need it if you don’t trust the software running on your computer.”
The only answer that makes any sense to me is #5.

#1 leaves a lot to be desired, as it advocates for doing something without thinking about why you’re doing it – it is essentially a non-answer.

#2 is strange – why does it matter? If one is hosting a webserver on port 80, for example, they are going to poke a hole in their router’s NAT at port 80 to open that server’s port to the public. What difference does it make to then have another firewall that needs to be port forwarded?

#3 is also strange – what sort of malicious behaviour could even be directed at a device with no firewall? If you have no applications listening on any port, then there’s nothing to access.

#4 feels like an extension of #3 – only, in this case, it is most likely a larger group that the device is exposed to.

#5 is the only one that makes some sense to me: if you install a program that you do not trust (you don’t know how it works), you don’t want it to be able to readily communicate with the outside world unless you explicitly grant it permission to do so. Such an unknown program could be a door into your device, or a spy on your device’s actions.
If anything, a firewall only seems to provide extra precautions against mistakes made by the user – rather than actively preventing bad actors from getting in. People seem to treat it as if it’s acting like the front door to a house, but this analogy doesn’t make much sense to me – without a house (a service listening on a port), what good is a door?
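To make the “door without a house” point concrete, here is a quick Python sketch (the address is a placeholder from the documentation range – substitute a machine you control):

```python
import socket

HOST = "192.0.2.10"  # placeholder (TEST-NET-1); substitute a machine you control

def probe(host: str, port: int) -> str:
    """Attempt a TCP connection and report what the remote end does."""
    try:
        with socket.create_connection((host, port), timeout=3):
            return "open (something is listening)"
    except ConnectionRefusedError:
        # No listener: the OS itself rejects the connection, firewall or not.
        return "closed (refused by the OS; no firewall needed)"
    except OSError:
        # No reply at all, e.g. a firewall silently dropping the packets.
        return "filtered or unreachable (silence)"

for port in (80, 8080):
    print(port, probe(HOST, port))
```

If nothing is listening, the connection is refused by the operating system itself; about the only thing a host firewall changes here is turning that immediate “refused” into silence.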
Not necessarily. An application layer firewall, for example, could certainly get in the way of a program trying to send data externally.
Are you referring to a service leaving a port open that can be connected to from the network?
I’m definitely curious about the outcome of this – Matrix especially. Perhaps the new/alternative servers function a bit better now, but I’ve heard that, for Synapse at least, Matrix can be very demanding on hardware to run (apparently the issues mostly arise when one joins a larger server).
Interesting. Do you mean “held responsible” to simply stop the disruption, or “held responsible” for the actions of, or damage caused by, the disruption?
I think an application layer firewall usually struggles to do more than the utmost basics. If, for example, my Firefox were compromised and started not only talking to Firefox Sync to send my history to my phone, but also sending my behavior and all the passwords I type to a third party… how would the firewall know? It’s just random outgoing encrypted traffic from its perspective. And I open lots of outbound connections to all kinds of random servers with my Firefox. The same applies to other software.

I think such firewalls only protect you once you run a new executable and you know it has no business sending data. If software you actually use were susceptible to attack, the firewall would need to ask you after each and every update of Firefox whether it’s still okay, and you’d really need to verify the state of your software. If you just click ‘Allow’, there is no added benefit. What it can do is protect you from connecting to a list of known malicious addresses, and from people smuggling new, dedicated malware onto your computer.
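To illustrate “from its perspective”: this is roughly all the metadata an application layer firewall has to work with – a sketch using the third-party psutil package (it may need elevated privileges to resolve other users’ processes):

```python
import psutil  # third-party: pip install psutil

# From the firewall's side of the fence, an outbound connection is just this
# metadata: which process, which destination, which port. The payload --
# history, passwords, whatever -- is encrypted and invisible at this layer.
for conn in psutil.net_connections(kind="tcp"):
    if conn.status != psutil.CONN_ESTABLISHED or not conn.raddr:
        continue
    try:
        name = psutil.Process(conn.pid).name() if conn.pid else "?"
    except psutil.Error:
        name = "?"
    print(f"{name} -> {conn.raddr.ip}:{conn.raddr.port}")
```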
I don’t want to say doing the basics is wrong or anything. If I were using Windows and lots of different software, I’d probably think about using an application layer firewall. But I don’t see a real benefit for my situation… However, I’d like Linux to do some more sandboxing and asking for permissions on the desktop. Even if it can’t protect you from everything, and may not be a big leap for people who just click ‘Accept’ for everything, it might be a good direction and encourage finer granularity in permissions and in the ways software ties together and interacts.
I mean your webserver or CMS or your browser has a vulnerability, that gets exploited, and you get hacked. The webserver has open ports anyway in order to work at all. The CMS is allowed to process requests and the browser is allowed to talk to websites. A maliciously crafted request or response can trigger your software to fail and do something that it shouldn’t do.
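A toy example of the kind of bug I mean – a deliberately vulnerable, hypothetical handler, not taken from any real CMS. The firewall waves the request through because the port is open on purpose; the flaw is in how the application handles the input:

```python
import subprocess
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

class PingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        host = parse_qs(urlparse(self.path).query).get("host", [""])[0]
        # BUG: user input goes straight into a shell. A request like
        #   /?host=example.com;cat%20/etc/passwd
        # runs the attacker's command with the server's privileges --
        # over the very port the firewall allows by design.
        out = subprocess.run(f"ping -c 1 {host}", shell=True,
                             capture_output=True).stdout
        self.send_response(200)
        self.end_headers()
        self.wfile.write(out)

if __name__ == "__main__":
    # Vulnerable on purpose -- only ever run this on loopback, as a demo.
    HTTPServer(("127.0.0.1", 8080), PingHandler).serve_forever()
```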
Sure, I have a Synapse Matrix server running on my YunoHost. It works fine for me. I’m going to install Dendrite or the other newer one next. I wouldn’t complain if I could cut memory consumption and load down to a minimum.
Yeah, the issue was that it meant both. You were part of the crime; you were involved in the causality and linked to the damages somehow. Obviously not to the full extent, since you didn’t do it yourself, but more than ‘don’t allow it to happen again’. That obviously has consequences. And I think that’s no longer the case when it comes to wifi. I think now it’s just the first, plus they can ask for a fixed amount of money, since your negligence caused their lawyer to put in some effort.
If it’s going to some undesirable domain or IP, then you can block the request for that application. The exact capabilities of an application layer firewall certainly depend on the one in question, but this is, at least, possible with OpenSnitch.
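Just to sketch the shape of that decision – this is not OpenSnitch’s actual rule format, only a hypothetical illustration of first-match-wins, per-application filtering on the metadata such a firewall can see:

```python
# Hypothetical rules keyed on what an application layer firewall can see:
# the process making the connection and the destination it asked for.
RULES = [
    {"process": "/usr/bin/firefox", "dest_host": "tracker.example.com", "action": "deny"},
    {"process": "/usr/bin/firefox", "dest_host": "*",                   "action": "allow"},
    {"process": "*",                "dest_host": "*",                   "action": "ask"},
]

def decide(process: str, dest_host: str) -> str:
    """First matching rule wins, as in most firewall rule sets."""
    for rule in RULES:
        if rule["process"] in ("*", process) and rule["dest_host"] in ("*", dest_host):
            return rule["action"]
    return "ask"

print(decide("/usr/bin/firefox", "tracker.example.com"))  # deny
print(decide("/usr/bin/firefox", "mozilla.org"))          # allow
```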
As for the actual content of the traffic, is this not the case with essentially all firewalls? They can’t see the content of the traffic if it is using TLS. You would need to somehow intercept the data before it is encrypted on the device, and I’m not aware of any firewall that has such a capability.
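For instance, with nothing but the standard library (example.com as a stand-in destination): the request below is encrypted before it ever reaches the wire, so anything on the path sees endpoints and timing, not content:

```python
import socket
import ssl

HOST = "example.com"  # stand-in destination

ctx = ssl.create_default_context()
with socket.create_connection((HOST, 443)) as raw:
    with ctx.wrap_socket(raw, server_hostname=HOST) as tls:
        # An on-path firewall sees our IP, the server's IP, port 443, and
        # (in the unencrypted ClientHello) the server name. The request
        # below leaves the machine as ciphertext.
        tls.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\n"
                    b"Connection: close\r\n\r\n")
        print(tls.recv(200))  # decrypted for us; ciphertext on the wire
```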
The exact level of fine-grained control heavily depends on the application layer firewall in question.
Interesting.
I do somewhat understand this argument, but it still feels quite ridiculous to me.