
reject-www-data rule unintentionally removed from ip6tables when file contains only IPv4 addresses #76

Open
patch0 opened this issue Jul 31, 2017 · 9 comments
patch0 (Contributor) commented Jul 31, 2017

When an IPv4 address is added to the reject-www-data rule file, the rule is removed from ip6tables.

  1. Run ip6tables -L -v -n and note that the reject-www-data rule is present
  2. Add 10.0.0.1 to /etc/symbiosis/firewall/outgoing.d/50-reject-www-data
  3. Run ip6tables -L -v -n again and note that the reject-www-data rule is no longer present (see the sketch below)
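
A rough command-line sketch of the reproduction (the symbiosis-firewall reload step is an assumption; the rules may instead be regenerated by the firewall's cron job):

    # Check both firewalls before the change
    iptables  -L -v -n | grep -i reject-www-data
    ip6tables -L -v -n | grep -i reject-www-data

    # Add a single IPv4 address to the rule file
    echo "10.0.0.1" >> /etc/symbiosis/firewall/outgoing.d/50-reject-www-data

    # Regenerate the firewall (assumption: symbiosis-firewall is the reload
    # command; otherwise wait for the cron job), then check again -
    # the IPv6 entry is gone
    symbiosis-firewall
    ip6tables -L -v -n | grep -i reject-www-data
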
patch0 changed the title from "www-data rule unintentionally removed from ip6tables when file contains only IPv4 addresses" to "reject-www-data rule unintentionally removed from ip6tables when file contains only IPv4 addresses" on Jul 31, 2017
patch0 added this to the stretch release milestone on Jul 31, 2017
patch0 self-assigned this on Jul 31, 2017
patch0 (Contributor, Author) commented Jul 31, 2017

Ugh this is a horrible rule, which breaks convention.

For other rules, the IPs in the file match the intention of the rule, whereas this does the reverse.

For a normal reject rule, the IPs listed in the file are explicitly rejected, but for this rule the IPs are added to an ACCEPT rule, which means that if no IPs are listed at all, everything from www-data is rejected. 🤢
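
Roughly speaking - this is an illustrative sketch, not the exact rules Symbiosis generates - the two behaviours look like this:

    # A "normal" reject rule: each IP listed in the file is rejected
    iptables -A INPUT -s 192.0.2.1 -j REJECT

    # reject-www-data: each IP listed in the file is ACCEPTed for www-data,
    # and everything else www-data sends out is then rejected
    iptables -A OUTPUT -m owner --uid-owner www-data -d 10.0.0.1 -j ACCEPT
    iptables -A OUTPUT -m owner --uid-owner www-data -j REJECT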

telyn (Contributor) commented Jul 31, 2017

If memory serves, the support team aren't too keen on the rule anyway, because it's been implicated in there being a lot of old, out-of-date WordPress installs that get hacked - it's a massive barrier to updating/auto-updating a bunch of web software.

patch0 (Contributor, Author) commented Jul 31, 2017

True. We don't know how useful this rule really is, and lots of people do remove it.

skx (Contributor) commented Jul 31, 2017

It would be interesting to find out for sure, or have a poll.

I suspect that it might cause updates to fail - but oftentimes they'd fail anyway due to www-data vs admin UIDs - and when the rule is enabled it will prevent the download of rootkits.

It does make me wonder whether my DNS idea would be better, though.

pcammish commented Aug 1, 2017

"when the rule is enabled it will prevent the download of rootkits."

However, if someone has managed to get malicious code on a server, in ~99% of cases it's via a common exploitable (and likely automated) upload script or similar vector, which then allows the execution of that code via www-data.

In those cases, the box is already compromised, and the hostile actor is most likely to continue using that same known-good vector to upload rootkits or whatever else they need to elevate privileges and root the box.

Generally though, the default 50-reject-www-data rule is counter-intuitive and silently blocks traffic without letting anyone know what broke or where, so it causes problems with things like CMSs which expect to be able to connect out for anti-spam, updates, etc., and generally makes the server operate in a non-typical way.

There are probably a few cases where this would be useful (although maybe implemented differently), but as far as the support team are concerned it's generally more of a liability than not.

@iainhallam

I've also had issues when linking a CMS to external resources and the CMS tries to cache the resource. I even opened a bug in the plugin and worked with the author to find out what was happening before I got to the firewall. I can see the utility, but is there a blacklist anywhere of "bad" hosts for rootkits and so on that could be used to deny traffic?

skx (Contributor) commented Aug 1, 2017

@iainhallam - Bad hosts are so common, and so numerous, that it would be futile to even try to maintain such a list. The only sane approach is the reverse - whitelisting known-good hosts (which we do try to do).

@alphacabbage1

FWIW, I've been using the rule since year zero; I was initially caught out thinking Drupal updates were bug-ridden, but re-reading the docs eventually pointed the way. I expect it's a common stumbling block, especially at first install, so making it opt-in (replace the default with a one-liner, "*", sticking with the unintuitive structure) and/or giving it higher prominence in the docs would make sense.

With scripts to set tight permissions on CMS installs (config data, tmp & cache folders above htdocs where possible), I'm hoping the firewall rule provides a basic defence or mitigation. Assuming it does, the only problem is the silent fail; emails to root (realtime or hourly-summary) would be hugely helpful.
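
As an illustration only (assuming the blanket reject for www-data ends up in the OUTPUT chain, as sketched earlier in the thread), a LOG rule inserted ahead of it would at least make the drops visible in the kernel log rather than failing silently:

    # Hypothetical: log www-data's outbound packets so drops can be seen in
    # syslog/kern.log. Inserted at position 1 this logs *all* www-data traffic;
    # placing it immediately before the REJECT rule would log only the drops.
    iptables  -I OUTPUT -m owner --uid-owner www-data -j LOG --log-prefix "reject-www-data: " --log-uid
    ip6tables -I OUTPUT -m owner --uid-owner www-data -j LOG --log-prefix "reject-www-data: " --log-uid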

@ianeiloart

I'd vote for disabled by default. It's preventing WordPress and the like from staying up to date, and that's a worse security risk.

Alternatively, Symbiosis could look for certain popular CMS installations, and open access to the relevant updaters. But that may be harder than it sounds.
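
For illustration only - and assuming the 50-reject-www-data file takes one address per line and that hostnames are resolved when the rules are generated, both of which are assumptions - a whitelist for common updaters might look like:

    # /etc/symbiosis/firewall/outgoing.d/50-reject-www-data
    # Hypothetical whitelist of update endpoints; everything else
    # sent by www-data would still be rejected.
    api.wordpress.org
    downloads.wordpress.org
    updates.drupal.org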
