
Adding snallygaster to scans #8

Open
security-companion opened this issue Feb 20, 2022 · 11 comments

Comments

@security-companion

Hi,
I don't know if you have heard of snallygaster. It's a tool that detects hidden but sensitive files; see https://github.com/hannob/snallygaster

Might it be an option to add it to SIWECOS?
If you agree I could help with a pull request.
Greetings

@SniperSister
Member

@security-companion first of all, thanks for the suggestion, sounds like an interesting addition!

I quickly looked at the code and have a number of concerns, mainly from our previous experience with SIWECOS:

  1. it looks like snallygaster doesn't properly check whether a site has proper 404 behavior. We have stumbled upon many real-world examples where sites return "not found" pages with a 200 status code, causing false positives in such detections (a sketch of that kind of pre-check follows below this list)
  2. it looks like snallygaster lacks a request throttling mechanism. Many hosts run WAF systems that block "unusual" request patterns, especially if they exceed a specific number of requests per second
  3. the total number of requests made is pretty big. If we start throttling requests (see 2), a scan might take considerable time. That's why I'm wondering whether we can limit the tests to the most important ones?
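
To illustrate what I mean by such a pre-check in point 1, here is a minimal sketch of the general idea (not a claim about snallygaster's actual code):

```python
import secrets
import urllib.error
import urllib.request


def has_proper_404(base_url: str) -> bool:
    """Request a random path that should not exist and check the status code.

    Sites that answer such a path with 200 ("soft 404") would turn every
    file-existence probe into a false positive.
    """
    probe = f"{base_url.rstrip('/')}/{secrets.token_hex(16)}"
    try:
        urllib.request.urlopen(probe, timeout=10)
    except urllib.error.HTTPError as err:
        return err.code == 404   # real 404: existence checks are meaningful
    except urllib.error.URLError:
        return False             # host unreachable: cannot verify
    return False                 # 2xx/3xx for a random path: soft-404 behavior
```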

What do you think?

@security-companion
Author

security-companion commented Feb 21, 2022

Hi SniperSister,
thanks for looking into my suggestion.

  1. Have you seen the method check404() in snallygaster? Could you give an example of what you mean? Perhaps it would be helpful to open an issue on the snallygaster repo to discuss with the author of the tool how this could be solved.
  2. Yes, I agree on that. WAFs would definitely be a problem. A solution would be to pause between two requests so that the WAF doesn't block the connection. One option would be to extend snallygaster with a wait-time variable. Another option might be to write a wrapper in SIWECOS that calls snallygaster several times, each time with a different test name (see option -t), and waits between those calls; a rough sketch follows below this list. The drawback is that the wrapper would have to hold an array of the possible test names, so you would need to watch for new snallygaster releases and extend the array whenever new tests are added.
  3. Yes, there are quite a few requests that would be made, and the scan time grows if you have to wait between requests to avoid triggering the WAF. Perhaps the SIWECOS architecture could be adapted so that results are shown in two steps: first the quick scans, then the ones that take longer to finish.
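
As a rough sketch of the wrapper idea from point 2 (assuming option -t takes a single test name; "adminer" is a real test, the other names are placeholders):

```python
import subprocess
import time

# The list would have to mirror snallygaster's own test names and be kept in
# sync with new releases; entries other than "adminer" are placeholders.
TEST_NAMES = ["adminer", "placeholder_test_a", "placeholder_test_b"]

WAIT_SECONDS = 0.5  # pause between invocations to stay below typical WAF thresholds


def run_throttled(domain: str) -> None:
    for name in TEST_NAMES:
        # -t restricts snallygaster to a single named test
        subprocess.run(["snallygaster", "-t", name, domain], check=False)
        time.sleep(WAIT_SECONDS)


run_throttled("mydomain.de")
```

Note that this only pauses between whole tests; the requests within one test would still be sent back-to-back, which is why a wait-time option inside snallygaster itself would be the cleaner fix.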

@SniperSister
Member

Have you seen the method check404() in snallygaster? Could you give an example of what you mean? Perhaps it would be helpful to open an issue on the snallygaster repo to discuss with the author of the tool how this could be solved.

Ah sorry, I overlooked that one :) Forget my remark.

A solution would be to pause between two requests so that the WAF doesn't block the connection. One option would be to extend snallygaster with a wait-time variable.

That's my preferred solution, especially as I see value in it for the upstream tool too.

Perhaps the SIWECOS architecture could be adapted so that results are shown in two steps: first the quick scans, then the ones that take longer to finish.

The current architecture waits for all scan results before marking a scan as "complete", and adjusting this is non-trivial, as a scanner result is passed through various services until it reaches the end user.
Do you have a feel for how many requests are made in a typical snallygaster scan? That would give us an idea of how big the issue is and whether there even is one at all.

@security-companion
Author

security-companion commented Feb 22, 2022

Related to the wait time, I opened an issue on the snallygaster repo (see issue #69).

I tested one of my sites that is on hosted webspace, and the scan took around 37 seconds. One scan consists of around 108 tests. All tests together make around 720 requests to the web server. In default mode snallygaster does every request twice (once for http and once for https), so if only the https tests are run, only 360 requests would be made.

Edit: By default both the www and the non-www version are checked; limiting the scan to one of them would further decrease the request count (but increase the risk of missing an issue by only scanning one of the two).

Example:
the test for adminer results in 4 requests:
http://mydomain.de/adminer.php
https://mydomain.de/adminer.php
http://www.mydomain.de/adminer.php
https://www.mydomain.de/adminer.php
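
To make the request arithmetic explicit, a tiny calculation with the numbers from above:

```python
tests = 108                            # tests in one snallygaster run
requests_total = 720                   # 2 schemes (http/https) x 2 hostnames (www/non-www)
per_test = requests_total / tests      # ~6.7 requests per test on average
https_only = requests_total // 2       # 360 requests (https only)
single_hostname = https_only // 2      # 180 requests (https, one hostname)
```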

@SniperSister
Member

Edit: By default both the www and the non-www version are checked; limiting the scan to one of them would further decrease the request count

If you enter domain.tld as your scan target in SIWECOS, we'll try to scan https://domain.tld and fall back to http://domain.tld if https is not available. That would mean that indeed only one of the four URL variants from your example would be scanned, reducing the number of requests to approx. 180, right?

@security-companion
Author

security-companion commented Feb 23, 2022

If you only test https://domain.tld and not https://www.domain.tld then you only have 180 requests.
(If you test https://domain.tld and https://www.domain.tld then you have 360 requests.)

@SniperSister
Member

Ok, 180 requests with 100 ms throttle time and 200 ms response time (180 × 0.3 s ≈ 54 s) boils down to about a minute of scanning time. Not great, but also not terrible. So that should work.

@security-companion
Author

I agree on that.
Do all the sub-scans (e.g. the scan for open ports) start at the same time and run in parallel? I suppose a scan only finishes once the last sub-scan finishes? How long does the longest sub-scan currently take?

What would be the next steps to integrate snallygaster (regarding implementation)? I saw that you have one repo for each sub-scan (e.g. one for port scans).

@SniperSister
Member

Do all the sub-scans (e.g. the scan for open ports) start at the same time and run in parallel?

Yes, they run in parallel.

I suppose a scan only finishes once the last sub-scan finishes? How long does the longest sub-scan currently take?

The slowest one is the CMS version scanner, which takes about 90 seconds to complete.

What would be the next steps to integrate snallygaster (regarding implementation)?

We need the throttling option in snallygaster. Once that is added, we need a piece of software implementing the SIWECOS Scanner API that connects SIWECOS with snallygaster; a rough sketch of that glue piece follows below.
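
This is only an assumption-laden illustration: the payload field names ("name", "hasError", "score", "tests"), the scoring, and the callback mechanism are placeholders; the real contract is defined by the SIWECOS Scanner API documentation.

```python
import subprocess

import requests  # assumed HTTP client for posting results back


def scan_and_report(target: str, callback_url: str) -> None:
    """Run snallygaster for a target host and push the findings to SIWECOS.

    The payload structure below is a placeholder; the actual field names
    come from the SIWECOS Scanner API and would have to be looked up there.
    """
    proc = subprocess.run(
        ["snallygaster", target],
        capture_output=True, text=True, check=False,
    )
    findings = [line for line in proc.stdout.splitlines() if line.strip()]
    payload = {
        "name": "SNALLYGASTER",
        "hasError": proc.returncode != 0,
        "score": 100 if not findings else 0,  # placeholder scoring
        "tests": findings,
    }
    requests.post(callback_url, json=payload, timeout=30)
```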

@security-companion
Author

Okay, thank you very much for the explanation.

@security-companion
Author

Hi,
I made a pull request on the snallygaster repo (see hannob/snallygaster#71). Is this roughly what you had in mind, or were you expecting something different? A sketch of the general idea follows below.
Greetings
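
For context, the general idea behind such a wait-time option is simply a pause after every single request; a minimal sketch of that idea (not the actual code of the pull request):

```python
import time
import urllib.request

WAIT_SECONDS = 0.1  # configurable delay between individual requests


def fetch_with_delay(urls):
    """Fetch a list of URLs, pausing after each request to avoid WAF blocks."""
    statuses = {}
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                statuses[url] = resp.status
        except OSError:
            statuses[url] = None
        time.sleep(WAIT_SECONDS)
    return statuses
```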
