
Request Retrial based on specific matchers (status code, regex, etc) #733

Open
AraCoders opened this issue Sep 25, 2023 · 2 comments
Labels
enhancement New feature or request

Comments

@AraCoders

AraCoders commented Sep 25, 2023

Hi,

Sometimes when fuzzing with ffuf across millions of requests, instability in the network (or in the server being fuzzed) causes some requests to time out and return 500 (501, 502, or similar) status codes. I was working around this by saving the output and re-running ffuf against the failed entries, but that was time-consuming. So what I am thinking of now is something like
--retry-on-status-code "Specify a list of HTTP statuses for which the request will be retried".

The user can then run their normal ffuf command with --retry-on-http-error=500,501,502,503,504 and failed requests will be retried automatically. I also think it would be better if the value of --retry-on-http-error accepted a "range" of HTTP status codes, not just individual ones (that would make the command shorter and more readable).
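A minimal sketch of how the proposed range syntax could be parsed (the flag name and the "500-504,429" syntax are only the ones suggested in this issue, not an existing ffuf option):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// parseStatusSpec expands a spec like "500-504,429" into a lookup set.
// Hypothetical helper for the proposed --retry-on-status-code flag.
func parseStatusSpec(spec string) (map[int]bool, error) {
	set := make(map[int]bool)
	for _, part := range strings.Split(spec, ",") {
		part = strings.TrimSpace(part)
		if lo, hi, found := strings.Cut(part, "-"); found {
			start, err1 := strconv.Atoi(lo)
			end, err2 := strconv.Atoi(hi)
			if err1 != nil || err2 != nil || start > end {
				return nil, fmt.Errorf("bad range %q", part)
			}
			for code := start; code <= end; code++ {
				set[code] = true
			}
		} else {
			code, err := strconv.Atoi(part)
			if err != nil {
				return nil, fmt.Errorf("bad status %q", part)
			}
			set[code] = true
		}
	}
	return set, nil
}

func main() {
	set, _ := parseStatusSpec("500-504,429")
	fmt.Println(len(set), set[502], set[200]) // 6 true false
}
```

With this, --retry-on-status-code=500-504 and --retry-on-status-code=500,501,502,503,504 would be equivalent.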

This could even be expanded to: keep retrying a request until the response matches a specific "matcher".
The matcher could be a "status code" (200 OK) or a "regex" (something in the content of the page). So instead of only --retry-on-status-code, you could use something like --retry-on-regex='Request Timeout', which would handle more cases.

Use cases for this:

Of course, the number of retries should be controlled by the user.

Thanks.

@bsysop
Collaborator

bsysop commented Sep 26, 2023

Hi @AraCoders,
Indeed, it is a great idea; just brainstorming:

If you get many HTTP 500s, it likely means the server is starting to have stability problems due to the fuzzing and will eventually die. Repeating requests under the same conditions would just kill the server faster, so lowering the request rate would make more sense:

  • Auto-pause the fuzzing for 60~120 seconds so the server can recover
  • Automatically lower the fuzzing request rate by 50%
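The auto-throttle idea could be sketched like this; the thresholds, window-based error ratio, and function name are illustrative assumptions, not ffuf behaviour:

```go
package main

import "fmt"

// adjustRate halves the request rate when a window of responses contains
// too many server errors, and ramps it back up once the server recovers.
// All thresholds here are made up for illustration.
func adjustRate(rate int, errorRatio float64) int {
	const minRate = 1
	switch {
	case errorRatio > 0.2: // >20% server errors: back off
		if rate/2 < minRate {
			return minRate
		}
		return rate / 2
	case errorRatio < 0.01: // healthy again: ramp back up
		return rate * 2
	default:
		return rate
	}
}

func main() {
	fmt.Println(adjustRate(100, 0.5)) // 50
	fmt.Println(adjustRate(50, 0.0))  // 100
	fmt.Println(adjustRate(1, 0.9))   // 1 (never drops below the floor)
}
```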

What do you think?

@bsysop bsysop added the enhancement New feature or request label Sep 26, 2023
@AraCoders
Author

Hi @bsysop,

I was using the first solution via the interactive mode (pausing for some time, then continuing), but it required manual work, and lowering the fuzzing request rate would make the fuzzing process take a ton of time.

In this specific case I was brute-forcing a CDN, and it had a ton of backend hosts. Out of 30 million requests, ~100k may time out and return 500. I mean, if a failed request came from backend server "X", maybe during the retry it reaches server "Y" or "Z".

I think adding a pause/delay (e.g., 60~120 seconds) and then retrying the failed requests encountered before the pause (i.e., adding them to the front of the queue) would solve this problem.
The user should have control over:

  • the number of retries
  • how many seconds to pause
  • how a request is considered failed (for example, based on a 5xx status code)
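Re-queueing failed requests at the front with a per-request retry budget, as described above, could look like this (the job type, requeue helper, and maxRetries constant are hypothetical, not ffuf internals):

```go
package main

import "fmt"

// job is a pending fuzzing request with a count of retries already spent.
type job struct {
	word    string
	retries int
}

// maxRetries stands in for the user-controlled retry limit.
const maxRetries = 3

// requeue puts a failed job back at the front of the queue if it still
// has retry budget left; otherwise it is dropped (or logged as failed).
func requeue(queue []job, failed job) []job {
	if failed.retries >= maxRetries {
		return queue
	}
	failed.retries++
	return append([]job{failed}, queue...)
}

func main() {
	q := []job{{word: "admin"}, {word: "login"}}
	q = requeue(q, job{word: "backup", retries: 1})
	fmt.Println(q[0].word, q[0].retries, len(q)) // backup 2 3
}
```

The pause before retrying would then simply be a sleep inserted before the re-queued jobs are consumed again.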
