Sometimes when fuzzing with ffuf across millions of requests, instability in the network (or in the server being fuzzed) causes some requests to time out and come back with 500 (or 501, 502, and similar) status codes. I was handling this by saving the output and re-running ffuf against the failed entries, but that was time consuming. So what I have in mind now is something like:
--retry-on-status-code "Specify a list of HTTP statuses for which the request will be retried".
The user can then run their normal ffuf command with --retry-on-status-code=500,501,502,503,504 and the matching requests will be retried automatically. I also think it would be better if the value of --retry-on-status-code accepted a "range" of HTTP status codes, not only individual ones (that would keep the command shorter and more readable).
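To make the proposed flag value concrete, here is a minimal sketch of how a spec mixing ranges and individual codes could be parsed. The function name and the `"500-504,429"` syntax are assumptions for illustration, not existing ffuf behavior:

```python
def parse_status_spec(spec):
    """Parse a spec like "500-504,429" into a set of status codes.

    Hypothetical helper: the flag name and range syntax are assumed,
    not part of ffuf's actual command-line interface.
    """
    codes = set()
    for part in spec.split(","):
        part = part.strip()
        if "-" in part:
            lo, hi = part.split("-", 1)
            codes.update(range(int(lo), int(hi) + 1))
        else:
            codes.add(int(part))
    return codes
```

With this, `--retry-on-status-code=500-504` would expand to the same set as listing all five codes individually.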
This could even be expanded to: keep retrying a request until the response matches a specific "matcher".
The matcher could be a "status code" (200 OK) or a "regex" (something in the content of the page). So instead of --retry-on-status-code only, you could use something like --retry-on-regex='Request Timeout', which would handle more cases.
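The combined status-code/regex retry logic described above could look something like the sketch below. The `send` callable stands in for the actual HTTP request, and all names here are illustrative assumptions rather than ffuf internals:

```python
import re

def fetch_with_retry(send, retry_codes, retry_regex=None, max_retries=5):
    """Retry a request while the response matches the retry criteria.

    `send` is a callable returning (status_code, body). A response is
    retried if its status is in `retry_codes` or its body matches
    `retry_regex`, up to `max_retries` extra attempts.
    """
    pattern = re.compile(retry_regex) if retry_regex else None
    resp = send()
    for _ in range(max_retries):
        status, body = resp
        if status not in retry_codes and not (pattern and pattern.search(body)):
            break  # response looks good, stop retrying
        resp = send()
    return resp
```

The cap on attempts matters: without `max_retries`, a permanently broken endpoint would loop forever.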
Use cases for this: brute forcing API endpoints. I had an API endpoint that I was 100% sure existed, but due to some weird load-balancer logic it would return 200 OK without ever returning the response body (which was normal JSON). Running Burp Intruder against it returned the response on about the 10th attempt.
Of course, the number of retries should be controlled by the user.
Thanks.
Hi @AraCoders,
Indeed, it is a great idea. Just brainstorming:
If you get many HTTP 500s, it likely means the server is becoming unstable under the fuzzing load and will eventually die; retrying under the same conditions will just finish it off, so lowering the request rate would make more sense.
Alternatively, auto-pause the fuzzing for 60~120 seconds so the server can recover.
I was using the first solution via the interactive mode (pausing for some time, then continuing), but it required manual work, and lowering the fuzzing request rate would make the process take far too long.
In this specific case I was brute forcing a CDN, and it had a ton of backend hosts. Out of ~30 million requests, ~100k might time out and return 500. I mean, if a failed request hit backend server "X", a retry might reach server "Y" or "Z" instead.
I think adding a pause/delay (e.g., 60~120 seconds) and then retrying the failed requests encountered before the pause (i.e., adding them to the front of the queue) would solve this problem.
The user should have control over:
the number of retries.
how many seconds to pause.
what counts as a failed request (for example, based on a 5xx status code).
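The pause-then-requeue behavior with those three knobs could be sketched as follows. `send(job)` stands in for issuing a request and `is_failure(resp)` for the user's failure criterion; everything here is an illustrative assumption, not ffuf's actual job-queue code:

```python
from collections import deque
import time

def run_with_requeue(jobs, send, is_failure, max_retries=3, pause_seconds=0):
    """Process jobs, pausing on failures and re-queueing them at the front.

    A failed job is retried up to `max_retries` times; before each retry
    the loop sleeps `pause_seconds` so the server can recover.
    """
    queue = deque((job, 0) for job in jobs)
    results = []
    while queue:
        job, attempts = queue.popleft()
        resp = send(job)
        if is_failure(resp) and attempts < max_retries:
            time.sleep(pause_seconds)              # let the server recover
            queue.appendleft((job, attempts + 1))  # retry before new jobs
        else:
            results.append((job, resp))
    return results
```

Re-queueing at the front (rather than the back) matches the suggestion above: failed requests are retried as soon as the pause ends, before any new requests are sent.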