
Is there a way to see the code coverage? #465

Open
0xfocu5 opened this issue Feb 19, 2024 · 9 comments

Comments

@0xfocu5

0xfocu5 commented Feb 19, 2024

Is there a way to see the code coverage?

@smoelius
Member

No, cargo-afl does not provide a way to generate code coverage reports.

@jberryman

This is linked from the AFL++ "In Depth" guide, but it looks like a real pain to get running: https://github.com/vanhauser-thc/afl-cov
I'll report back here if I try it.

@0xfocu5
Author

0xfocu5 commented Feb 20, 2024

> This is linked from the AFL++ "In Depth" guide, but it looks like a real pain to get running: https://github.com/vanhauser-thc/afl-cov I'll report back here if I try

Looking forward to your good news.

@njelich

njelich commented Apr 10, 2024

Going with the trivial approach of combining the llvm-cov flags with cargo afl build produces a few profraw files during compilation, but then none during fuzzing. Any ideas?

@smoelius
Member

@njelich It looks like you answered your own question in taiki-e/cargo-llvm-cov#352 (comment)?

> It seems sufficient to just add an "external tests" guide, but for AFL - this seems to work with the tutorial fuzzer in the AFL docs:
>
> source <(cargo llvm-cov show-env --export-prefix)
> cargo llvm-cov clean --workspace
> cargo afl build
> AFL_FUZZER_LOOPCOUNT=20 cargo afl fuzz -c - -V 10 -i in -o out target/debug/url-fuzz-target
> cargo llvm-cov report --html
>
> And it allows full customization without overhauling the entirety of the llvm-cov args system.

This is very clever.

However, as my astute colleague @maxammann pointed out to me, this approach records coverage while the fuzzer is running. Ideally, one would fuzz for some amount of time, and then build a coverage report from the generated corpus files.
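One way to sketch that two-phase idea, stitching together the commands already quoted in this thread: fuzz first with a plain afl build, then rebuild with llvm-cov instrumentation and re-run afl over the generated queue so a report can be built afterwards. The snippet below only writes the workflow to a script file rather than executing it, since it assumes cargo-afl, cargo-llvm-cov, and the thread's `url-fuzz-target` example are all in place; the time budgets and the `out/default/queue` path are guesses to adjust.

```shell
# Sketch only: write the two-phase workflow to a script for inspection,
# since running it requires an actual cargo-afl fuzzing setup.
cat > replay-coverage.sh <<'EOF'
#!/bin/bash
set -e
# Phase 1: fuzz with a plain (non-coverage) build for a fixed time budget
# (-V is afl-fuzz's time limit in seconds).
cargo afl build
cargo afl fuzz -V 600 -i in -o out target/debug/url-fuzz-target

# Phase 2: rebuild with llvm-cov instrumentation and re-run afl briefly
# over the queue produced above, using the AFL_FUZZER_LOOPCOUNT trick
# from this thread so profiles get flushed, then render the report.
source <(cargo llvm-cov show-env --export-prefix)
cargo llvm-cov clean --workspace
cargo afl build
AFL_FUZZER_LOOPCOUNT=20 cargo afl fuzz -V 60 -i out/default/queue -o cov-out target/debug/url-fuzz-target
cargo llvm-cov report --html
EOF
chmod +x replay-coverage.sh
echo "wrote replay-coverage.sh"
```

This keeps the fuzzing run itself free of coverage instrumentation, so the measured coverage reflects the finished corpus rather than whatever the fuzzer happened to touch while coverage was being recorded.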

@njelich

njelich commented Apr 11, 2024

This could be made into a script - the fuzzer will automatically stop after x time, and you can make sure the full corpus is used for fuzzing as input, and the time is sufficient to process it. Alternatively, you could use afl run to run all the test cases from the corpus.

That would get the exact coverage for the full corpus. Something like:

for i in afl/default/queue/*; do cargo afl run ../target/debug/fuzz-target < "$i"; done

@maxammann

> Getting the exact coverage for the full corpus. Something like:

I don't fully remember whether LLVM's default coverage collection merges runs. I suspect that the profraw files might get overwritten; in that case we might need a single Rust program that executes all the inputs in a loop.

Ideally the execution of the queue would fork for every input to catch crashes (yes, typically your corpus entries do not crash, but if the SUT has global state this could still happen).

@njelich

njelich commented Apr 11, 2024

Merging runs isn't a problem. I find this to be a sufficient solution for practical use.

@njelich

njelich commented May 2, 2024

Update: I noticed that when trying to reproduce just one test case, the .profraw isn't emitted.

Running something like this:
cat in/url | ./target/debug/url-fuzz-target

When comparing to normal runs, I noticed that normal runs do not output coverage unless the AFL_FUZZER_LOOPCOUNT env variable is set to e.g. 20, which limits the afl-fuzz iterations before re-spawning.

So that makes me believe that coverage is emitted when respawning, which gets back closer to your idea, @smoelius.

I guess normally it doesn't output anything because the number of iterations needed is INT_MAX, but with a low number it consistently emits something as it processes the files.
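Building on that observation, one hedged follow-up sketch: name the profile file per process ID so each respawned target writes its own .profraw instead of clobbering a shared file, then merge the pieces with the underlying LLVM tools directly (bypassing cargo-llvm-cov). `LLVM_PROFILE_FILE` with `%p` and `llvm-profdata merge` are standard LLVM coverage mechanics; the target name and time budget are assumptions from this thread, and the commands are only written to a script here since they need a working afl setup.

```shell
# Sketch only: write the per-process-profile workflow to a script,
# since running it requires cargo-afl plus the llvm-profdata/llvm-cov tools.
cat > merge-coverage.sh <<'EOF'
#!/bin/bash
set -e
# %p expands to the process ID, so every respawned target process
# writes its own profile instead of overwriting the previous one.
export LLVM_PROFILE_FILE="afl-cov-%p.profraw"
AFL_FUZZER_LOOPCOUNT=20 cargo afl fuzz -V 60 -i in -o out target/debug/url-fuzz-target
# Merge all per-process profiles and render a summary report.
llvm-profdata merge -sparse afl-cov-*.profraw -o afl-cov.profdata
llvm-cov report target/debug/url-fuzz-target -instr-profile=afl-cov.profdata
EOF
chmod +x merge-coverage.sh
echo "wrote merge-coverage.sh"
```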
