
Enabled code coverage measurement #280 (Draft)

jxyang wants to merge 2 commits into base: oe_port
Conversation

@jxyang (Contributor) commented May 20, 2020

This enables code coverage with a build config, namely:

make CODE_COVERAGE=true

Once sgx-lkl is built this way, the script .azure-pipelines/scripts/measure_code_cov.sh runs the test cases under tests and writes the final accumulated code coverage data to sgx-lkl/total_cov.info.
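For reference, the whole flow is two commands, both named above (nothing new introduced here):

make CODE_COVERAGE=true                          # build with coverage instrumentation
.azure-pipelines/scripts/measure_code_cov.sh     # run tests, accumulate total_cov.info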

@mingweishih left a comment

LGTM, with just comments on the hard-coded paths.
Since this PR takes a dependency on libgcc/libgcov and the Makefiles of sgx-musl, I assume those will be taken care of before this PR goes in.

Also, lcov is a prerequisite of the measurement; CI/CD should handle that as well.

A follow-up will be converting the lcov report to another format that the DevOps pipeline can consume. One suggestion would be Cobertura, and the conversion can be done with a Python tool.
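For example, a minimal sketch of that follow-up, assuming the Python tool is the lcov_cobertura package (the comment above does not name a specific tool, so that choice is an assumption):

# Install prerequisites in CI: lcov for capture, lcov_cobertura for conversion
sudo apt-get install -y lcov
pip install lcov_cobertura
# Convert the accumulated lcov tracefile into Cobertura XML for the pipeline
lcov_cobertura total_cov.info --base-dir "$SGXLKL_ROOT" --output coverage.xml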


# Gather all necessary files for lcov
cp -r $SGXLKL_ROOT/src/* $SGXLKL_ROOT/cov
sudo cp -r img/home/xuejun/sgx-lkl/build_musl $SGXLKL_ROOT/cov


I think the hard-coded path needs to be changed.

# Gather all necessary files for lcov
cp -r $SGXLKL_ROOT/src/* $SGXLKL_ROOT/cov
sudo cp -r img/home/xuejun/sgx-lkl/build_musl $SGXLKL_ROOT/cov
sudo cp -r img/home/xuejun/sgx-lkl/sgx-lkl-musl $SGXLKL_ROOT/cov


Same here.
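A minimal sketch of one possible fix, assuming the in-image layout mirrors the builder's home directory; IMG_ROOT is a hypothetical variable name, not something from this PR:

# Hypothetical: derive the in-image source root instead of hard-coding a user's home
IMG_ROOT="img/home/${USER}/sgx-lkl"
sudo cp -r "$IMG_ROOT/build_musl" "$SGXLKL_ROOT/cov"
sudo cp -r "$IMG_ROOT/sgx-lkl-musl" "$SGXLKL_ROOT/cov"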

@hukoyu (Collaborator) left a comment

A few comments below.

# Get the timeout from the test module
DEFAULT_TIMEOUT=300
timeout=$(make gettimeout 2> /dev/null)
[[ $? != 0 ]] && timeout=$DEFAULT_TIMEOUT

Collaborator:

-ne would be better for numeric comparison.
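Inside [[ ]], != compares its operands as strings while -ne compares them as integers, so the suggested form would be:

# -ne treats the exit status as a number; != would compare it as a string
[[ $? -ne 0 ]] && timeout=$DEFAULT_TIMEOUT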

timeout --kill-after=$(($timeout + 15)) $timeout make run-hw
make_exit=$?

if [ $make_exit != 0 ]; then

Collaborator:

-ne

timeout --kill-after=$(($timeout + 15)) $timeout make run-sw
make_exit=$?

if [ $make_exit != 0 ]; then

Collaborator:

-ne

config.mak (comment resolved)
@@ -0,0 +1,31 @@
#!/bin/bash

Contributor:

I don't think it makes sense to add new scripts that duplicate the non-coverage test logic. Why not extend the existing scripts and add an extra environment variable to enable coverage?

@jxyang (Author) commented May 26, 2020

There are key differences between this script and the test runner script.

  • This script assumes sgx-lkl was built with CODE_COVERAGE=1.
  • We ideally want to include samples in the code coverage measurement.
  • For each test/sample, we run run-hw and run-sw back to back and extract the collective coverage data from the image.
  • The objective is to measure code coverage, so the coverage data for failed tests is included in the final aggregated metrics.

If you compare this script with test_runner, and measure_one_cov with run_test, there isn't much duplicated code. I prefer to keep them separate for easier maintenance; a rough sketch of the flow is below.
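For illustration only, a rough sketch of the per-test flow described in the list above; the loop structure and file names are assumptions rather than the actual script, and the cov/ staging directory follows the snippets quoted earlier:

total="$SGXLKL_ROOT/total_cov.info"
for t in "$SGXLKL_ROOT"/tests/*/; do
    # Run both modes back to back so their counters accumulate together
    make -C "$t" run-hw
    make -C "$t" run-sw
    # Capture this test's counters from the sources/objects staged under cov/
    lcov --capture --directory "$SGXLKL_ROOT/cov" -o "$t/cov.info"
    # Fold into the running total; failed tests contribute as well
    if [ -f "$total" ]; then
        lcov -a "$total" -a "$t/cov.info" -o "$total"
    else
        cp "$t/cov.info" "$total"
    fi
done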

Contributor:

Thanks for explaining. I'm trying to think through how this fits into CI. Given that debug/release build modes have slightly different code paths, it may make sense to capture code coverage in both modes and merge it, or to do it for just one mode if the differences are not worth measuring. Either way, it would mean adding at least one new build job with CODE_COVERAGE=1 and corresponding test/coverage job(s).
You're saying that your script runs both hw and sw. This is a bit unfortunate because it prevents running sw on non-SGX machines and in parallel. Would it be possible to run them separately, dump the coverage files, and merge them in a follow-up step?

@jxyang (Author):

It's hard to merge coverage data from two VMs; it would require a global VM for aggregation. We will leave the work of integrating into the pipeline to another PR.

Contributor:

That's actually not that hard to do: you can specify dependencies between jobs and use build artifacts to upload/download job data.
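For instance, if the hw and sw jobs each published their lcov tracefile as a build artifact, the follow-up job's merge would reduce to a single lcov call (file names here are hypothetical):

# Merge the per-mode tracefiles downloaded from the hw and sw job artifacts
lcov -a hw_cov.info -a sw_cov.info -o total_cov.info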

@jxyang marked this pull request as draft on August 4, 2020 at 00:46
4 participants