This tool could be very useful for catching regressions. It scores based on the number of packages that lack licenses, but it doesn't seem to care which license a package has. Still, it's good for catching regressions via a workflow that compares a PR against master.
I rather like this idea. For a simple first step, we can make it part of our CI: just generate the scorecard right after we generate the SBOM and report the results. Do we do anything like this for code quality or other CI steps that have quantitative scores?
We generate the SBOM (and collected_sources) here, so that is probably the natural place.
Unsure whether I would make it part of the `make sbom` command (probably not) or a GitHub Actions step (probably yes).
I don't know which parts belong in the Makefile versus a workflow file, but it would be good to capture the full scorecard diffs in the workflow output, so one can see the nature of any regression (a license regression vs. something else).
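As a rough sketch of what the comparison step could look like (the `check_regression` helper and the idea of passing the two ratios in as arguments are assumptions, not existing EVE tooling):

```shell
# Hypothetical CI helper: exit nonzero when the PR's scorecard ratio
# drops below master's. Each ratio would come from something like:
#   sbom-scorecard score --outputFormat json <file>.spdx | jq .Total.Ratio
check_regression() {
  # $1 = master ratio, $2 = PR ratio; awk does the floating-point
  # comparison that POSIX sh lacks.
  awk -v m="$1" -v p="$2" 'BEGIN { exit (p < m) ? 1 : 0 }'
}

check_regression 0.916791 0.92 && echo "no regression"  # prints "no regression"
```

A workflow step would run this after generating both SBOMs and fail the job on a nonzero exit, which makes the regression visible directly in the PR checks.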
Describe the solution you'd like
I tried this. I downloaded sbom-scorecard from
https://github.com/eBay/sbom-scorecard
Then, e.g.:

```shell
docker run --rm lfedge/eve:10.4.0-kvm-amd64 sbom > /tmp/10.4.0.spdx
sbom-scorecard score --outputFormat json /tmp/10.4.0.spdx | jq .Total.Ratio
```

which prints:

```
0.916791
```
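To capture the full scorecard (not just `.Total.Ratio`) for diffing between branches, one could canonicalize the JSON with `jq -S` so the diff is stable. A minimal sketch; the inline stand-in JSON and file paths below are placeholders, with only the `Total.Ratio` field taken from the actual output above:

```shell
# The real scorecard would come from something like:
#   docker run --rm lfedge/eve:10.4.0-kvm-amd64 sbom > pr.spdx
#   sbom-scorecard score --outputFormat json pr.spdx > /tmp/pr.score.json
# Here we demonstrate on an inline stand-in:
echo '{"Total":{"Ratio":0.916791}}' > /tmp/pr.score.json

# jq -S sorts object keys, giving a canonical form suitable for diffing
jq -S . /tmp/pr.score.json > /tmp/pr.score.sorted.json
cat /tmp/pr.score.sorted.json
```

In a workflow, `diff -u master.score.json pr.score.json` on the sorted files would then show exactly which score changed, e.g. whether it was the license check or something else.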