
[Consideration] Principle of adopting analysis tools #199

Closed
MoonkiHong opened this issue Dec 13, 2020 · 5 comments
Assignees: MoonkiHong
Labels: help wanted, question

Comments

@MoonkiHong
Contributor

Originally suggested by @mgjeong (#193 (comment))

We need to establish a principle for adopting analysis tools (for security vulnerabilities, code quality, and so on) that has the consent of all our community developers. One good example is LGTM, which is currently adopted for validating security vulnerabilities in our project.

Once we have established the principle, we will decide whether to keep our currently adopted tools or to take an alternative (including LGTM).

MoonkiHong added the help wanted and question labels on Dec 13, 2020
MoonkiHong self-assigned this on Dec 13, 2020
@tdrozdovsky
Contributor

In my opinion, good code analysis systems will help us improve the quality of our product. Before adding any system, we can raise the issue and discuss it. I think a good reference point would be to collaborate with OpenSSF and use the practices it describes.
I also suggest analyzing the following checklist taken from this organization:

The following checks are all run against the target project:

| Name | Description |
| --- | --- |
| Security-MD | Does the project contain a security policy? |
| Contributors | Does the project have contributors from at least two different organizations? |
| Frozen-Deps | Does the project declare and freeze dependencies? |
| Signed-Releases | Does the project cryptographically sign releases? |
| Signed-Tags | Does the project cryptographically sign release tags? |
| CI-Tests | Does the project run tests in CI, e.g. GitHub Actions, Prow? |
| Code-Review | Does the project require code review before code is merged? |
| CII-Best-Practices | Does the project have a CII Best Practices Badge? |
| Pull-Requests | Does the project use Pull Requests for all code changes? |
| Fuzzing | Does the project use OSS-Fuzz? |
| SAST | Does the project use static code analysis tools, e.g. CodeQL, SonarCloud? |
| Active | Did the project get any commits and releases in the last 90 days? |
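
For reference, a minimal sketch of running these checks locally with the Scorecard CLI, assuming the scorecard binary is installed and a GitHub token is available; the repository path and token value below are placeholders, and flag/variable names should be verified against the Scorecard documentation:

```sh
# Minimal sketch: run the OpenSSF Scorecard CLI against a target repository.
# A GitHub token is needed so the tool can query the GitHub API without
# hitting rate limits; the token value and repository path are placeholders.
export GITHUB_AUTH_TOKEN=<personal-access-token>

scorecard --repo=github.com/<org>/<repo>
```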

Regarding the analysis systems already supported in our project:
- LGTM (added by @MoonkiHong) has already shown good results and helps us write more competitive code.
- CodeQL (added by @t25kim) helps track quality at the pull request stage.

@MoonkiHong @t25kim, thank you very much for your contributions to improving our project.

@tiokim
Contributor

tiokim commented Dec 14, 2020

@tdrozdovsky Thank you for the valuable idea.

> In my opinion, good code analysis systems will help us improve the quality of our product. Before adding any system, we can raise the issue and discuss it.

I fully agree with your idea of discussing a tool's usability after opening an issue.
I think it would be nice to apply a tool during a PoC period and check its feasibility before deciding whether to adopt it.

@tiokim
Contributor

tiokim commented Dec 14, 2020

The following is the result of running the Scorecard tool from OpenSSF.

RESULTS
-------
Active: Pass 10
CI-Tests: Pass 10
CII-Best-Practices: Pass 10
Code-Review: Pass 10
Contributors: Pass 10
Frozen-Deps: Fail 5
Fuzzing: Fail 10
Pull-Requests: Pass 9
SAST: Fail 9
Security-Policy: Fail 10
Signed-Releases: Fail 10
Signed-Tags: Fail 0

@MoonkiHong
Contributor Author

> @tdrozdovsky Thank you for the valuable idea.
>
> > In my opinion, good code analysis systems will help us improve the quality of our product. Before adding any system, we can raise the issue and discuss it.
>
> I fully agree with your idea of discussing a tool's usability after opening an issue.
> I think it would be nice to apply a tool during a PoC period and check its feasibility before deciding whether to adopt it.

@t25kim Fully agree with you. Thank you for your suggestion!!!

> The following is the result of running the Scorecard tool from OpenSSF.
>
> RESULTS
> -------
> Active: Pass 10
> CI-Tests: Pass 10
> CII-Best-Practices: Pass 10
> Code-Review: Pass 10
> Contributors: Pass 10
> Frozen-Deps: Fail 5
> Fuzzing: Fail 10
> Pull-Requests: Pass 9
> SAST: Fail 9
> Security-Policy: Fail 10
> Signed-Releases: Fail 10
> Signed-Tags: Fail 0

@t25kim This is fantastic! Thank you for your proactive analysis with the OpenSSF Scorecard.

@tdrozdovsky
Contributor

The checklist I offered above is just an overview of what Scorecard implements; an example of its results was provided by @t25kim (thank you). So I understand that you (@MoonkiHong, @t25kim) support using Scorecard in our project. Now let's wait for the opinions of the other development participants.

P.S. I hope that if other maintainers support this idea, we will implement it step by step together.
Thank you again.
