
Add style and technical guidelines for SIG Scheduling #7475

Merged — 2 commits into kubernetes:master on Aug 19, 2023

Conversation

alculquicondor (Member):

These are some common guidelines that we have been using over the years in SIG Scheduling.

Which issue(s) this PR fixes:

Fixes #

@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/S Denotes a PR that changes 10-29 lines, ignoring generated files. labels Aug 17, 2023
@k8s-ci-robot (Contributor):

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: alculquicondor

The full list of commands accepted by this bot can be found here.

The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added sig/scheduling Categorizes an issue or PR as relevant to SIG Scheduling. approved Indicates a PR has been approved by an approver from all required OWNERS files. labels Aug 17, 2023
@alculquicondor (Member, Author):

cc @kerthcet @tenzen-y @sanposhiho @Huang-Wei @ahg-g
Anything else you can think of?

/hold

@k8s-ci-robot k8s-ci-robot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Aug 17, 2023
@Huang-Wei (Member) left a comment:
Thanks Aldo. Some comments.

Resolved review threads on sig-scheduling/CONTRIBUTING.md.
> …proportional to the size of the scope where it is used and inversely
> proportional to the number of times that it is used.
> - In tests:
>   - Follow a [DAMP principle](https://stackoverflow.com/a/11837973).
Member:
Do you think we should specifically guide users on how to write different kinds of tests? Basically, we expect the number of tests covering a feature to follow UT > integration >> e2e. For reproducing a bug, we would lean towards writing a unit or integration test over an e2e test.

Member:

Also, I'd like to somehow mention scheduler_perf (in the case of k/k), e.g. sharing the results if the author thinks the PR might have a performance impact (or if the reviewer asks).

Member Author:

Added both.

@tenzen-y (Member) left a comment:

@alculquicondor Thanks for this work. I left a few comments as a sub-project maintainer.

Resolved review thread on sig-scheduling/CONTRIBUTING.md.
> - Use `cmp.Diff` instead of `reflect.DeepEqual`, to provide useful comparisons.
> - Avoid creating or using assertion libraries.
>   Use standard `t.Error` or `t.Fatal`, as necessary.

Member:

I think we may want to mention that, in tests, errors should be compared using errors.Is() (cmpopts.EquateErrors()) instead of comparing error messages, where possible.

WDYT?

Member Author:

Yes :)

Contributor:

The "if it is possible" didn't make it into the merged text.

@tenzen-y: you were probably thinking of packages that define certain errors as part of their API, and then testing that those errors really are returned when they should be, right? That is what callers of the package check for via errors.Is.

If a package has no defined errors, then a unit test can only check that some error is returned when a failure is expected.

Copy link
Member Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Then the question should be why the error is not defined.

But I would be OK having a catch-all error that can be used as the value for wantErr when you really don't care.

> - Length: As a rule-of-thumb, the length of a variable name should be
>   proportional to the size of the scope where it is used and inversely
>   proportional to the number of times that it is used.
> - In tests:
Member:

What about t.Parallel()? I remember seeing some test refactoring for t.Parallel() (but I forgot how things went). Do we want to encourage using it when a test can safely run in parallel?

Member Author:

Not too sure about this one. Maybe once we have made some progress running most existing tests in parallel.

@k8s-ci-robot k8s-ci-robot added size/M Denotes a PR that changes 30-99 lines, ignoring generated files. and removed size/S Denotes a PR that changes 10-29 lines, ignoring generated files. labels Aug 18, 2023
@sanposhiho (Member):

LGTM, thanks! I'll leave the real /lgtm to others.

@Huang-Wei (Member):

/lgtm

Thanks Aldo!

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Aug 19, 2023
@Huang-Wei (Member):

/unhold

@k8s-ci-robot k8s-ci-robot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Aug 19, 2023
@k8s-ci-robot k8s-ci-robot merged commit 9b5ee1c into kubernetes:master Aug 19, 2023
3 checks passed