Add style and technical guidelines for SIG Scheduling #7475
Conversation
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: alculquicondor. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
cc @kerthcet @tenzen-y @sanposhiho @Huang-Wei @ahg-g /hold
Thanks Aldo. Some comments.
proportional to the size of the scope where it is used and inversely
proportional to the number of times that it is used.
- In tests:
  - Follow a [DAMP principle](https://stackoverflow.com/a/11837973).
Do you think we should specifically guide contributors on how to write different kinds of tests? Basically, we would expect the number of tests covering a feature to follow UT > integration >> e2e. For reproducing a bug, we would lean towards writing a unit or integration test over an e2e test.
Also, I'd like to somehow mention `scheduler_perf` (in the case of k/k), e.g. share the results if the author thinks that the PR might have an impact on performance (or the reviewer asks).
Added both.
@alculquicondor Thanks for this work. I left a few comments as a sub-project maintainer.
- Use `cmp.Diff` instead of `reflect.DeepEqual`, to provide useful comparisons.
- Avoid creating or using assertion libraries.
  Use standard `t.Error` or `t.Fatal`, as necessary.
I think we may want to mention that in tests, errors should be compared using `errors.Is()` (`cmpopts.EquateErrors()`) instead of comparing the error message, if it is possible. WDYT?
Yes :)
The "if it is possible" didn't make it into the merged text.
@tenzen-y: you were probably thinking of packages that define certain errors as part of their API and then testing that those errors really are returned when they should be, right? That is what callers of the package check for via `errors.Is`.
If a package has no defined errors, then a unit test can only check that some error is returned when a failure is expected.
Then the question should be: why is the error not defined? But I would be OK having a catch-all error that can be used as the value for `wantErr` when you really don't care.
sig-scheduling/CONTRIBUTING.md (Outdated)
- Length: As a rule-of-thumb, the length of a variable name should be
  proportional to the size of the scope where it is used and inversely
  proportional to the number of times that it is used.
- In tests:
What about `t.Parallel()`? I remember seeing some test refactoring for `t.Parallel()` (but I forgot how things went). Do we want to encourage using it if the test can run safely in parallel?
Not too sure about this one. Maybe once we have made some progress running most existing tests in parallel.
LGTM, thanks! Leave the real
/lgtm Thanks Aldo!
/unhold
These are some common guidelines that we have been using over the years in SIG Scheduling.
Which issue(s) this PR fixes:
Fixes #