Surface test warnings in CI output/summaries #1054

Open
briantist opened this issue Sep 10, 2023 · 0 comments
Labels
CI/CD (related to CI/CD, not necessarily tests) · developer experience (Developer setup and experience) · tests (related to tests, not necessarily CI/CD)

@briantist (Contributor)

In pytest especially, we can see warnings if we look at the CI output, but we don't have a good way to surface these to PR authors or reviewers. If the tests don't actually fail, there's a good chance no one is looking at the output.

I'd like us to have a way to surface these warnings so that we can all see them and do something about them before they become errors.

These could be deprecation warnings about things we need to fix at some point (in library code or only in tests), warnings from our own code that shouldn't be there (especially if introduced during a PR), an issue with a dependency, and so on.

There are several stages to what we could do, and we can start small.

For example, we could add job summaries with the warning output. That still requires someone to look at the job run, but if we're already generating Markdown output, maybe we could also post it as a comment on the PR.
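
As a rough sketch of that first step, assuming the warnings have already been captured to a plain-text file (the filename here is made up), a small script could append a Markdown section to the file GitHub Actions exposes as `GITHUB_STEP_SUMMARY`:

```python
# Sketch only: turn a captured-warnings file into a GitHub Actions job summary.
# "warnings.txt" is a hypothetical path; GITHUB_STEP_SUMMARY points at the file
# that GitHub Actions renders as the job summary when Markdown is appended to it.
import os
from pathlib import Path

warnings_file = Path("warnings.txt")
lines = warnings_file.read_text().splitlines() if warnings_file.exists() else []

summary_path = os.environ.get("GITHUB_STEP_SUMMARY")
if summary_path:
    with open(summary_path, "a") as summary:
        summary.write(f"### pytest warnings ({len(lines)})\n\n")
        for line in lines:
            summary.write(f"- `{line}`\n")
        if not lines:
            summary.write("No warnings captured.\n")
```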

The pytest-capture-warnings plugin might be a good way to get at the output itself. Its output is in flake8 format, so we could also take advantage of other tools that can render that format.

The trouble with reporting alone is that warnings will be promptly ignored. In a PR, showing all warnings, including ones that existed before the PR, makes it easy to miss new ones.

We could run the tests from before the PR and compare results, but then we double the test runs.
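
The comparison itself would be simple once both runs have written their warnings somewhere; here's a rough sketch assuming two flake8-style files (the filenames are made up) produced by a base-branch run and a PR-branch run:

```python
# Sketch only: report warnings that appear in the PR run but not the base run.
# "base_warnings.txt" and "pr_warnings.txt" are hypothetical filenames from two
# separate test runs, one flake8-style line (path:line:col: message) per warning.
from pathlib import Path


def load(path):
    entries = set()
    for raw in Path(path).read_text().splitlines():
        parts = raw.split(":", 3)
        if len(parts) == 4:
            # Drop the line/column so a warning that merely moved isn't "new".
            entries.add((parts[0], parts[3].strip()))
        else:
            entries.add((raw, ""))
    return entries


new_warnings = load("pr_warnings.txt") - load("base_warnings.txt")
for filename, message in sorted(new_warnings):
    print(f"new warning in {filename}: {message}")
```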


I'd like to get to a point where there are no warnings in the pytest output, and maybe then we can fail on warnings. But we'd need to figure out how to handle the cases where warnings are expected: for example, if we're testing that a function warns, we should also be suppressing that warning from the output.
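
pytest already has the pieces for that last part: running with `-W error` (or `filterwarnings = error` in the config) turns unexpected warnings into failures, while `pytest.warns` both asserts that the warning is raised and captures it so it stays out of the end-of-run summary. A minimal sketch, with a made-up function:

```python
import warnings

import pytest


def old_api():
    # Hypothetical library function that deliberately warns.
    warnings.warn("old_api is deprecated, use new_api", DeprecationWarning)
    return 42


def test_old_api_warns():
    # pytest.warns asserts the warning is emitted *and* captures it, so it
    # shouldn't show up in the warnings summary or trip -W error.
    with pytest.warns(DeprecationWarning, match="use new_api"):
        assert old_api() == 42
```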

If we can't do that, we may want a way to ensure that any warnings we do see are expected, perhaps with an ignore file. But that gets complicated, and I'd rather not go down that road if possible.

briantist added the CI/CD, tests, and developer experience labels on Sep 10, 2023