
Clearer warning when not all tests are run #459

Merged
2 commits merged into trentm:master on Jun 22, 2023

Conversation

@Crozzers (Contributor) commented on Jul 19, 2022

This PR fixes #458.

It works by capturing and reading each line of each test runner's stderr in real time. If a line contains a warning, it is saved to be re-printed later. Once all of the tests have finished, the warnings are printed to stdout along with the version of Python that raised the warning.

This works by capturing stderr from the tests and checking whether each line contains the string "WARNING:test:". If a line contains this string, it is saved and re-printed once all tests have been run.
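
A minimal sketch of that approach, assuming a testall.py-style driver: the interpreter list, the test.py command, and the helper name run_and_collect_warnings are illustrative rather than the actual code in this PR; only the "WARNING:test:" marker comes from the description above.

```python
# Sketch: run each per-interpreter test command, watch its stderr line by
# line, and stash any "WARNING:test:" lines to re-print once every run ends.
import subprocess

WARNING_MARKER = "WARNING:test:"  # marker string this PR checks for

def run_and_collect_warnings(cmd):
    """Run one test command, echo its stderr as it arrives, and return any
    warning lines that were seen."""
    warnings = []
    proc = subprocess.Popen(cmd, stderr=subprocess.PIPE, text=True, bufsize=1)
    for line in proc.stderr:       # read stderr in real time
        print(line, end="")        # still show the normal test output
        if WARNING_MARKER in line:
            warnings.append(line.rstrip())
    proc.wait()
    return warnings

if __name__ == "__main__":
    pythons = ["python3.9", "python3.10"]  # hypothetical interpreter list
    collected = {py: run_and_collect_warnings([py, "test.py"]) for py in pythons}

    # Once all runs have finished, re-print the warnings to stdout along with
    # the Python version that raised them, so they aren't pushed off screen.
    for py, warns in collected.items():
        for warn in warns:
            print(f"[{py}] {warn}")
```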
@nicholasserra (Collaborator) commented:

I see this is working, but it raises some questions for me around testing in general.

This made me realize that I've been almost always using make testone for running tests locally. This just runs the suite against the current Python.

Currently, make test actually finds a bunch of broken Python versions on my system and raises a bunch of errors. I just added some fixes locally to get it to even run. And even then, it only ran against my current version lol

Question: have you been running the full testall suite this whole time, and it's been working? I've been relying on CI mostly for multi-version testing.

Wondering if there's a way to pull this out into something more global so both make test and make testone work with it.

Alternatively, maybe it's time to finally take a closer look at #415.

@Crozzers (Contributor, Author) commented:

Whenever I run the test suite locally I use make test, which, on my system, runs against Python 3.7-3.10. Python 3.7 and 3.8 don't have Pygments installed, so those tests are skipped, but apart from that all of the tests appear to pass. I just downloaded Python 3.5 and 3.6 and they pass too. I rely on the CI to run tests for different OSes, but I don't usually expect the OS to have an impact on the test results.

In terms of #415, I think moving to a modern, supported testing framework wouldn't be a bad thing; however, I do have a couple of minor concerns:

  1. Is there a way to run tests across multiple Python versions, as with the current system?
    • Cloning the pytest PR and running py.test -k "not knownfailure" test only ran the suite against the current Python version.
    • This could be solved with another system similar to testall.py (a rough sketch of that approach follows this list). We could still run into the same issue of warnings/errors being pushed off the screen by subsequent test results, but given how much more compact the pytest output is, I don't think it would be as big an issue.
  2. Pytest currently supports Python 3.7+. The last version that supported Py3.5 (which this library still lists support for) was v6.1.2 from October 2020.
    • Perhaps this library would have to drop support for Py3.5 and Py3.6, since Py3.6 went EOL 7 months ago.
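
For concern 1, a testall.py-style wrapper around pytest is one option. The sketch below is only illustrative and assumes pytest is installed for each interpreter; the candidate interpreter names are made up, and only the -k "not knownfailure" expression comes from the command quoted above.

```python
# Sketch: invoke the same pytest command once per installed interpreter,
# mirroring what testall.py does for the current unittest-based suite.
import shutil
import subprocess
import sys

CANDIDATES = [f"python3.{minor}" for minor in range(7, 13)]  # illustrative

def main():
    failures = []
    for py in CANDIDATES:
        if shutil.which(py) is None:
            continue  # interpreter not installed on this machine
        print(f"=== {py} ===")
        result = subprocess.run(
            [py, "-m", "pytest", "-k", "not knownfailure", "test"]
        )
        if result.returncode != 0:
            failures.append(py)
    if failures:
        print("pytest failed under:", ", ".join(failures))
        sys.exit(1)

if __name__ == "__main__":
    main()
```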

Overall, I think it's doable.

@nicholasserra (Collaborator) commented:

One year re-look:

I think this is fine as-is. I'm going to simply add a commit after merge to move the test.py warning logs to happen after the test run is complete, so it's apparent when running testone that warnings occurred too.
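
A tiny illustration of that idea (not the actual follow-up commit; the helper names are hypothetical): buffer warnings while the tests execute and only print them once the run has finished, so they stay visible under make testone as well.

```python
# Hypothetical sketch: defer warning output until after the test run.
_deferred_warnings = []

def warn(msg):
    # Called wherever test.py currently logs a warning mid-run.
    _deferred_warnings.append(msg)

def report_deferred_warnings():
    # Called once, after the whole test run completes.
    for msg in _deferred_warnings:
        print(f"WARNING:test: {msg}")
```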

@nicholasserra merged commit e26158c into trentm:master on Jun 22, 2023