[WIP] Add pytest-based tests and remove nose tests. #721

Open · wants to merge 2 commits into master

Conversation

ketch (Member) commented May 7, 2024

Since nose is dead, this PR begins the switch to using pytest in PyClaw. Some notes:

  • I'm not using the intricate machinery from pyclaw.utils that helped to generate many variants of the same test. That code doesn't immediately work with pytest (which doesn't support yielding tests) and it seems in some ways preferable to have more straightforward (easy to understand) testing code, even if it means being a bit more verbose about the definition of the tests.
  • In addition to the regression tests (which check that the output hasn't changed from what it used to be), I've added some accuracy tests where we run on a reasonably fine grid and check against the exact solution.
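
For concreteness, here is a minimal sketch of what one of the accuracy tests could look like, using the standard 1D periodic advection example. The helper name `run_advection`, the grid size, and the tolerance are illustrative placeholders rather than the actual code in this PR:

```python
import numpy as np
from clawpack import pyclaw, riemann


def run_advection(num_cells=400):
    """Run 1D periodic advection for one period and return the max error vs. the exact solution."""
    solver = pyclaw.ClawSolver1D(riemann.advection_1D)
    solver.bc_lower[0] = pyclaw.BC.periodic
    solver.bc_upper[0] = pyclaw.BC.periodic

    x = pyclaw.Dimension(0.0, 1.0, num_cells, name='x')
    domain = pyclaw.Domain(x)
    state = pyclaw.State(domain, solver.num_eqn)
    state.problem_data['u'] = 1.0  # advection speed

    xc = state.grid.x.centers
    state.q[0, :] = np.sin(2 * np.pi * xc)

    claw = pyclaw.Controller()
    claw.solution = pyclaw.Solution(state, domain)
    claw.solver = solver
    claw.tfinal = 1.0            # one full period, so the exact solution equals the initial data
    claw.keep_copy = True
    claw.output_format = None    # no files written during the test
    claw.verbosity = 0
    claw.run()

    return np.max(np.abs(claw.frames[-1].q[0, :] - np.sin(2 * np.pi * xc)))


def test_advection_accuracy():
    # Tolerance is a placeholder; it should be set based on the observed error.
    assert run_advection(num_cells=400) < 1e-3
```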

ketch requested review from mandli and rjleveque on May 7, 2024 at 07:01
mandli (Member) commented May 7, 2024

This works well for me.

I am on board with the idea of making the tests more verbose and also adding accuracy tests where we can. Even better would be convergence tests, though I imagine those would be a bit tricky.
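
A convergence check could be fairly compact. The sketch below assumes a hypothetical helper `run_advection(num_cells)` (like the one sketched in the PR description) lives in the same test module and returns the error against the exact solution; the order threshold is a placeholder to be calibrated against the actual scheme:

```python
import numpy as np


def test_advection_convergence():
    # Estimate the observed order of accuracy from two resolutions.
    # Assumes run_advection(num_cells) from the accuracy-test sketch above
    # is in the same module and returns the max error.
    err_coarse = run_advection(num_cells=100)
    err_fine = run_advection(num_cells=200)
    observed_order = np.log2(err_coarse / err_fine)
    # Placeholder threshold for a nominally second-order method; limiters can
    # reduce the observed order in the max norm near smooth extrema.
    assert observed_order > 1.5
```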

rjleveque (Member) commented:

I also agree it would be best to make the tests more verbose and easier to understand and modify.

I wonder about running convergence tests, since part of the point of these tests is that they run very quickly, so we run them frequently and they can be run automatically for PRs. Testing convergence is of course important, but once it's been determined that a particular test behaves properly in that regard, isn't it enough to check a few things coming out of one quick run to make sure the example hasn't broken due to changes elsewhere? I thought that was the point of these unit tests, not to validate the numerical method.

We should talk about how to make similar changes in other repos, e.g. updating clawpack/amrclaw#279.

A couple other thoughts regarding the Fortran code tests:

  • Rather than redirecting output to a scratch directory, I think it would be simpler if we just sent it to _output as usual. These tests all produce very minimal output, and when things go wrong it would be easier to find the output in the standard location.
  • In the old system, when tests pass, the output directory is deleted. If possible it would be nice to add an option so that the tests can be run with the output retained, which is often useful when trying to do a dual debug to figure out what went wrong in a new version where the test does not pass. Sometimes one can just do `make .output` at the command line to regenerate the output, but sometimes the tests set up data that is used in the test and then it's not so easy to figure out how to run the test and view the output (or at least that's my memory; admittedly I haven't done this in a while).
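
One way to get that behavior with pytest would be a command-line flag defined in a `conftest.py`. The flag name `--keep-output` and the `output_dir` fixture below are only a sketch of the idea, not something already in this PR:

```python
# conftest.py (sketch)
import shutil

import pytest


def pytest_addoption(parser):
    parser.addoption("--keep-output", action="store_true", default=False,
                     help="Keep _output directories even after tests pass.")


@pytest.fixture
def output_dir(request):
    """Yield an _output path next to the test module; delete it afterwards unless --keep-output is given."""
    outdir = request.path.parent / "_output"
    yield str(outdir)
    if not request.config.getoption("--keep-output"):
        shutil.rmtree(outdir, ignore_errors=True)
```

A test would then point the controller at this directory (e.g. `claw.outdir = output_dir`), and running `pytest --keep-output` would leave `_output` in the standard location for inspection.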

ketch (Member, Author) commented May 7, 2024

Thanks for the comments! In the meantime, I did figure out how to set up similar tests with different function arguments in a more streamlined way: https://docs.pytest.org/en/7.1.x/example/parametrize.html
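
For reference, a parametrized version of the earlier accuracy sketch might look roughly like this; the cell counts and tolerances are placeholders, and `run_advection` is the hypothetical helper from that sketch:

```python
import pytest


@pytest.mark.parametrize("num_cells,tol", [
    (100, 1e-1),   # placeholder tolerances, to be calibrated
    (400, 1e-3),
])
def test_advection_accuracy(num_cells, tol):
    # Reuses the hypothetical run_advection(num_cells) helper sketched earlier,
    # which returns the max error against the exact solution.
    assert run_advection(num_cells=num_cells) < tol
```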

But maybe it's better to just be verbose and explicit. We can discuss it later today.
