Model validation #368

Open
sbfnk opened this issue Feb 9, 2023 · 4 comments
Labels: enhancement, help wanted

Comments

@sbfnk
Contributor

sbfnk commented Feb 9, 2023

At the moment the tests only validate the model itself in a few specific ways (e.g. update_infectiousness, generate_infections). There is also the synthetic validation, but it requires a manual step of checking figures etc. It might be good to have a test where the exact output of a model run (with a set random seed) is checked for equality against an expected output.

As an example, PR #150 introduced a bug (fixed in a1885c5) that would have had a drastic impact on outputs but passed all the tests and only showed up somewhat coincidentally in the checks.
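A minimal sketch of what such a seeded regression test could look like, using testthat; `run_model()`, `example_data`, and the stored reference file are hypothetical placeholders, and this assumes the run is actually deterministic for a fixed seed:

```r
library(testthat)

# Hypothetical regression test: compare a seeded model run against a stored
# reference output. run_model() and reference_output.rds are placeholders.
test_that("model output is unchanged for a fixed seed", {
  set.seed(42)
  output <- run_model(example_data)             # hypothetical model wrapper
  reference <- readRDS("reference_output.rds")  # previously stored expected output
  expect_equal(output, reference, tolerance = 1e-8)
})
```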

@seabbs
Contributor

seabbs commented Feb 9, 2023

Forecast.vocs and epinowcast both have examples of approaches to this that might help when designing one here.

Runtime constraints and stochastic variation are both things that need to be considered when testing the complete model.

An option we could use would be to test the CRPS in the synthetic validation and throw a warning if it changes relative to some benchmark. This would be better than what we currently have, but still not ideal.
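A rough illustration of that kind of CRPS benchmark check; `obs`, `samples`, `benchmark_crps`, and the 10% threshold are all illustrative assumptions, not part of the current validation code:

```r
library(scoringRules)

# Hypothetical CRPS benchmark check for the synthetic validation:
# `obs` is a vector of observed values, `samples` a matrix of posterior
# draws (rows = observations, columns = samples), and `benchmark_crps`
# a stored reference score from a previous release.
crps <- mean(crps_sample(y = obs, dat = samples))
if (crps > benchmark_crps * 1.1) {
  warning(
    sprintf("Mean CRPS %.3f exceeds benchmark %.3f by more than 10%%",
            crps, benchmark_crps)
  )
}
```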

@seabbs added the enhancement and help wanted labels on Feb 9, 2023
@seabbs
Contributor

seabbs commented Feb 9, 2023

The new touchstone setup could also be helpful here (its primary use case is testing runtimes), but it is not quite working at the moment.

@sbfnk
Contributor Author

sbfnk commented Feb 10, 2023

Runtime constraints and stochastic variation are both things that need to be considered when testing the complete model.

If setting a seed we shouldn't get stochastic variation, right?

An option we could use would be to test the CRPS in the synthetic validation and throw a warning if it changes relative to some benchmark. This would be better than what we currently have, but still not ideal.

I agree, that is a good idea.

@seabbs
Contributor

seabbs commented Feb 13, 2023

If setting a seed we shouldn't get stochastic variation, right?

I've struggled in the past to make Stan deterministic, but there is also a question of meaningful stochastic variation (i.e. when a change makes the algorithm less stable but faster on average).
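For reference, fixing the sampler seed (together with the chain and core configuration) is usually what is needed for reproducible rstan draws, though whether the result is bitwise identical across platforms, compilers, or Stan versions is less certain; `stan_model` and `stan_data` below are placeholders:

```r
library(rstan)

# Fixing the seed and the chain/core configuration is typically required for
# reproducible draws; results may still differ across platforms, compilers,
# or Stan versions. `stan_model` and `stan_data` are hypothetical placeholders.
fit <- sampling(
  stan_model,
  data = stan_data,
  chains = 2,
  iter = 1000,
  seed = 1234,   # fixed RNG seed for the sampler
  cores = 1      # single core avoids scheduling-related variation
)
```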
