Refactor Jetstream client tests #10372

Open
mikewilli opened this issue Mar 6, 2024 · 0 comments
mikewilli (Contributor) commented Mar 6, 2024

The Jetstream client tests carry a lot of static test data, so any slight schema change becomes (at best) a huge multi-thousand-line PR or (at worst) a nightmare of unclear test-data updates. We should refactor these tests into something more manageable.

I think the ideal outcome would be well-defined schemas in Pydantic, using the data-generation library that the schemas package already has to produce random test data as a form of fuzz testing. We could also keep a folder of known-good and known-bad JSON inputs and assert that each one passes or fails during ingestion and produces the expected output. When new failures show up in production, we add those inputs to the known-good folder and update ingestion to handle them, and the tests pick them up automatically. This is essentially how nimbus-shared, and now nimbus-schemas, test other shared schemas.
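
A rough sketch of what both pieces could look like, assuming Pydantic v2 and a polyfactory-style model factory for the random data generation. `JetstreamDataPoint`, `JetstreamDataPointFactory`, and the `fixtures/good` / `fixtures/bad` layout are hypothetical names for illustration, not the real Jetstream schemas:

```python
from pathlib import Path

import pytest
from polyfactory.factories.pydantic_factory import ModelFactory
from pydantic import BaseModel, ValidationError


class JetstreamDataPoint(BaseModel):
    """Stand-in for a real, well-defined Jetstream schema."""

    metric: str
    value: float


class JetstreamDataPointFactory(ModelFactory[JetstreamDataPoint]):
    """Generates random but schema-valid instances for fuzz-style tests."""

    __model__ = JetstreamDataPoint


def test_random_data_round_trips():
    # Fuzzing: any randomly generated instance should survive
    # serialization and re-validation unchanged.
    point = JetstreamDataPointFactory.build()
    assert JetstreamDataPoint.model_validate_json(point.model_dump_json()) == point


FIXTURES = Path(__file__).parent / "fixtures"


@pytest.mark.parametrize(
    "path", sorted((FIXTURES / "good").glob("*.json")), ids=lambda p: p.name
)
def test_known_good_inputs_validate(path):
    # Known-good inputs (including past production failures, once
    # ingestion handles them) must always parse.
    JetstreamDataPoint.model_validate_json(path.read_text())


@pytest.mark.parametrize(
    "path", sorted((FIXTURES / "bad").glob("*.json")), ids=lambda p: p.name
)
def test_known_bad_inputs_are_rejected(path):
    # Known-bad inputs must keep failing validation.
    with pytest.raises(ValidationError):
        JetstreamDataPoint.model_validate_json(path.read_text())
```

The two halves complement each other: the factory covers a broad range of schema-valid shapes without hand-written data, while the fixture folders pin down the specific real-world cases we care about, and adding a new case is just dropping a JSON file into the right folder.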

