
Bayesian SINDy #440

Merged: 60 commits, Feb 20, 2024

Conversation

@mikkelbue (Collaborator) commented Dec 13, 2023

Hi all!

This is a (possible) implementation of Bayesian SINDy. It uses numpyro under the hood to drive NUTS sampling.

  • It is implemented as a pysindy.optimizers module, SBR for Sparse Bayesian Regression.
  • Uses the regularised horseshoe prior outlined in Hirsh et al.
  • Parameters to the SBR optimizer include hyperparameters to the statistical model, i.e. tau_0, nu and s for the hyperprior and lamb for the exponential prior over the measurement noise.
  • Additional kwargs are fed forward to the numpyro.infer.MCMC object.
  • self.coef_ is set to the mean of the posterior samples.
  • Additional MCMC diagnostics can be invoked by inspecting the SINDy.optimizer.mcmc object using arviz.
  • New notebook 16_bayesian_sindy.ipynb demonstrates the method (a minimal usage sketch follows this list).
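
A minimal usage sketch, assuming the SBR interface described above (the toy data and sampler settings are illustrative, and the diagnostics attribute is spelled mcmc_ per later comments in this thread):

import numpy as np
import pysindy as ps
from pysindy.optimizers import SBR

# toy data from x' = -2x; the discovered model should be sparse
t = np.linspace(0, 5, 500)
x = 3 * np.exp(-2 * t).reshape(-1, 1)

# extra kwargs such as num_warmup/num_samples are forwarded to numpyro.infer.MCMC
model = ps.SINDy(optimizer=SBR(num_warmup=500, num_samples=1000))
model.fit(x, t=t)
model.print()

# MCMC diagnostics via arviz, e.g.:
# import arviz as az
# az.summary(az.from_numpyro(model.optimizer.mcmc_))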

This PR is a work in progress in that tests and docs are missing. But I thought it would be better to put it in early in the process, in case anyone has questions, ideas or comments with regard to the implementation. Please let me know if this is a bad idea.

Thanks for considering this PR.

TODO:

  • Documentation
  • Tests
  • Improve / polish notebook

@Jacob-Stevens-Haas (Collaborator) commented Dec 14, 2023

Thank you so much for this - I've been interested in getting Hirsh et al.'s spike-and-slab priors into pysindy for some time! And selfishly, I've wanted a reason to start interacting with jax. We implemented a bit of kernel smoothing from a paper of @jfelipeosorio in derivative, but we shied away from anything that required jax.

Giving this a quick look now (but not yet a full review), here are some engineering-oriented, less mathematical considerations:
Import

  • Add an [project.optional-dependencies] section for the dependencies
  • You noticed the import guard that we did in SR3 for the optional cvxpy dependency, including the PEP 690 comment. I just checked and it appears PEP 690 was rejected, so you can remove the comment. Someone (maybe me?) needs to do some reading on what the right way to guard import statements is; one common pattern is sketched after this list.
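
For what it's worth, a common way to guard an optional import looks like the sketch below (the extras group name is an assumption, and this is not necessarily the pattern pysindy will settle on):

try:
    import cvxpy  # optional; only some optimizers need it
except ImportError:
    cvxpy = None

def _check_cvxpy():
    # fail with a pointer to the optional-dependency group,
    # not a bare ImportError deep inside the optimizer
    if cvxpy is None:
        raise ImportError(
            "cvxpy is required for this optimizer; install it via the "
            "optional dependencies, e.g. pip install pysindy[cvxpy]"
        )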

Example

  • There already is an example 16 at this point
  • See the guidance on creating jupyter notebook examples. TL;DR: Maintaining compatibility with existing notebooks is tough. Obviously unit tests make this easier, but if you want to ensure the notebook itself stays runnable, we need to test it with this code in order for CI to care about it. You'd also need to generate the ipynb with examples/publish_notebook.py. (This part is pretty easy - save the notebook as a .py file, then run python publish_notebook.py XX_Bayesian_SINDy)
  • I like the example a lot - it shows "how to use this" rather than "here's all the different ways you can use this on a variety of systems". We have some people in the lab working on a "Bayesian" SINDy that would definitely appreciate this.
  • Include some LaTeX for the equations
  • I assume the rows of the final plot are terms in the discovered equation - could you label them?

Code

  • Can you add type annotations and docstrings?
  • What is the difference between SBR and SparseBayesianRegression? I can tell the latter is used in the former, but the former being just the abbreviation of the latter doesn't give any indication of the role that these objects play. Maybe rename SBR -> SBROptimizer, but you're probably in the best place to suggest names.
  • Best practice is not necessarily settled, but my stance on code variables that shadow math variables is: meaning_letter. E.g. if someone's doing L-1 regression, instead of naming a variable lambda, it would be reg_coef_lambda. It helps people who have read the paper, but not recently (me in this example), to still be able to troubleshoot errors. A quick illustration follows this list.
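
To illustrate that last point (values are arbitrary):

# shadowing the math symbol directly: opaque at the call site,
# and `lambda` itself is a reserved word in Python
lamb = 0.1

# meaning_letter style: readable, and still traceable to the paper
reg_coef_lambda = 0.1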

Ok I lied, some math

  • It's been a while since I read Hirsh, but is this mathematically compatible with EnsembleOptimizer, which subsamples the data and/or library, wraps any BaseOptimizer, and takes the mean/median of resulting models? My feeling is no, which indicates that our typing system needs to expand so as to prohibit this case.

@mikkelbue (Collaborator, Author):

Hi @Jacob-Stevens-Haas

Thanks for your comments and suggestions. I'm happy if this is useful!

I have made a few changes:

  • Added numpyro as an optional dependency.
  • Removed reference to PEP 690.
  • Renamed tau_0 -> sparsity_coef_tau0, nu -> slab_shape_nu, s -> slab_shape_s, lamb -> noise_hyper_lambda (see the sketch after this list).
  • Renumbered the notebook 16 -> 19
  • Moved the methods from SparseBayesianRegression into SBR. I set it up like that because I thought it was neat to have a general-purpose sparse Bayesian regressor, but it's simpler to just make everything part of the optimizer. I'm not precious about naming, just thought SBR would fit into the existing convention.
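
With those renames, constructing the optimizer would look roughly like this (the values are purely illustrative, not the package defaults):

from pysindy.optimizers import SBR

opt = SBR(
    sparsity_coef_tau0=0.1,  # global shrinkage scale, tau_0 in Hirsh et al.
    slab_shape_nu=4,         # slab degrees of freedom, nu
    slab_shape_s=2,          # slab scale, s
    noise_hyper_lambda=1.0,  # rate of the exponential prior on the noise
    num_warmup=1000,         # forwarded to numpyro.infer.MCMC
    num_samples=2000,
)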

I'll have a look at your other points when I have more time.

@mikkelbue (Collaborator, Author):

Also, just as a point of order:

This implementation uses the regularised horseshoe prior, not (yet) the spike and slab prior. The regularised horseshoe is computationally more convenient because it is differentiable, so you can just plug the whole shebang into a NUTS sampler.

The spike and slab prior AFAIK requires a compound step sampler or some other clever way of making categorical moves. It's something I can look into, but I am not currently sure of the best way to achieve this in numpyro.
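
For readers following along, here is a minimal sketch of a regularised-horseshoe regression in numpyro, following Piironen & Vehtari and Hirsh et al. It illustrates why NUTS applies directly; it is not the PR's exact code:

import jax.numpy as jnp
import numpyro
import numpyro.distributions as dist
from jax import random
from numpyro.infer import MCMC, NUTS

def reg_horseshoe(X, y=None, tau0=0.1, nu=4.0, s=2.0):
    n_features = X.shape[1]
    # local and global shrinkage scales
    lam = numpyro.sample("lambda", dist.HalfCauchy(jnp.ones(n_features)))
    tau = numpyro.sample("tau", dist.HalfCauchy(tau0))
    # slab with finite variance c^2 regularises the horseshoe's heavy tails
    c_sq = numpyro.sample("c_sq", dist.InverseGamma(nu / 2.0, nu * s**2 / 2.0))
    lam_tilde = jnp.sqrt(c_sq) * lam / jnp.sqrt(c_sq + tau**2 * lam**2)
    beta = numpyro.sample("beta", dist.Normal(0.0, tau * lam_tilde))
    sigma = numpyro.sample("sigma", dist.Exponential(1.0))
    # every sample site above is continuous and differentiable,
    # so the whole model can be handed to NUTS as-is
    numpyro.sample("obs", dist.Normal(X @ beta, sigma), obs=y)

# mcmc = MCMC(NUTS(reg_horseshoe), num_warmup=500, num_samples=1000)
# mcmc.run(random.PRNGKey(0), X, y)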

@Jacob-Stevens-Haas (Collaborator):

Oh, sorry, yes I remember now. I meant to say horseshoe prior. I didn't mean to point you in a different direction.

@mikkelbue (Collaborator, Author):

@Jacob-Stevens-Haas quick update. 2 new commits:

  1. Optimized the sampling of coefficients by vectorizing (seems obvious now, haha)
  2. Added docstrings with type hints.

Still looking to update the notebook with the stuff you requested.

@mikkelbue (Collaborator, Author):

@Jacob-Stevens-Haas Okay, I think this addresses all of your points. Please let me know if anything can be improved.

I'm not sure what to add in terms of tests. The notebook has a testing mode now so that should cover some of it.

@Jacob-Stevens-Haas (Collaborator):

Thank you! I'll go ahead and review it later this week. Before then, I'll figure out how to get it so your CI runs don't need manual approval. Sorry to keep you waiting.

For tests,

  1. Look at test_optimizers for tests that are parametrized to all optimizers
  2. Think about which ValueErrors (bad arguments) you want to check for (a sketch follows this list).
  3. Is there a simple very-low-dimensional (like a length 2, 3, or 4 vector) regression where it's easy to see how the math for horseshoe priors should work out? The three tests starting here are good examples.
  4. Are there any helper functions that you use that have a simple test? Here's one where we pulled out an indexing subroutine, and therefore could test it independently.
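
A sketch of what item 2 could look like; the argument names and the choice to validate them are assumptions:

import pytest
from pysindy.optimizers import SBR

@pytest.mark.parametrize("kwargs", [{"num_samples": -1}, {"num_warmup": -1}])
def test_sbr_bad_args(kwargs):
    # negative sample counts are meaningless and should be rejected early
    with pytest.raises(ValueError):
        SBR(**kwargs)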

@Jacob-Stevens-Haas (Collaborator):

Ok I added you as a contributor with "write" permission to the repo. You may need to click an invite link in an email, but that should enable CI to run automatically on your pushes.

@mikkelbue (Collaborator, Author):

@Jacob-Stevens-Haas Thank you!

The tests are failing when building the docs and I can't seem to figure out why. This is the trace:

sphinx.errors.SphinxWarning: autodoc: failed to import module 'sbr' from module 'pysindy.optimizers'; the following exception was raised:
No module named 'jax'

Warning, treated as error:
autodoc: failed to import module 'sbr' from module 'pysindy.optimizers'; the following exception was raised:
No module named 'jax'

I have already included jax as an optional dependency in the numpyro group, so I don't understand how this is different from the way cvxpy is set up for SINDyPI. But I must have missed something.

@Jacob-Stevens-Haas (Collaborator):

The GitHub workflow file creates a new environment and installs the current package, including all specified optional dependencies. You'd need to change this line to add your optional dependencies to the list.

codecov bot commented Jan 11, 2024

Codecov Report

Attention: 4 lines in your changes are missing coverage. Please review.

Comparison is base (9c73768) 94.40% compared to head (2060875) 94.40%.
Report is 1 commit behind head on master.

Files Patch % Lines
pysindy/__init__.py 50.00% 2 Missing ⚠️
pysindy/optimizers/__init__.py 50.00% 2 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##           master     #440      +/-   ##
==========================================
- Coverage   94.40%   94.40%   -0.01%     
==========================================
  Files          37       38       +1     
  Lines        3989     4060      +71     
==========================================
+ Hits         3766     3833      +67     
- Misses        223      227       +4     


@mikkelbue (Collaborator, Author):

> Ok I added you as a contributor with "write" permission to the repo. You may need to click an invite link in an email, but that should enable CI to run automatically on your pushes.

Ah, I just found the invitation. It was on the front page of this repo 🤔 .. thank you!

@Jacob-Stevens-Haas (Collaborator):

Hey sorry for slow responses on my end - I'm responsible for #451 and #459, which are both pretty complex. I'll finish #451 in a few days and then I'll give this a thorough review.

@mikkelbue (Collaborator, Author):

> test_pickle

This is now added by refactoring the existing test_pickle using @pytest.mark.parametrize (roughly as sketched below).
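
Roughly along these lines (a sketch: the real test uses the repository's fixtures and full optimizer list):

import pickle
import numpy as np
import pytest
from pysindy.optimizers import SBR, STLSQ

@pytest.mark.parametrize("opt", [STLSQ(), SBR(num_warmup=5, num_samples=5)])
def test_pickle(opt):
    x = np.random.default_rng(0).normal(size=(20, 3))
    y = x @ np.ones((3, 1))
    fitted = opt.fit(x, y)
    # coefficients must survive a pickle round trip unchanged
    round_trip = pickle.loads(pickle.dumps(fitted))
    np.testing.assert_array_equal(fitted.coef_, round_trip.coef_)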

> Something that directly tests the horseshoe prior (_sample_reg_horseshoe) in a very simple case

This is slightly problematic: if called outside a model context, the numpyro.sample functions would need to take an additional key argument to produce a sample. I can adapt the function to optionally take this, but that feels a little roundabout just to get a unit test set up. What do you think?

> Something that tests overall SBR in a very simple case

This is now done as test_sbr_fit. It just asserts whether SBR has an mcmc_ attribute after fitting and whether it contains samples of the correct variables.

@Jacob-Stevens-Haas (Collaborator):

> I can adapt the function to optionally take this, but that feels a little roundabout just to get a unit test set up. What do you think?

No, that makes sense. I always wonder what the right way to test things is (and thereby, to understand them), but having read more about the regularised horseshoe, I agree the roundabout work is not necessary here. I'll add some more info back into the docstrings in a suggested edit.

> This is now done as test_sbr_fit. It just asserts whether SBR has an mcmc_ attribute after fitting and whether it contains samples of the correct variables.

I meant an even simpler setup, but a more rigorous test, à la test_sparse_subset. Try this one below:

import numpy as np
from pysindy.optimizers import SBR

def test_sbr_fit():
    x = np.eye(2)
    y = np.array([[1], [1e-4]])
    expected = np.array([[1, 0]])
    result = SBR(num_warmup=10, num_samples=10).fit(x, y).coef_
    np.testing.assert_allclose(result, expected, atol=1e-1)

This is a good example of what it takes to get SBR to succeed in the simplest of cases, and I'm looking into why it's failing.

@Jacob-Stevens-Haas (Collaborator) left a comment:

All suggested docstring changes are based on looking into the parameters to get the test case to fit.

(4 suggested changes to pysindy/optimizers/sbr.py, since resolved)
@Jacob-Stevens-Haas (Collaborator) commented Feb 8, 2024

Ok, I've got it, but I have to run; will post it a bit later. x and y in my code need more samples - even dropping the noise prior near zero doesn't do it.

There were two problems with my test, and I had only solved one. The first problem was recovering a reasonable answer to a simple regression problem, and that required more samples. The second problem was trying to tune the parameters to demonstrate some shrinkage. But I can't figure out a way to tune this regression to eliminate a parameter (that's why I spent a lot of time on understanding the init parameters this past week). The best I can get is fitting coefficients with no shrinkage:

import numpy as np
from pysindy.optimizers import SBR

def test_sbr_accurate():
    # It's really hard to tune SBR to get desired shrinkage.
    # This just tests that SBR fits "close" to unregularized regression.
    x = np.tile(np.eye(2), 4).reshape((-1, 2))
    y = np.tile([[1], [1e-1]], 4).reshape((-1, 1))
    opt = SBR(num_warmup=50, num_samples=50).fit(x, y)
    result = opt.coef_
    unregularized = np.array([[1, 1e-1]])
    np.testing.assert_allclose(result, unregularized, atol=1e-3)
    assert hasattr(opt, "mcmc_")

That's probably good enough for now. In an ideal world, we could have a test that recreated the canonical "horseshoe" in Figure 2 of Handling Sparsity via the Horseshoe or Figure 1 of Sparsity Information and Regularization in the Horseshoe and Other Shrinkage Priors.

@Jacob-Stevens-Haas (Collaborator) commented Feb 11, 2024

So that's all for testing. For the notebooks,

  1. Combine them (just one notebook file) as example.ipynb
  2. Save as a python file
  3. cd examples, then python publish_notebook.py 19_bayesian_sindy.
  4. Run pytest --durations=10 --external_notebook 19_bayesian_sindy to see if it tests
    (or you can run pytest --durations=10 -m "notebook" -k test_notebook and see if it gets picked up)
  5. Set constants that determine how long the test takes with if __name__ == "testing": <short value> else: <full value> in order to speed up the test (see the sketch after this list). num_samples and num_warmup are good candidates, as is anything that controls the length of the simulation.
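
Item 5's pattern, sketched; the file path and the constant values are illustrative:

# near the top of examples/19_bayesian_sindy/example.py; the notebook test
# runner executes the example with __name__ set to "testing"
if __name__ == "testing":
    # keep CI fast; accuracy doesn't matter here
    num_warmup, num_samples, t_end = 10, 10, 1.0
else:
    num_warmup, num_samples, t_end = 1000, 2000, 10.0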

mikkelbue and others added 5 commits February 12, 2024, co-authored by Jacob Stevens-Haas.
@mikkelbue (Collaborator, Author):

@Jacob-Stevens-Haas Thank you for keeping up with this.

> The first problem was recovering a reasonable answer to a simple regression problem, and that required more samples.

This is why I avoided comparing to "theoretical" values in the test I originally wrote. MCMC will always require some samples to get a sensible answer, and I suspect that this test, with 50 warmup and 50 samples, has not yet converged. But if this works, that is great.

> But I can't figure out a way to tune this regression to eliminate a parameter (that's why I spent a lot of time on understanding the init parameters this past week).

Shrinkage does work as expected when running more elaborate examples, but parameters will hardly ever be exactly 0, since the .coef_ are the means of the MCMC samples. We could set a clipping threshold, as in e.g. STLSQ, if we wanted to zap the small coefficients completely (sketched below).
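
The clipping idea, sketched (the threshold value is arbitrary and this helper is not part of the PR):

import numpy as np

def clip_coefficients(coef_mean, threshold=0.1):
    # zero out posterior-mean coefficients below a hard threshold,
    # analogous to STLSQ's thresholding step
    clipped = np.asarray(coef_mean).copy()
    clipped[np.abs(clipped) < threshold] = 0.0
    return clipped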

I will have a look at converting the example to a script.

@mikkelbue (Collaborator, Author):

@Jacob-Stevens-Haas

> So that's all for testing. For the notebooks, ...

All done. However, pytest --durations=10 --external_notebook 19_bayesian_sindy fails with pytest: error: unrecognized arguments: --external_notebook, and I can't find any documentation for that argument.

@Jacob-Stevens-Haas (Collaborator) commented Feb 20, 2024

LGTM, and thank you for your dedication on this! I'll look into that pytest issue separately - pytest allows you to create command line arguments in conftest.py, which is what I did for --external-notebook. It existed before the split to example.py/example.ipynb, so it's probably not necessary anymore.

Looks like example 19 runs in 8.5 seconds under test which is great :)
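
For reference, custom pytest options are registered in conftest.py via the pytest_addoption hook, roughly like this (a generic sketch, not pysindy's actual conftest):

def pytest_addoption(parser):
    parser.addoption(
        "--external-notebook",
        action="store",
        default=None,
        help="path to a notebook to test outside the repository's examples",
    )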

@Jacob-Stevens-Haas Jacob-Stevens-Haas merged commit a0e36dd into dynamicslab:master Feb 20, 2024
4 of 6 checks passed