DOC improve organisation #1147

Open · wants to merge 5 commits into main

Conversation

@tomMoral (Contributor) commented Apr 22, 2024

Related to doc improvements described in #1008

codecov bot commented Apr 22, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 74.24%. Comparing base (0b5f931) to head (867355a).
Report is 8 commits behind head on main.

Additional details and impacted files
```
@@             Coverage Diff             @@
##             main    #1147       +/-   ##
===========================================
- Coverage   85.13%   74.24%   -10.89%
===========================================
  Files          90       90
  Lines        6651     6931      +280
===========================================
- Hits         5662     5146      -516
- Misses        989     1785      +796
```
| Flag | Coverage Δ |
| --- | --- |
| unittests | 74.24% <ø> (-10.89%) ⬇️ |

Flags with carried forward coverage won't be shown.

see 49 files with indirect coverage changes

@janfb (Contributor) commented May 7, 2024

Note: for building the docs site locally I needed to run the following commands in an empty conda env with python=3.10:

```
pip install -e ".[doc,dev]"
pip install jupyter_contrib_nbextensions
pip install --upgrade notebook==6.4.12
pip uninstall traitlets
pip install traitlets==5.9.0
pip uninstall ipython
pip install ipython==8.9.0

...

mkdocs serve
```

I believe this is due to our custom Jupyter notebook extensions (e.g., jupyter_contrib_nbextensions) not being compatible with the new notebook 7 release from last year. As a consequence, we need to downgrade notebook, traitlets and ipython.

@janfb (Contributor) left a comment

this is great, thanks a lot!

I added some comments and pushed 2 commits implementing some of my suggestions.

Comment on lines -219 to -225

Note that the tutorials and examples are initially written in jupyter notebooks
and then converted to markdown programatically. To do so locally, you should run
```
jupyter nbconvert --to markdown ../tutorials/*.ipynb --output-dir docs/tutorial/
jupyter nbconvert --to markdown ../examples/*.ipynb --output-dir docs/examples/
```
Contributor
Is this now happening automatically when running mkdocs serve?

Contributor Author

Yes, with the mkdocs-jupyter extension.
However, this makes the rendering a bit slower with mkdocs serve, since the conversion is run each time you update any file. I will see if it is possible to cache this.
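
For reference, a minimal sketch of how this can be wired up in mkdocs.yml, assuming the mkdocs-jupyter plugin is used; the options shown here are illustrative, not the exact configuration in this PR:

```yaml
plugins:
  - search
  - mkdocs-jupyter:
      # render notebooks directly, replacing the manual nbconvert step
      include: ["*.ipynb"]
      # do not re-execute notebooks on every build; just render stored outputs
      execute: false
```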

@janfb (Contributor) commented May 7, 2024

I really like the update for the landing page!
I will make a separate commit with some refactoring. Happy to get your comments on those changes as well!

Comment on lines +8 to +10
- Getting started: tutorials/00_getting_started_flexible.ipynb
- Amortized inference: tutorials/01_gaussian_amortized.ipynb
- Implemented algorithms: tutorials/16_implemented_methods.md
@janfb (Contributor) commented May 7, 2024

Why do some entries have the notebook file ending (.ipynb) and others .md? I guess because the conversion is happening automatically now?

Contributor Author

Yes, I am not sure either which one is right; I will check this.
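
For context, a sketch of how the two file types are typically handled when mkdocs-jupyter is enabled (an illustration, not the exact nav from this PR): .ipynb entries are converted and rendered by the plugin, while .md entries are served as regular Markdown pages, so both can coexist in the nav.

```yaml
nav:
  # rendered by the mkdocs-jupyter plugin directly from the notebooks
  - Getting started: tutorials/00_getting_started_flexible.ipynb
  - Amortized inference: tutorials/01_gaussian_amortized.ipynb
  # a plain Markdown page, handled by mkdocs itself
  - Implemented algorithms: tutorials/16_implemented_methods.md
```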

Comment on lines +104 to +108
```python
inference = SNPE(prior=prior)
_ = inference.append_simulations(theta, x).train()
posterior = inference.build_posterior()
```
Contributor

Not sure we still need this once we replace the GIF above with actual code that looks very similar to this snippet.

Contributor Author

I think this showcases the two different APIs, so I would keep it.
But it is true that it is somewhat redundant.

Comment on lines 6 to 18
```python
# simulation
theta = prior.sample((1000,))
x = simulator(theta)

# training
inference = SNPE(prior).append_simulations(theta, x)
inference.train()

# inference
posterior = inference.build_posterior()
posterior_samples = posterior.sample((1000,), x=x_o)
```
Contributor

I think it's time to remove the GIF. It just changes too fast, and I believe it's not really helpful. I suggest replacing it with an actual code snippet.

Contributor

+1

Contributor Author

I like the idea of an actual code snippet! I think it helps onboarding a lot.
I suggest adding a dummy prior and simulator so the code is actually runnable; that helps a lot with understanding the basic concepts of the toolkit.

@tomMoral (Contributor Author) left a comment

A few more comments; I will take another look at the mkdocs-jupyter extension in the afternoon.


Comment on lines 7 to 17
```python
# simulation
theta = prior.sample((1000,))
x = simulator(theta)

# training
inference = SNPE(prior).append_simulations(theta, x)
inference.train()

# inference
posterior = inference.build_posterior()
posterior_samples = posterior.sample((1000,), x=x_o)
```
Contributor Author

Suggested change:

```python
import torch

from sbi.inference import SNPE
from sbi.utils import BoxUniform

# define a dummy prior and simulator so the snippet is runnable
prior = BoxUniform(torch.tensor([0.0]), torch.tensor([1.0]))
simulator = lambda theta: theta + 1

# simulation
theta = prior.sample((1000,))
x = simulator(theta)

# training
inference = SNPE(prior).append_simulations(theta, x)
inference.train()

# inference
x_o = torch.tensor([1.5])
posterior = inference.build_posterior()
posterior_samples = posterior.sample((1000,), x=x_o)
```



- [Inference](inference.md)
<br/>
*XXX*
Contributor Author

I was not sure what to put here; if you have a suggestion, that would be perfect.


3 participants