Should Travis test the (getting_started) example notebook(s)? It would be good to know that they always run, but due to the MultiNest normalization they take a while to compute.
It might also be useful to run the notebook each time the documentation is built; see https://nbconvert.readthedocs.io/en/latest/execute_api.html.
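As a sketch of how executing the notebook in CI could look, assuming the notebook lives at a hypothetical path `examples/getting_started.ipynb` and using nbconvert's command-line `--execute` mode (the CLI counterpart of the execute API linked above):

```yaml
# Hypothetical .travis.yml fragment: re-execute the notebook in place
# and fail the build if any cell raises an error.
install:
  - pip install jupyter nbconvert
script:
  - jupyter nbconvert --to notebook --execute
    --ExecutePreprocessor.timeout=600
    examples/getting_started.ipynb
```

The timeout would need tuning, since the MultiNest normalization makes the notebook slow.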
I see some pros and cons: if we execute the notebook on each build, we know that it runs, which is nice. However, it might produce unexpected results, and these are not automatically checked for. A new user following the example notebook would then assume that the results shown there are correct.
There is probably an option to check that the notebook's metadata has not changed. One would then be "forced" to run the example before uploading it and would be more likely to see/catch major changes.