Test for visualisations #147

Open
wingedRuslan opened this issue Aug 6, 2019 · 1 comment
Comments

@wingedRuslan
Collaborator

Heya,

I would like to note here how to test visualisations.

After some googling (1), (2), I've decided that a good solution would be to use nbval, a py.test plugin for validating Jupyter notebooks.

The plugin adds functionality to py.test so that it can recognise and collect Jupyter notebooks. The intended purpose of the tests is to determine whether executing the stored inputs of the .ipynb file reproduces the stored outputs, while also ensuring that the notebooks run without errors.

The tests were designed to ensure that Jupyter notebooks (especially those for reference and documentation) execute consistently.

Comparing each cell's output against the one stored in the notebook would cause all cells to fail, because the stored output contains the memory address where the figure object lives (obviously a unique value, so it can't be compared).

But we can make sure that the notebooks (with visualisations) are running without errors.

The only drawback I see is adding a new package requirement: nbval.
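For reference, here is roughly how the nbval run would look (a sketch, assuming the notebook is named visualisations_tutorial.ipynb as in the alternative test below):

```shell
pip install nbval

# --nbval-lax executes every cell and fails on errors, but skips the
# cell-by-cell output comparison, which sidesteps the memory-address
# problem described above.
pytest --nbval-lax visualisations_tutorial.ipynb
```

The stricter `--nbval` mode would also compare stored outputs, which is exactly what breaks for matplotlib figures, so the lax mode is the relevant one here.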

The alternative solution to consider is mentioned here. The idea is to add the following test:

Alternative test
import subprocess
import tempfile


def _exec_notebook(path):
    # Execute the notebook with nbconvert, writing the result to a
    # throwaway temporary file; check_call raises CalledProcessError
    # if any cell errors out, which fails the test.
    with tempfile.NamedTemporaryFile(suffix=".ipynb") as fout:
        args = ["jupyter", "nbconvert", "--to", "notebook", "--execute",
                "--ExecutePreprocessor.timeout=1000",
                "--output", fout.name, path]
        subprocess.check_call(args)


def test():
    _exec_notebook('visualisations_tutorial.ipynb')

I prefer the first solution: using the py.test plugin.

By the way, do we care about the time Travis needs to run the tests? In both cases I will create a Jupyter notebook with visualisations, but I can't figure out how long this notebook should be. Should I include every possible way of calling each visualisation function, or would a few different calls (around 5-7) of every function be sufficient?

Sources:

  1. https://discourse.jupyter.org/t/testing-notebooks/701/10
  2. http://www.blog.pythonlibrary.org/2018/10/16/testing-jupyter-notebooks/
@wingedRuslan
Collaborator Author

Kirstie's suggestion during the meeting:

Here's a project that does the same thing - maybe you can do the same thing? bids-standard/pybids#461

Actually, that's the same as the alternative solution I mentioned in the issue.

It was agreed to use the first solution for scona, after some discussion of the pros and cons.
