
Create test cases to run examples in tutorials/pytorch tutorials/sklearn #1085

Open
2 tasks
srcansiz opened this issue Apr 8, 2024 · 3 comments

srcansiz commented Apr 8, 2024

  • Create a separate script (one that can be executed independently from the regular end2end tests) to execute and test the notebooks in the documentation.
  • This issue covers writing test cases only for tutorials/pytorch and tutorials/sklearn.

IMPORTANT:

Some notebooks require large datasets such as CelebA. Those notebooks can be skipped if the dataset is not present locally (see the sketch below).
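A minimal sketch of what such a skip could look like, assuming the notebooks are executed with nbclient; the dataset location, helper name, and notebook path are all hypothetical:

import os

import nbformat
import pytest
from nbclient import NotebookClient

# Hypothetical location of the large local datasets
DATA_DIR = os.environ.get("FEDBIOMED_TEST_DATA", "./data")

def execute_notebook(path: str, timeout: int = 600) -> None:
    # Runs every cell of the notebook; raises on the first failing cell
    nb = nbformat.read(path, as_version=4)
    NotebookClient(nb, timeout=timeout).execute()

@pytest.mark.skipif(
    not os.path.isdir(os.path.join(DATA_DIR, "celeba")),
    reason="CelebA dataset not available locally",
)
def test_execute_celeba_notebook():
    execute_notebook("docs/tutorials/pytorch/celeba.ipynb")  # hypothetical path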

@srcansiz srcansiz added sprint backlog the development team adds an entry to the sprint backlog to do issue not started yet (but intention to start soon) labels Apr 8, 2024
@srcansiz srcansiz changed the title Create sperate test script to run examples in documentation Create test cases to run examples in documentation Apr 11, 2024
srcansiz (Member, Author) commented:

Please find the details of this issue below:

  • Create end2end test scripts like the ones in test/end2end/e2e_*.py.
  • In the e2e test script, create pytest fixtures to add components and datasets. This is already implemented in other e2e files.
  • Write a test function that executes the tutorial notebooks. The test should only verify that each notebook is executable.

Suggestion:
You can use the same fixture (nodes) to add other datasets without recreating the nodes:

@pytest.fixture(scope="module")
def setup(request):
    # Creates the nodes and starts them (details elided)
    return node_1, node_2, node_3

def test_execute_pytorch_mnist_notebook(setup):
    node_1, node_2, node_3 = setup
    dataset = {...}  # the dict that defines the dataset (dataset JSON)

    add_dataset(node_1, dataset)
    add_dataset(node_2, dataset)
    add_dataset(node_3, dataset)

    try:
        execute_script('...notebooks...')
    except Exception:
        pytest.fail('Notebook execution failed')

Note: All notebooks can be executed in a single test file.
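One way to keep everything in a single file is to parametrize one test over all notebook paths. This sketch assumes the execute_script helper and the setup fixture from the snippet above; the glob patterns are only a guess at the documentation layout:

import glob

import pytest

# Hypothetical layout of the tutorial notebooks in the repository
NOTEBOOKS = sorted(
    glob.glob("docs/tutorials/pytorch/*.ipynb")
    + glob.glob("docs/tutorials/sklearn/*.ipynb")
)

@pytest.mark.parametrize("notebook", NOTEBOOKS)
def test_execute_tutorial_notebook(setup, notebook):
    try:
        execute_script(notebook)
    except Exception:
        pytest.fail(f"Notebook {notebook} failed to execute")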

@srcansiz srcansiz changed the title Create test cases to run examples in documentation Create test cases to run examples in tutorials/pytorch tutorials/sklearn Apr 11, 2024
srcansiz (Member, Author) commented:

Notebooks: the pytorch used cars example and the sklearn perceptron example contain dataset preparation at the beginning of the notebook. Those notebooks may require modification to make them compatible with the e2e test machinery. Therefore, we can skip them in the scope of this issue.

@ybouilla ybouilla added doing issue implementation in progress and removed to do issue not started yet (but intention to start soon) labels Apr 11, 2024
@ybouilla ybouilla self-assigned this Apr 11, 2024
ybouilla (Contributor) commented:

> Notebooks: the pytorch used cars example and the sklearn perceptron example contain dataset preparation at the beginning of the notebook. Those notebooks may require modification to make them compatible with the e2e test machinery. Therefore, we can skip them in the scope of this issue.

This will be handled in a separate issue

@ybouilla ybouilla added done issue is completed, it meets the DoD and was merged to the next release integration branch and removed doing issue implementation in progress labels May 21, 2024