Testing: test_models with dynamic loading #213

Open
caglorithm opened this issue Jun 20, 2022 · 1 comment
Labels: testing (Automated testing)
caglorithm commented Jun 20, 2022

Right now, we have a list of all models to test and a separate test function for each of them. This is kind of ugly.

Therefore, I think we should dynamically build the list of all models, import them programmatically, and then test them one by one.

Some code that can be useful for this:

import glob
import importlib
import sys
import inspect

models = [
    p.split("/")[-1]
    for p in glob.glob("../neurolib/models/*")
    if not (p.split("/")[-1].startswith("_") or p.split("/")[-1].endswith(".py"))
]
models.sort()
for model in models:
    i = importlib.import_module(f".models.{model}", "neurolib")
    # getmembers() returns members sorted by name, so [0] picks the
    # alphabetically first one -- here that happens to be the model class
    classname = inspect.getmembers(i)[0][0]
    importname = f"neurolib.models.{model}"
    # first line of the class docstring
    desc = inspect.getdoc(inspect.getmembers(i)[0][1]).split("\n")[0]
    print(f"Model: {model}\tClass: {classname}\n\t\tDescr: {desc}")
    print(f"\t\tImport: from {importname} import {classname}")

Outputs:

Model: aln	Class: ALNModel
		Descr: Multi-population mean-field model with excitatory and inhibitory neurons per population.
		Import: from neurolib.models.aln import ALNModel
Model: bold	Class: BOLDModel
		Descr: Balloon-Windkessel BOLD simulator class.
		Import: from neurolib.models.bold import BOLDModel
Model: fhn	Class: FHNModel
		Descr: Fitz-Hugh Nagumo oscillator.
		Import: from neurolib.models.fhn import FHNModel
Model: hopf	Class: HopfModel
		Descr: Stuart-Landau model with Hopf bifurcation.
		Import: from neurolib.models.hopf import HopfModel
Model: multimodel	Class: ALNNetwork
		Descr: Whole brain network of adaptive exponential integrate-and-fire mean-field
		Import: from neurolib.models.multimodel import ALNNetwork
Model: thalamus	Class: ThalamicMassModel
		Descr: Two population thalamic model
		Import: from neurolib.models.thalamus import ThalamicMassModel
Model: wc	Class: WCModel
		Descr: The two-population Wilson-Cowan model
		Import: from neurolib.models.wc import WCModel
Model: ww	Class: WWModel
		Descr: Wong-Wang model. Original version and reduced version.
		Import: from neurolib.models.ww import WWModel
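The listing above could feed a parametrized test directly. A minimal sketch of that idea using pkgutil-based discovery instead of glob paths, assuming neurolib is installed; collect_model_classes is a hypothetical helper, not neurolib API:

```python
import importlib
import inspect
import pkgutil

def collect_model_classes(package_name):
    """Yield (submodule_name, class) pairs for every class visible in
    each public submodule of `package_name`."""
    package = importlib.import_module(package_name)
    for info in pkgutil.iter_modules(package.__path__):
        if info.name.startswith("_"):
            continue
        module = importlib.import_module(f"{package_name}.{info.name}")
        # note: this also yields classes imported into the submodule, so a
        # real test would filter, e.g. by obj.__module__ or a naming convention
        for _, obj in inspect.getmembers(module, inspect.isclass):
            yield info.name, obj

# With neurolib installed, this could drive pytest directly (sketch):
# import pytest
# @pytest.mark.parametrize("name,cls", collect_model_classes("neurolib.models"))
# def test_model_runs(name, cls):
#     cls().run()
```

Using pkgutil avoids the fragile relative path and string splitting in the glob version, and works regardless of where the tests are run from.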
caglorithm added the testing (Automated testing) label and removed the enhancement (Enhancing neurolib) label on Jun 20, 2022
jajcayn commented Feb 20, 2023

Btw, personally, I do not think this is necessary. If we want to simplify testing, I would rather do something like this:

import unittest

from neurolib.models.aln import ALNModel
from neurolib.models.hopf import HopfModel


class NativeTestCase(unittest.TestCase):
    model = None

    def test_single_node(self):
        # the parent class has model = None, so its tests are a no-op
        if self.model is None:
            return
        self.model.run()
        ...

    def test_network(self):
        if self.model is None:
            return
        self.model.run()
        ...


class TestAln(NativeTestCase):
    model = ALNModel()


class TestHopf(NativeTestCase):
    model = HopfModel()

    def test_specific(self):
        # optional model-specific test
        return 42

...

Yes, we would still need to add tests for each (new) model manually, but all the logic lives in one class, so adding a test is a matter of two lines... The same logic can be done for MultiModel.
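One variation on this pattern, not from the thread: if the shared logic lives in a plain mixin rather than a TestCase subclass, unittest never collects the base class on its own, so the `if self.model is None: return` guard becomes unnecessary. A minimal sketch with a stand-in model (DummyModel is illustrative; the real classes, e.g. ALNModel, live in neurolib):

```python
import unittest


class NativeTestMixin:
    """Shared test logic; deliberately NOT a unittest.TestCase, so the
    test runner never collects these methods on the base class itself."""

    model = None  # each concrete subclass sets a real model instance

    def test_single_node(self):
        self.model.run()
        # real assertions on the model output would go here


class DummyModel:
    """Illustrative stand-in with the same run() interface."""

    def run(self):
        return None


class TestDummy(NativeTestMixin, unittest.TestCase):
    model = DummyModel()
```

The trade-off is one extra base class in the subclass list, in exchange for dropping the guard from every shared test method.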
