
Releases: INM-6/networkunit

NetworkUnit 0.2.2

04 Jan 08:15
3a0e84e

Patch release updating the package dependencies and the example notebooks, and fixing a few minor bugs in the model classes.

NetworkUnit 0.2.1

21 Dec 10:53
773dda4

Patch release updating the package dependencies. This release also adopts the pyproject.toml convention for the package description.

NetworkUnit 0.2.0

29 Sep 11:32
9ebd2f5
  • parameter handling
    • generate_prediction() and other custom class functions no longer take optional extra parameters as arguments, but only use self.params
    • no class function should accept arguments that override class parameters
    • the default_params test class attribute is inherited by using default_params = {**parent.default_params, 'new_param':0}
  • caching
    • improved caching of intermediate test and simulation results, e.g. for the correlation matrix
    • improved backend definitions
  • parallelization
    • automatic parallelization of loops over spiketrains or lists of spiketrains. To use it, set params['parallel executor'] to ProcessPoolExecutor(), MPIPoolExecutor(), or MPICommExecutor() (see the documentation of the Elephant package)
  • various bug fixes
  • new features
    • added the joint_test class, which enables the combination of multiple neuron-wise tests for multidimensional testing with the Wasserstein score
  • new test classes
    • joint_test
    • power_spectrum_test
      • freqband_power_test
    • timescale_test
    • avg_std_correlation_test
  • new score classes
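
The parameter-handling convention above can be sketched with plain Python classes. The class names and parameters below are hypothetical and for illustration only, not part of the NetworkUnit API; only the dict-merging pattern itself is taken from the release notes:

```python
class correlation_test:
    # hypothetical parent test class; configuration lives in a
    # class-level default_params dict, and class functions read
    # settings from self.params instead of keyword arguments
    default_params = {'binsize': 2, 'maxlag': 100}

class new_correlation_test(correlation_test):
    # child classes inherit and extend the parent defaults by merging
    # the dict, following the documented pattern
    # default_params = {**parent.default_params, 'new_param': 0}
    default_params = {**correlation_test.default_params, 'new_param': 0}

print(new_correlation_test.default_params)
# {'binsize': 2, 'maxlag': 100, 'new_param': 0}
```

Merging the parent dict rather than redefining it keeps the child class in sync when new defaults are added to the parent.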

NetworkUnit 0.1.2

15 Apr 17:36

This patch contains:

  • a fix for an issue where the setup script failed to properly install the backend directory (see issue #20)

NetworkUnit 0.1.1

30 Aug 08:52

This patch contains:

  • a new backend class, which handles the storage of generated predictions in memory or on disk. To make use of it, set backend='storage' in the model instantiation. By default, predictions are stored in memory. To change that, set model.get_backend().use_disk_cache = True and model.get_backend().use_memory_cache = False.

  • various bug fixes

  • updated requirements.txt and environment.yaml
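
As a minimal sketch of how a storage backend with the two cache flags described above behaves, the following is an illustrative stand-in written against the standard library only, not NetworkUnit's actual backend implementation; the class name, method names, and keys are hypothetical:

```python
import os
import pickle
import tempfile

class StorageBackend:
    """Illustrative sketch: cache generated predictions in memory
    and/or on disk, controlled by two flags analogous to the
    use_memory_cache / use_disk_cache attributes in the release notes."""

    def __init__(self):
        self.use_memory_cache = True   # default: keep predictions in RAM
        self.use_disk_cache = False    # opt-in: persist predictions to disk
        self._memory = {}
        self._dir = tempfile.mkdtemp()

    def store(self, key, prediction):
        if self.use_memory_cache:
            self._memory[key] = prediction
        if self.use_disk_cache:
            with open(os.path.join(self._dir, key), 'wb') as f:
                pickle.dump(prediction, f)

    def load(self, key):
        if self.use_memory_cache and key in self._memory:
            return self._memory[key]
        if self.use_disk_cache:
            path = os.path.join(self._dir, key)
            if os.path.exists(path):
                with open(path, 'rb') as f:
                    return pickle.load(f)
        return None

backend = StorageBackend()
# switch to disk-only caching, mirroring the documented usage
# model.get_backend().use_disk_cache = True
# model.get_backend().use_memory_cache = False
backend.use_disk_cache = True
backend.use_memory_cache = False
backend.store('rates', [1.2, 3.4])
print(backend.load('rates'))  # [1.2, 3.4]
```

With both flags enabled a prediction is written to both caches, and the in-memory copy is preferred on lookup.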