Releases: lava-nc/lava

Lava 0.9.0

15 Nov 05:37

Lava v0.9.0 Release Notes

November 9, 2023

What's Changed

New Features and Improvements

  • Added the VarWire process in lava.proc.io.injector. It works like Injector, but uses RefPorts.
  • Added a watchdog that monitors a port to observe whether it is blocked.
  • Added the GradedVec process, a graded spike vector layer that transmits accumulated input as graded spikes without dynamics (see the sketch after this list).
  • Added the ProdNeuron process, which multiplies two graded inputs and outputs the result as graded spikes.
  • Added the RFZero process, a resonate-and-fire neuron that spikes on threshold and zero-phase crossing.
  • Added the BitCheck process, which allows quickly checking bit-accurate process Vars for hardware overflow.
  • Added support for multi-instance compilation through the compile option {'folded_view': ['templateName']}.
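
As an illustration, here is a minimal sketch of a graded spike pathway driving a GradedVec layer through a Dense connection; the module path lava.proc.graded.process and the shape/vth arguments are assumptions based on the current process library rather than a verbatim excerpt from this release.

```python
import numpy as np

# Module paths and constructor arguments below are assumptions based on the
# public Lava process library; adjust them to your installed version.
from lava.proc.dense.process import Dense
from lava.proc.graded.process import GradedVec

weights = 2 * np.eye(4)                # simple diagonal weight matrix
dense = Dense(weights=weights)         # accumulates weighted input spikes
graded = GradedVec(shape=(4,), vth=1)  # relays accumulated input as graded spikes, no dynamics

# Dense activations feed the graded vector layer.
dense.a_out.connect(graded.a_in)
```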

Bug Fixes and Other Changes

  • Fixed a buffer issue in the synaptic delay implementation.
  • Added support for using NumPy arrays as input weights for the Sparse connection process (sketched below).
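
A minimal sketch of the new NumPy-weight support for Sparse; the module path is an assumption, and this comes alongside the existing scipy.sparse support.

```python
import numpy as np

# Module path assumed from the public Lava process library.
from lava.proc.sparse.process import Sparse

# A mostly-zero weight matrix passed directly as a NumPy array.
weights = np.zeros((8, 8))
weights[0, 3] = 0.5
weights[5, 1] = -0.25

sparse = Sparse(weights=weights)
```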

Breaking Changes

  • No known breaking changes in this release.

Known Issues

  • No known issues in this release.

Full Changelog: v0.8.0...v0.9.0

Lava 0.8.0

25 Jul 07:12

Full Changelog: v0.7.0...v0.8.0

Lava 0.7.0

22 Apr 07:49

Full Changelog: v0.6.0...v0.7.0

Lava 0.6.0

14 Dec 23:46

Lava v0.6.0 Release Notes

December 14, 2022

New Features and Improvements

  • Enabled two-factor learning on Loihi 2 and in Lava simulation with the LearningLIF and LearningLIFFloat processes. (PR #528, PR #535)
  • Resonate-and-fire and resonate-and-fire Izhikevich neurons are now available in Lava simulation. (PR #378)
  • New tutorial on sigma-delta networks in Lava. (PR #470)
  • Enabled state probes for Loihi 2 and added an in-depth tutorial (lava-loihi extension).

Bug Fixes and Other Changes

  • RF neurons with variable periods now work. (PR #487)
  • Older CI runs of a PR are now automatically cancelled when a newer run is started by a push. (PR #488)
  • Improved the learning API and related tutorials and tests, and fixed a bug in the Loihi STDP implementation. (PR #500)
  • Generalized the pre- and post-hooks into the runtime service. (PR #521)
  • Improved RSTDP learning tutorial. (PR #536)

Breaking Changes

  • No breaking changes in this release.

Known Issues

  • Direct channel connections between Processes using a PyProcessModel and NcProcessModel are not supported.
  • Channel communication between PyProcessModels is slow.
  • Lava networks throw errors if run is invoked too many times due to a leak in shared memory descriptors in the CPython implementation.
  • Virtual ports are only supported between Processes using PyProcModels, and between Processes using NcProcModels. Virtual ports are not supported when Processes with CProcModels are involved or between pairs of Processes that have different types of ProcModels. In addition, VirtualPorts do not support concatenation yet.
  • Joining and forking of virtual ports is not supported.
  • The Monitor Process only supports probing a single Var per Process implemented via a PyProcessModel. Probing states on Loihi 2 is currently available using StateProbes (tutorial available in lava-loihi extension).
  • Some modules, classes, or functions lack proper docstrings and type annotations. Please raise an issue on the GitHub issue tracker in such a case.

Full Changelog: v0.5.1...v0.6.0

Lava 0.5.1

31 Oct 10:44

Lava v0.5.1 Release Notes

October 31, 2022

New Features and Improvements

  • Lava now supports LIF reset models on the CPU backend. (PR #415)
  • Lava now supports three-factor learning rules. This release introduces a base class for plastic neurons as well as a differentiation between Loihi2FLearningRule and Loihi3FLearningRule (see the sketch after this list). (PR #400)
  • A new tutorial shows how to implement and use a three-factor learning rule in Lava, with reward-modulated STDP as an example. (PR #400)
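
As a rough illustration of the new API, a sketch of instantiating a three-factor learning rule; the import path, argument names, and the toy dw expression are assumptions based on the learning API introduced here, not verbatim from the release (see the RSTDP tutorial for the exact usage).

```python
# Import path and argument names are assumptions; check the RSTDP tutorial
# for the exact API shipped with this release.
from lava.magma.core.learning.learning_rule import Loihi3FLearningRule

# Hypothetical weight-update expression: a pre-synaptic trace (x1) gated by a
# post-synaptic trace (y1); in a three-factor rule a third, e.g. reward-driven,
# trace additionally modulates the update.
rule = Loihi3FLearningRule(
    dw="u0 * x1 * y1",  # toy rule string for illustration only
    x1_impulse=16,
    x1_tau=10,
    y1_impulse=16,
    y1_tau=10,
    t_epoch=2,
)
```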

Bug Fixes and Other Changes

  • Fixed a bug in network compilation for branching/forking of C and Nc ProcessModels. (PR #391)
  • Fixed a bug to support connectivity from multiple CPorts to PyPorts in a single ProcessModel. (PR #391)
  • Fixed issues with the uk conditional in the learning engine. (PR #400)
  • Fixed the explicit ordering of subcompilers in the compilation stack (C-first, Nc-second heuristic). (PR #408)
  • Fixed the incorrect use of np.logical_and and np.logical_or discovered in learning-related code in Connection ProcessModels. (PR #412)
  • Fixed a warning in Compiler process model discovery and selection due to importing sub process model classes. (PR #418)
  • Fixed a bug in Compiler to select correct CProcessModel based on tag specified in run config. (PR #421)
  • Disabled overwriting of user set environment variables in systems.Loihi2. (PR #428)
  • ProcessModel selection now works in Jupyter/Colab environments. (PR #435)
  • Added instructions for downloading the dataset for the MNIST tutorial. (PR #439)
  • Fixed a bug in run config with respect to initializing pre- and post-execution hooks during multiple runs. (PR #440)
  • Added an interface for Lava profiler to enable future implementations on different hardware or chip generations. (PR #444)
  • Updated PyTest and NBConvert dependencies to newer versions in poetry for installation. (PR #447)

Breaking Changes

  • QUBO-related processes and process models have moved to lava-optimization. (PR #449)

Known Issues

  • Direct channel connections between Processes using a PyProcessModel and NcProcessModel are not supported.
  • Channel communication between PyProcessModels is slow.
  • Lava networks throw errors if run is invoked too many times due to a leak in shared memory descriptors in CPython implementation.
  • Virtual ports are only supported between Processes using PyProcModels, and between Processes using NcProcModels. Virtual ports are not supported when Processes with CProcModels are involved or between pairs of Processes that have different types of ProcModels. In addition, VirtualPorts do not support concatenation yet.
  • Joining and forking of virtual ports is not supported.
  • The Monitor Process only supports probing a single Var per Process implemented via a PyProcessModel. The Monitor Process does not support probing Vars on Loihi NeuroCores.
  • Some modules, classes, or functions lack proper docstrings and type annotations. Please raise an issue on the GitHub issue tracker in such a case.

Lava 0.5.0

29 Sep 00:51

The release of Lava v0.5.0 includes major updates to the Lava Deep Learning (Lava-DL) and Lava Optimization (Lava-Optim) libraries and offers the first update to the core Lava framework following the first release of the Lava extension for Loihi in July 2022.

  • Lava offers a new learning API on CPU based on the Loihi on-chip learning engine. In addition, various functional and performance issues have been fixed since the last release.
  • Several high-level application tutorials on QUBO (maximum independent set), deep learning (PilotNet, Oxford Radcliffe spike training), 2-factor STDP-based learning, and the design of an E/I network model, as well as comprehensive API reference documentation, make this version more accessible to new and experienced users.

New Features and Improvements

  • Added support for convolutional neural networks (lava-nc PR #344, lava-loihi PR #343).
    • Added NcL2ModelConv ProcessModel supporting Loihi 2 convolutional connection sharing (lava-loihi PR #343).
    • Added NcL1ModelConvAsSparse ProcessModel supporting convolutional connections implemented as sparse connections (compatible with both Loihi 1 and Loihi 2).
    • Added the ability to represent inferred convolution connections as shared connections to and from the Loihi 2 convolution synapse (lava-loihi PR #343).
    • Added a Convolution Manager to manage resource allocation for the Loihi 2 convolution feature (lava-loihi PR #343).
    • Added a convolution connection strategy to partition convolution layers onto Loihi 2 neurocores (lava-loihi PR #343).
    • Added support for convolution spike generation (lava-loihi PR #343).
    • Added convolution-specific VarModels (ConvNeuronVarModel and ConvInVarModel) for interacting with Loihi 2 convolution-configured neurons as well as Loihi 2 convolution input from a C process.
    • Added embedded IO processes and C-models to bridge the interaction between Python and Loihi 2 processes in the form of spikes as well as state read/write, including convolution-specific support (lava-nc PR #344, lava-loihi PR #343).
    • Added support for compressed message passing from Python to Loihi 2 using Loihi 2’s embedded processors (lava-nc PR #344, lava-loihi PR #343).
  • Added support for resource cost sharing on Loihi 2 to allow for flexible memory allocation in a neurocore (lava-loihi PR #343).
  • Added support for sharing axon instructions for output spike generation from a Loihi 2 neurocore (lava-loihi PR #287).
  • Added support for learning in simulation (CPU) according to Loihi’s learning engine (PR #332):
    • The STDPLoihi class is a 2-factor STDP learning algorithm added to the Lava Process Library, based on the Loihi learning engine.
    • The LoihiLearningRule class provides the ability to create custom learning rules based on the Loihi learning engine.
    • Implemented a LearningDense Process, which takes the same arguments as Dense plus an optional LearningRule argument to enable learning in its ProcessModels (see the sketch after this list).
    • Implemented floating-point and bit-approximate PyLoihi ProcessModels, named PyLearningDenseModelFloat and PyLearningDenseModelBitApproximate, respectively.
    • Also implemented a bit-accurate PyLoihi ProcessModel named PyLearningDenseModelBitAcc.
    • Added a tutorial to show the usage of STDPLoihi and how to create custom learning rules.
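
A minimal sketch of a plastic connection between two LIF populations using STDPLoihi and LearningDense; the import paths, parameter values, and the s_in_bap port name are assumptions based on the public API and the STDP tutorial, not a verbatim excerpt.

```python
import numpy as np

# Import paths are assumptions based on the public Lava process library.
from lava.proc.lif.process import LIF
from lava.proc.dense.process import LearningDense
from lava.proc.learning_rules.stdp_learning_rule import STDPLoihi

# Two-factor STDP rule built on the Loihi learning engine (values are placeholders).
stdp = STDPLoihi(
    learning_rate=1,
    A_plus=1,
    A_minus=-1,
    tau_plus=10,
    tau_minus=10,
    t_epoch=4,
)

pre = LIF(shape=(4,), du=0.1, dv=0.1, vth=1.0)
post = LIF(shape=(4,), du=0.1, dv=0.1, vth=1.0)

# LearningDense takes the same arguments as Dense plus a learning rule.
plastic = LearningDense(weights=np.zeros((4, 4)), learning_rule=stdp)

pre.s_out.connect(plastic.s_in)
plastic.a_out.connect(post.a_in)
# Post-synaptic spikes are fed back to the plastic connection so the rule can
# see them (port name assumed from the learning tutorial).
post.s_out.connect(plastic.s_in_bap)
```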

Bug Fixes and Other Changes

  • The fixed-point PyProcessModel of the Dense Process now has the same behavior as the NcProcessModel for Loihi 2 (PR #328)
  • The Dense NcProcModel now correctly represents purely inhibitory weight matrices on Loihi 2 (PR #376).
  • The neuron current overflow behavior of the fixed-point LIF model was fixed so that the neuron current wraps to the opposite side of the integer range rather than to 0. (PR #364)

Breaking Changes

  • Function signatures of node allocate() methods in Net-API have been updated to use explicit arguments. In addition, some function argument names have been changed to abstract away Loihi register details.
  • Removed bit-level parameters and Vars from Dense Process API.

Known Issues

  • Only one instance of a Process targeting an embedded processor (using CProcessModel) can currently be created. Creating multiple instances in a network results in an error. As a workaround, the behavior of multiple Processes can be fused into a single CProcessModel.
  • Direct channel connections between Processes using a PyProcessModel and NcProcessModel are not supported.
  • Channel communication between PyProcessModels is slow.
  • The Lava Compiler is still inefficient and in need of improvement to performance and memory utilization.
  • Virtual ports are only supported between Processes using PyProcModels, and between Processes using NcProcModels. Virtual ports are not supported when Processes with CProcModels are involved or between pairs of Processes that have different types of ProcModels. In addition, VirtualPorts do not support concatenation yet.
  • Joining and forking of virtual ports is not supported.
  • The Monitor Process currently only supports probing a single Var per Process implemented via a PyProcessModel. It does not currently support probing Vars mapped to NeuroCores.
  • Some modules, classes, or functions lack proper docstrings and type annotations. Please raise an issue on the GitHub issue tracker in such a case.
  • The learning API does not support 3-factor learning rules yet.

Thanks to our Contributors

  • Intel Corporation: All contributing members of the Intel Neuromorphic Computing Lab


Lava 0.4.0

13 Jul 17:46

The release of Lava v0.4.0 brings initial support to compile and run models on Loihi 2 via Intel’s cloud hosted Oheo Gulch and Kapoho Point systems. In addition, new tutorials and documentation explain how to build Lava Processes written in Python or C for CPU and Loihi backends.

While this release offers few high-level application examples, Lava v0.4.0 provides major enhancements to the overall Lava architecture. It forms the basis for the open-source community to enable the full Loihi feature set, such as on-chip learning, convolutional connectivity, or accelerated spike IO. The Lava Compiler and Runtime architecture has also been generalized allowing extension to other backends or neuromorphic processors. Subsequent releases will improve compiler performance and provide more in-depth documentation as well as several high-level coding examples for Loihi, such as real-world applications spanning multiple chips.

The public Lava GitHub repository (https://github.com/lava-nc/lava) continues to provide all the features necessary to run Lava applications on a CPU backend. In addition, it now also includes enhancements to enable Intel Loihi support. To run Lava applications on Loihi, users need to install the proprietary Lava extension for Loihi. This extension contains the Loihi-compatible Compiler and Runtime features as well as additional tutorials. While this extension is currently released as a tar file, it will be made available as a private GitHub repo in the future.
Please help us fix any problems you encounter with the release by filing an issue on GitHub for the public code or sending a ticket to the team for the Lava extension for Loihi.

New Features and Improvements

Features marked with * are available as part of the Loihi 2 extension available to INRC members.

  • *Extended Process library including new ProcessModels and additional improvements:
    • LIF, Sigma-Delta, and Dense Processes execute on Loihi NeuroCores.
    • Prototype Convolutional Process added.
    • Spikes can be sent to and received from NeuroCores via embedded processes that can be programmed in C, with examples included.
    • All Lava Processes now list all constructor arguments explicitly with type annotations.
  • *Added high-level API to develop custom ProcessModels that use Loihi 2 features:
    • Loihi NeuroCores can be programmed in Python by allocating neural network resources like Axons, Synapses or Neurons. In particular, Loihi 2 NeuroCore Neurons can be configured by writing highly flexible assembly programs.
    • Loihi embedded processors can be programmed in C. Unlike the prior NxSDK, no knowledge of low-level register details is required anymore. Instead, the C API mirrors the high-level Python API to interact with other processes via channels.
  • Compiler and Runtime support for Loihi 2:
    • General redesign of the Compiler and Runtime architecture to support compilation of Processes that execute across a heterogeneous backend of different compute resources. CPU and Loihi are supported via separate sub compilers.
    • *The Loihi NeuroCore sub compiler automatically distributes neural network resources across multiple cores.
    • *The Runtime supports direct channel-based communication between Processes running on Loihi NeuroCores, embedded CPUs or host CPUs written in Python or C. Of all combinations, only Python<->C and C<->NeuroCore are currently supported.
    • *Added support to access Process Variables on Loihi NeuroCores at runtime via Var.set() and Var.get().
  • New tutorials and improved class and method docstrings explain how new Lava features can be used such as *NeuroCore and *embedded processor programming.
  • An extended suite of unit tests and new *integration tests validate the correctness of the Lava framework.

Bug Fixes and Other Changes

  • Support for virtual ports on multiple incoming connections (Python Processes only) (Issue #223, PR #224)
  • Added conda install instructions (PR #225)
  • Var.set()/get() now works when the RunContinuous run condition is used (Issue #255, PR #256)
  • Successful execution of tutorials now covered by unit tests (Issue #243, PR #244)
  • Fixed PYTHONPATH in tutorial_01 (Issue #45, PR #239)
  • Fixed output of tutorial_07 (Issue #249, PR #253)

Breaking Changes

  • Process constructors for standard library processes now require explicit keyword/value pairs and do not accept arbitrary input arguments via **kwargs anymore. This might break some workloads.
  • use_graded_spike kwarg has been changed to num_message_bits for all the built-in processes.
  • shape kwarg has been removed from Dense process. It is automatically inferred from the weight parameter’s shape.
  • Conv Process has additional arguments weight_exp and num_weight_bits that are relevant for fixed-point implementations.
  • The sign_mode argument in the Dense Process is now an enum rather than an integer.
  • New parameters u and v in the LIF Process enable setting initial values for current and voltage.
  • The bias parameter in the LIF Process has been renamed to bias_mant (see the sketch after this list).
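
To illustrate the new constructor conventions listed above, a minimal sketch with explicit keyword arguments; the parameter values are placeholders rather than recommendations.

```python
import numpy as np

from lava.proc.lif.process import LIF
from lava.proc.dense.process import Dense

# Explicit keyword arguments only; arbitrary **kwargs are no longer accepted.
# 'bias' is now 'bias_mant', and initial current/voltage are set via u and v.
lif = LIF(shape=(3,), u=0, v=0, du=0.1, dv=0.1, bias_mant=2, vth=10)

# 'shape' has been removed from Dense; it is inferred from the weights.
dense = Dense(weights=np.eye(3))

lif.s_out.connect(dense.s_in)
```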

Known Issues

  • Lava does not currently support on-chip learning, Loihi 1, or a variety of connectivity compression features such as convolutional encoding.
  • All Processes in a network must currently be connected via channels. Running unconnected Processes using NcProcessModels in parallel currently gives incorrect results.
  • Only one instance of a Process targeting an embedded processor (using CProcessModel) can currently be created. Creating multiple instances in a network results in an error. As a workaround, the behavior of multiple Processes can be fused into a single CProcessModel.
  • Direct channel connections between Processes using a PyProcessModel and NcProcessModel are not supported.
  • If InputAxons are duplicated across multiple cores and users inject spikes based on the declared port size, the current implementation leads to buffer overflows and memory corruption.
  • Channel communication between PyProcessModels is slow.
  • The Lava Compiler is still inefficient and in need of improvement to performance and memory utilization.
  • Virtual ports are only supported between Processes using PyProcModels, but not between Processes when CProcModels or NcProcModels are involved. In addition, VirtualPorts do not support concatenation yet.
  • Joining and forking of virtual ports is not supported.
  • The Monitor Process currently only supports probing a single Var per Process implemented via a PyProcessModel. It does not currently support probing Vars mapped to NeuroCores.
  • Despite new docstrings, type annotations, and parameter descriptions for most of the public user-facing API, some parts of the code still have limited documentation and are missing type annotations.

Thanks to our Contributors

  • Intel Corporation: All contributing members of the Intel Neuromorphic Computing Lab

Open-source community:

Full Changelog: v0.3.0...v0.4.0

Lava 0.3.0

09 Mar 16:37

Lava 0.3.0 includes bug fixes, updated documentation, improved error handling, refactoring of the Lava Runtime, and support for sigma-delta neuron encoding and decoding.

New Features and Improvements

  • Added sigma-delta neuron encoding and decoding support (PR #180, Issue #179)
  • Implemented the ReadVar and ResetVar IO processes (PR #156, Issue #155)
  • Added Runtime handling of exceptions occurring in ProcessModels; the Runtime now returns exception stack traces (PR #135, Issue #83)
  • Virtual ports for reshaping and transposing (permuting) are now supported. (PR #187, Issue #185, PR #195, Issue #194)
  • A Ternary-LIF neuron model was added to the process library. This new variant supports both positive and negative thresholds for processing signed signals; see the sketch after this list. (PR #151, Issue #150)
  • Refactored the Runtime to reduce the number of channels used for communication (PR #157, Issue #86)
  • Refactored the Runtime to follow a state machine model, refactored ProcessModels to use the command design pattern, and implemented PAUSE and RUN CONTINUOUS (PR #180, Issue #86, Issue #52)
  • Refactored builder to its own package (PR #170, Issue #169)
  • Refactored PyPorts implementation to fix incomplete PyPort hierarchy (PR #131, Issue #84)
  • Added improvements to the MNIST tutorial (PR #147, Issue #146)
  • A standardized template is now in use on new Pull Requests and Issues (PR #140)
  • Support added for editable install (PR #93, Issue #19)
  • Improved runtime documentation (PR #167)
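
A minimal sketch of the Ternary-LIF variant with separate positive and negative thresholds; the import path and the vth_hi/vth_lo argument names are assumptions based on the current process library.

```python
# Import path and threshold argument names are assumptions; check lava.proc.lif
# in your installed version.
from lava.proc.lif.process import TernaryLIF

# Signed-signal processing: positive spikes above the upper threshold,
# negative spikes below the lower threshold.
tlif = TernaryLIF(
    shape=(10,),
    du=0.1,
    dv=0.1,
    vth_hi=10.0,   # upper (positive) threshold
    vth_lo=-10.0,  # lower (negative) threshold
)
```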

Bug Fixes and Other Changes

Breaking Changes

  • No breaking changes in this release

Known Issues

  • No support for Intel Loihi
  • CSP-channel-based Process communication, implemented with Python multiprocessing, needs improvement to reduce the overhead of inter-process communication and approach the native execution speed of similar implementations without CSP channel overhead
  • Virtual ports for concatenation are not supported
  • Joining and forking of virtual ports is not supported
  • A Monitor Process cannot monitor more than one Var/InPort of a Process; as a result, multi-Var probing with a single Monitor Process is not supported
  • Limited API documentation

Thanks to our Contributors

Intel Corporation: All contributing members of the Intel Neuromorphic Computing Lab

Open-source community: (Ismael Balafrej, Matt Einhorn)

Full Changelog: v0.2.0...v0.3.0

Lava 0.2.0

29 Nov 15:18

Lava 0.2.0 includes several improvements to the Lava Runtime. One of them improves the performance of the underlying message passing framework by over 10x on CPU. We also added new floating-point and Loihi fixed-point PyProcessModels for the LIF and Dense Processes as well as a new Conv Process. In addition, Lava now supports remote memory access between Processes via RefPorts, which allows Processes to reconfigure other Processes. Finally, we added or updated several tutorials to cover all these new features.

Features and Improvements

  • Refactored the Runtime and RuntimeService to separate the MessagePassingBackend into its own standalone module. This will allow implementing and comparing the performance of other implementations of channel-based communication and will also enable true multi-node scaling beyond the capabilities of the Python multiprocessing module (PR #29)
  • Enhanced execution performance by removing busy waits in the Runtime and RuntimeService (Issue #36 & PR #87)
  • Enabled compiler and runtime support for RefPorts, which allow remote memory access between Lava Processes so that one Process can reconfigure another Process at runtime. Remote memory access is based on channel-based message passing but can lead to side effects and should therefore be used with caution. See the Remote Memory Access tutorial for how RefPorts can be used (Issue #43 & PR #46).
  • Implemented a first prototype of a Monitor Process. A Monitor provides a user interface to probe Vars and OutPorts of other Processes and records their evolution over time in a time series for post-processing. The current Monitor prototype is limited in that it can only probe a single Var or OutPort per Process (see the sketch after this list); this limitation will be addressed in the next release. (Issue #74 & PR #80)
  • Added floating-point and Loihi fixed-point PyProcessModels for LIF and connection Processes like Dense and Conv. See Issue #40 for more details.
  • Added an in-depth tutorial on connecting processes (PR #105)
  • Added an in-depth tutorial on remote memory access (PR #99)
  • Added an in-depth tutorial on hierarchical Processes and SubProcessModels
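
A minimal sketch of probing a LIF membrane voltage with the Monitor prototype; the parameter values are placeholders, and the argument names (e.g. bias_mant) follow the current Lava API, which postdates v0.2.0.

```python
from lava.proc.lif.process import LIF
from lava.proc.monitor.process import Monitor
from lava.magma.core.run_conditions import RunSteps
from lava.magma.core.run_configs import Loihi1SimCfg

num_steps = 20
lif = LIF(shape=(1,), du=0.1, dv=0.1, bias_mant=2.0, vth=10.0)

# The prototype Monitor can probe a single Var or OutPort per Process.
monitor = Monitor()
monitor.probe(target=lif.v, num_steps=num_steps)

lif.run(condition=RunSteps(num_steps=num_steps),
        run_cfg=Loihi1SimCfg(select_tag="floating_pt"))
voltage_trace = monitor.get_data()  # dict of recorded time series, keyed by process name
lif.stop()
```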

Bug Fixes and Other Changes

  • Fixed a bug in Var get/set to enable getting and setting floating-point values; see the sketch after this list (Issue #44)
  • Fixed install instructions (setting PYTHONPATH) (Issue #45)
  • Fixed code example in documentation (Issue #62)
  • Fixed and added missing license information (Issue #41 & Issue #63)
  • Added unit tests for merging and branching In-/OutPorts (PR #106)
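
A brief sketch of getting and setting a Var at runtime, assuming a LIF process lif that has been run and not yet stopped (for example, the Monitor sketch above before its lif.stop() call); the values are placeholders.

```python
import numpy as np

# Vars can be read and written while the runtime is alive, including
# floating-point values after this fix.
current_v = lif.v.get()               # returns a NumPy array of the membrane voltage
lif.v.set(np.zeros_like(current_v))   # overwrite the membrane voltage
```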

Known Issues

  • No support for Intel Loihi yet.
  • Channel-based Process communication via CSP channels implemented with Python multiprocessing has improved significantly (by >30x). However, more improvement is still needed to reduce the overhead of implementing CSP channels in software for inter-process communication and to get closer to the native execution speed of similar implementations without CSP channel overhead.
  • Errors from remote system processes like PyProcessModels or the PyRuntimeService are currently not thrown to the user system process. This makes debugging of parallel processes hard. We are working on propagating exceptions thrown in remote processes to the user.
  • Virtual ports for reshaping and concatenation are not supported yet.
  • A single Monitor Process cannot monitor more than one Var/InPort of a single Process, i.e., multi-Var probing with a single Monitor Process is not supported yet.
  • API documentation is still limited.
  • Non-blocking execution mode is not yet supported; thus, Runtime.pause() and Runtime.wait() do not work yet.

Full Changelog: v0.1.1...v0.2.0

Lava 0.1.1

12 Nov 20:43

Minor release, mostly typo fixes and license updates.

Notes

  • Source directory has moved from lava to src/lava

New Contributors

  • @ashishrao7 made their first contribution in #14
  • @srrisbud made their first contribution in #22
  • @jlakness-intel made their first contribution in #27

Full Changelog: v0.1.0...v0.1.1