
Nerlnet is a distributed machine learning platform developed for both research and deployment on real edge hardware devices.
Nerlnet is highly configurable and modular: it supports several kinds of communication protocols and provides a simple Python API for experiment management.
Erlang is the core of Nerlnet's distributed system. Nerlnet's network is built on Cowboy and is composed of common network modules: Servers, Routers, and Clients. Clients can host Sources, which generate data in the network, and Workers, which execute neural networks implemented in C++ (such as cppSANN) under IoT hardware constraints.

Nerlnet architecture:

The left side of the figure describes the communication between the Python API server and Nerlnet's MainServer.
The ApiServer loads the distributed configuration of an experiment from a file (a dc_.json file) and sends it to Nerlnet's MainServer, which spreads it across the devices. Each device builds its entities and notifies the ApiServer, through the MainServer, that it is ready to run the experiment phases.
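To illustrate this hand-off, here is a minimal Python sketch of loading a distributed configuration file and posting it to a MainServer over HTTP. This is not Nerlnet's actual API: the host, port, endpoint path, and file name below are illustrative assumptions only.

```python
# Minimal sketch (not Nerlnet's actual API): load a distributed configuration
# file and hand it to a MainServer over HTTP. The host, port, and endpoint
# path are illustrative assumptions.
import json
import urllib.request


def send_distributed_config(dc_path: str, main_server_url: str) -> int:
    """Load a dc_*.json file and POST it to the MainServer."""
    with open(dc_path, "r") as f:
        dc = json.load(f)  # distributed configuration: devices, routers, sources, workers

    req = urllib.request.Request(
        url=main_server_url,  # e.g. "http://127.0.0.1:8080/deploy" (assumed endpoint)
        data=json.dumps(dc).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status  # the MainServer then spreads the configuration to the devices


if __name__ == "__main__":
    status = send_distributed_config("dc_example.json", "http://127.0.0.1:8080/deploy")
    print("MainServer responded with HTTP", status)
```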

On the right side of the figure are the MainServer and the distributed ML cluster, which consists of the entities defined in the JSON configuration file: Routers, Sources, and Workers hosted on edge compute devices.
Red communication lines are dedicated to monitoring and statistics collection from the distributed ML cluster.
Blue arrows are the communication lines of the distributed network.
Distributed network components such as routers, edge compute units, and sensors (or data generators) can be deployed on any hardware, and multiple components can even share one device, depending on model complexity and the compute constraints of the edge device.
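To make the idea of multiple entities per device concrete, the sketch below expresses a hypothetical deployment mapping as a plain Python dictionary. The device names, entity names, and fields are illustrative assumptions and do not reflect Nerlnet's actual dc_.json schema.

```python
# Hypothetical example (not Nerlnet's actual dc_.json schema): one possible way
# to describe which entities run on which edge devices. All names are illustrative.
deployment = {
    "mainServer": {"host": "10.0.0.1"},
    "devices": {
        "edge_node_1": {  # a stronger board co-locating several entities
            "router": {"name": "r1", "port": 8086},
            "source": {"name": "s1", "port": 8087},
            "client": {"name": "c1", "port": 8088, "workers": ["w1", "w2"]},
        },
        "edge_node_2": {  # a resource-constrained device hosting a single worker
            "client": {"name": "c2", "port": 8090, "workers": ["w3"]},
        },
    },
}

# Print which entities each device hosts.
for device, entities in deployment["devices"].items():
    print(device, "->", ", ".join(entities.keys()))
```

The point of the sketch is simply that a capable device may host a router, a source, and several workers at once, while a constrained device may host only a single worker.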