
Go emergent Design

  • In general, emergent works by compiling programs into executables, which you then run like any other executable. This is very different from the C++ version of emergent, which was a single monolithic program attempting to have all functionality built in. Instead, the new model follows the more prevalent approach of writing more specific code to achieve more specific goals, which is more flexible and allows individuals to be more in control of their own destiny.

    • To make your own simulations, start with e.g., the examples/leabra25ra/ra25.go code (or that of a more appropriate example) and copy that to your own repository, and edit accordingly.
  • The emergent repository contains a collection of packages supporting the implementation of biologically based neural networks. The main package is emer, which specifies a minimal abstract interface for a neural network. The etable.Table data structure (DataTable in C++) is in a separate etable repository under the overall emer project umbrella, as are specific algorithms such as leabra, which implement the emer interface.
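
For orientation, a typical simulation imports these packages roughly as follows (import paths as of this writing; they may differ depending on your version):

import (
	"github.com/emer/emergent/emer" // abstract network interfaces
	"github.com/emer/etable/etable" // Table data structure (DataTable in C++)
	"github.com/emer/leabra/leabra" // leabra algorithm implementing the emer interfaces
)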

  • Go uses interfaces to represent abstract collections of functionality (i.e., sets of methods). The emer package provides a set of interfaces for each structural level (e.g., emer.Layer etc) -- any given specific layer must implement all of these methods, and the structural containers (e.g., the list of layers in a network) are lists of these interfaces. An interface is implicitly a pointer to an actual concrete object that implements the interface. Thus, we typically need to convert this interface into the pointer to the actual concrete type, as in:

func (nt *Network) InitActs() {
	for _, ly := range nt.Layers {
		if ly.IsOff() {
			continue
		}
		ly.(*Layer).InitActs() // ly is the emer.Layer interface -- the type assertion (*Layer) recovers the concrete *leabra.Layer
	}
}
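
As a usage note, this single-value assertion panics if a layer's concrete type is not *Layer; when multiple concrete layer types are possible, the standard two-value form of the Go type assertion handles that safely:

if lly, ok := ly.(*Layer); ok {
	lly.InitActs() // only runs when ly's concrete type is *leabra.Layer
}
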
  • The emer interfaces are designed to support generic access to network state, e.g., for the 3D network viewer, but specifically avoid anything algorithmic. Thus, they should allow viewing of any kind of network, including PyTorch backprop nets.
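
To make the "structural, not algorithmic" idea concrete, here is a minimal hypothetical sketch of such an interface -- the real emer.Layer interface has different (and more) methods; only the shape of the idea is intended here:

// Viewable is a hypothetical, purely structural interface: enough to draw a
// layer in a viewer, with no algorithm-specific methods.
type Viewable interface {
	Name() string              // display name of the layer
	Shape() []int              // tensor shape of the layer's units
	UnitValue(idx int) float32 // current value of one unit variable, for display
}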

  • There are 3 main levels of structure: Network, Layer and Prjn (projection). The Network calls methods on its Layers, and Layers iterate over both Neuron data structures (which have only a minimal set of methods) and the Prjns, to implement the relevant computations. The Prjn fully manages everything about a projection of connectivity between two layers, including the full list of Synapse elements in the connection. There is no "ConGroup" or "ConState" level as was used in C++, which greatly simplifies many things. The Layer also has a set of Pool elements, one for each level at which inhibition is computed: there is always one for the Layer as a whole, and then optionally one for each Sub-Pool of units (Pool is the new, simpler term for "Unit Group" from C++ emergent).
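
A highly simplified, hypothetical sketch of this containment hierarchy looks roughly like the following -- names and fields are illustrative, not the actual leabra data structures:

type Neuron struct{ Act float32 }  // unit-level state (minimal methods)
type Synapse struct{ Wt float32 }  // connection-level state
type Prjn struct{ Syns []Synapse } // full list of synapses -- no ConGroup level
type Pool struct{ Inhib float32 }  // inhibition state for one pool

type Layer struct {
	Neurons   []Neuron // all units in the layer
	Pools     []Pool   // Pools[0] covers the whole Layer; optional sub-pools follow
	RecvPrjns []*Prjn  // projections coming into this layer
}

type Network struct{ Layers []*Layer }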

  • Layers have a Shape property, using the etensor.Shape type (see the etable package), which specifies their n-dimensional (tensor) shape. Standard layers are expected to use a 2D Y*X shape (note: dimension order is now outer-to-inner, i.e., RowMajor), and a 4D shape then enables Pools ("unit groups") as hypercolumn-like structures within a layer that can have their own local level of inhibition, and are also used extensively for organizing patterns of connectivity.
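
As a concrete illustration of the outer-to-inner (RowMajor) ordering, the flat offset of a unit can be computed as follows (hypothetical helpers for illustration, not the etensor API):

// Offset2D returns the flat index of unit (y, x) in a 2D [Y, X] layer.
func Offset2D(shp [2]int, y, x int) int {
	return y*shp[1] + x
}

// Offset4D returns the flat index of unit (uy, ux) within pool (py, px) of a
// 4D [PoolY, PoolX, UnY, UnX] layer.
func Offset4D(shp [4]int, py, px, uy, ux int) int {
	return ((py*shp[1]+px)*shp[2]+uy)*shp[3] + ux
}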

GUI

A key design goal is retaining as much of the power of the C++ emergent GUI as possible, while also enabling the kind of self-contained, single-program coding as articulated above. Here are a few principles:

  • The program should be in control of the GUI, presenting and configuring the initial GUI that the user sees, to allow complete customization and extensibility.

  • Using the GUI to create / modify the GUI is a very powerful mode of operating, but it can conflict with the first principle. The main way around this is for GUI elements to generate code that captures the user's modifications. Ideally, this could be intelligently and automatically merged into the program, but short of that, perhaps a simple copy / paste functionality would suffice.

  • It is easy to add one or more gi.TabView elements to contain various different gui / graph / view elements, but again there is a question of to what extent everything is pre-programmed vs. the user being able to create new graphs, etc., on the fly. In the data-analysis mode of operating, being able to do things dynamically and interactively seems pretty key. This is where Jupyter and Python etc. really shine, especially in navigating this code vs. gui dynamic -- you dynamically write code to dynamically create your graphs, etc. Given the compiled nature of Go, that is not possible, and this is where the original vision for Gide comes in.

    • At this point, our basic answer is: run the model in Go (or Python if you want) and then do all your analysis in an interactive tool like Jupyter. Perhaps at some point, Gide can be upgraded to support this functionality, but it is not on the critical path.

3D Views

In C++ emergent, the 3D view infrastructure supports multiple 3D view elements in one view, e.g., a GridView showing the input as an image and a GraphView plotting training progress, all surrounding the NetView. Likewise, we added GraphViews to the virtual environment in the cerebellum model. But this made things very complicated, and in general people struggled with arranging things in 3D.

The alternative is to have each 3D view do just one thing, and arrange multiple such 3D views together using standard 2D layout technology to get multiple views. This may sacrifice various cool-looking displays and some optimization of display space, but it is likely to be more usable overall and more compatible with the overall framework.