
Spiking Neural Networks

Generalized spiking neural network system with various integrate and fire models as well as Hodgkin Huxley models, EEG processing with Fourier transforms, and power spectral density calculations

Biological Neuron Models Broken Down

  • (todo...)
  • (explanation of integrate and fire)
  • (explanation of izhikevich)
  • (explanation of hodgkin huxley)
  • (explanation of ion channels)
  • (explanation of neurotransmission, how the hodgkin huxley system is adapted for izhikevich, explain why receptor kinetics are fixed)

Notes

  • To fit Izhikevich neuron to Hodgkin Huxley model, can either:

    • Fit voltage changes in Izhikevich to voltage changes in Hodgkin Huxley
    • Fit Izhikevich curve to Hodgkin Huxley
      • Can either use a Fourier transform to compare or use mean squared error at each iteration
      • Or, compare the difference between spike times and the amplitude of the spikes (spike time difference being post minus pre, could compare individual spike differences or average spike difference)
        • Write trait for iterate function so coupled neuron firing code can be shared between Hodgkin Huxley and Izhikevich neurons
        • Cell might need a rename to IntegrateAndFireModel
        • Perform this for multiple static inputs, 0 to 100
        • Or perform this with coupled neurons (might need to account for weights)
        • Or both at the same time
      • Fitting bursting Izhikevich to bursting Hodgkin Huxley
        • Need way to detect bursts for Izhikevich fitting, probably something that has a burst tolerance (the maximum distance between spikes that still counts as part of the same burst group rather than the start of the next set of bursts)
        • Then comparing the distance between burst groups and the intervals of the burst groups
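A minimal sketch of the burst grouping and spike comparison ideas above (all names are hypothetical, not existing code in this repository):

```rust
// Hypothetical helpers for the fitting objective described above.

/// Groups spike times (in ms) into bursts; spikes within `tolerance`
/// of the previous spike belong to the same burst group.
fn group_bursts(spike_times: &[f64], tolerance: f64) -> Vec<Vec<f64>> {
    let mut bursts: Vec<Vec<f64>> = Vec::new();
    for &t in spike_times {
        let start_new_burst = bursts
            .last()
            .and_then(|burst| burst.last())
            .map_or(true, |&last_spike| t - last_spike > tolerance);
        if start_new_burst {
            bursts.push(vec![t]);
        } else {
            bursts.last_mut().unwrap().push(t);
        }
    }
    bursts
}

/// Mean squared difference between post and pre spike times (post minus pre),
/// truncated to the shorter spike train.
fn spike_time_mse(pre: &[f64], post: &[f64]) -> f64 {
    let n = pre.len().min(post.len());
    if n == 0 {
        return f64::INFINITY;
    }
    pre.iter()
        .zip(post.iter())
        .take(n)
        .map(|(pre_t, post_t)| (post_t - pre_t).powi(2))
        .sum::<f64>()
        / n as f64
}
```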
  • Can also implement version that either adds neurotransmitter current or adds the current to stimulus

  • Subtract 70 mV (or n mV) from Hodgkin Huxley in fitting for peak amplitudes

  • Eventually remove old neurotransmitter system and replace it with new one

  • Eventually remove existing genetic algorithm fit for matching an EEG signal and replace it with an R-STDP one, or at least a genetic algorithm that changes weights rather than the input equation

  • Add neurotransmitter output to each presynaptic neuron that calculates concentration with its own membrane potential, then have postsynaptic neurons sum the concentration * weight to calculate their neurotransmitters

  • Separate receptor kinetics struct, dependent on $t_{total}$

    • Receptor kinetics input (of weighted neurotransmitter concentration) should only be calculated if receptor kinetics is not static
      • Receptor kinetics handling should have an inputtable value to set r at
        • If receptor kinetics are static do not calculate neurotransmitter input
    • Neurotransmitter and receptor kinetics structs should be stored in different hashmaps that relate an enum (specifying the type: basic, AMPA, NMDA, GABAa, GABAb) to the struct (the struct would then have the appropriate parameters associated with that enum), as sketched below
      • HashMap<NeurotransmitterType, Neurotransmitter>, HashMap<NeurotransmitterType, ReceptorKinetics>
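A rough sketch of that storage scheme (struct fields are placeholders, not the final parameter set):

```rust
use std::collections::HashMap;

#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
enum NeurotransmitterType {
    Basic,
    AMPA,
    NMDA,
    GABAa,
    GABAb,
}

// placeholder parameter sets; each variant would carry its own values
struct Neurotransmitter {
    t: f64,              // current concentration
    t_max: f64,          // maximum concentration
    clearance_rate: f64,
}

struct ReceptorKinetics {
    r: f64,              // receptor gating variable
    forward_rate: f64,
    backward_rate: f64,
}

struct SynapticNeurotransmitters {
    neurotransmitters: HashMap<NeurotransmitterType, Neurotransmitter>,
    receptors: HashMap<NeurotransmitterType, ReceptorKinetics>,
}
```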
  • Neurotransmitter current should be calculated after $dv$ and $dw$ are calculated and applied when those respective changes are applied, iterate_and_spike function should be modified to take in an additional change in voltage which can be applied with the $dv$ calculation so it can be added before the spike is handled

    • ie add argument to iterate_and_spike which is an Option<f64> called additional_dv that adds the $dv$ change calculated by the neurotransmitter current after neurotransmitter currents are set and returned
      • Get presynaptic neurotransmitter concentration
        • Multiply by receptor value
          • Generating noise factor from Bayesian parameters (outside of the neuron) and then applying that noise to input current and input neurotransmitter (for the sake of handling Hodgkin Huxley and integrate and fire)
          • Noise should be applied to total input current and total input neurotransmitter
      • Calculate $dv$ change from neurotransmitter current
      • Add it to the voltage in the iterate_and_spike function
    • Or integrate the neurotransmitter calculation into the iterate_and_spike function
    • Or into a separate iterate_and_spike_with_neurotransmission function
      • pub fn iterate_and_spike_with_neurotransmission(&mut self, i: f64, t_total: Option<HashMap<NeurotransmitterType, f64>>) -> bool
      • If t_total is Some then update receptor kinetics, if t_total is None then do not change receptor kinetics
        • If t_total is Some but a neurotransmitter type in t_total does not match the receptors on the neuron, assume that neurotransmitter concentration is 0
      • In lattice, return (f64, Option<HashMap<NeurotransmitterType, f64>>), if receptor_kinetics is false, return (f64, None)
    • Old update neurotransmitter function should be removed in favor of this
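A sketch of what that trait and signature might look like, assuming the NeurotransmitterType enum from the earlier sketch (the trait name follows the iterate_and_spike naming above but is otherwise hypothetical):

```rust
use std::collections::HashMap;

// assumes a NeurotransmitterType enum like the one sketched earlier
trait IterateAndSpike {
    /// Iterates one timestep given an input current and an optional map of
    /// weighted neurotransmitter concentrations; returns whether the neuron
    /// is spiking. If t_total is None, receptor kinetics are left unchanged;
    /// if a receptor's type is missing from t_total, its concentration is
    /// treated as 0.
    fn iterate_and_spike_with_neurotransmission(
        &mut self,
        i: f64,
        t_total: Option<HashMap<NeurotransmitterType, f64>>,
    ) -> bool;
}
```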
  • When the neurotransmitter refactor is done, refactor fitting so neurons have the option to only consider the postsynaptic neuron when implementing neurotransmission schemes

  • Another refactor for neurotransmission, neurotransmitter and receptors should be a trait that structs can implement

    • Approximation of neurotransmission for integrate and fire cells
      • When presynaptic neuron spikes, receptor value is set to $r_{max}$ and then slowly decays over time (the r change is basically just the t change here, where t, the neurotransmitter, decays over time)
        • Spike detection could be done in the same manner as Hodgkin Huxley spike detection, where the neurotransmitter struct keeps track of the last voltage and whether it was increasing or not
      • Modifier from receptor type (GABAb modifier and NMDA modifier) should still be applied to currents
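A minimal sketch of that approximation (field names hypothetical; the spike detection scheme is the peak detection described above):

```rust
// Approximate receptor for integrate and fire cells: r jumps to r_max on a
// detected presynaptic spike, then decays toward 0 over time.
struct ApproximateReceptor {
    r: f64,
    r_max: f64,
    decay_rate: f64,
    last_voltage: f64,
    was_increasing: bool,
}

impl ApproximateReceptor {
    fn update(&mut self, presynaptic_voltage: f64, dt: f64) {
        let is_increasing = presynaptic_voltage > self.last_voltage;
        if self.was_increasing && !is_increasing {
            // voltage peak detected, same scheme as Hodgkin Huxley spike detection
            self.r = self.r_max;
        } else {
            // exponential decay of the receptor (and neurotransmitter) value
            self.r -= self.decay_rate * self.r * dt;
        }
        self.last_voltage = presynaptic_voltage;
        self.was_increasing = is_increasing;
    }
}
```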
  • Redo obsidian notes with new code

  • When neurotransmitter refactor done, move to Hopfield network or lixirnet or Poisson/spike train or FitzHugh-Nagumo

  • Add $\tau_m$ and $C_m$ to fitting parameters

  • Add option to subtract 70 mV to set resting potential for Hodgkin Huxley model in fitting

  • Obsidian notes on STDP equations

  • Update code in obsidian when refactor is done, maybe update results

  • Change BufWriter capacity from 8 KB to 4 MB or 8 MB and see if it's faster (use the with_capacity function, as sketched below)
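For reference, a minimal sketch of that change (file name hypothetical; with_capacity is the standard library constructor):

```rust
use std::fs::File;
use std::io::{BufWriter, Write};

fn main() -> std::io::Result<()> {
    let file = File::create("lattice_history.bin")?;
    // 4 MB buffer instead of BufWriter's 8 KB default; benchmark against 8 MB
    let mut writer = BufWriter::with_capacity(4 * 1024 * 1024, file);
    writer.write_all(b"voltage history would go here")?;
    writer.flush()?;
    Ok(())
}
```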

  • Weighted neuron input should be the only input function

  • Have system to generate new neurotransmitter and receptors from TOML

  • Split main.rs functions into a few different files for readability

  • Printing should have colors

  • Eventually split up integrate and fire types into separate structs, use macros to share code between structs

  • FitzHugh–Nagumo model (FHN) (with bursting)

  • Spike trains and multiple lattices

    • Spike train trait should return a voltage and iterate with no given input
      • Should implement potentiation trait
    • Poisson neuron could be configured to spike with a Poisson distribution, i.e. it has some random element but over time conforms to some frequency (see the sketch after this list)
      • Spike train neurotransmitter should probably just decrease until 0 over time
      • Neurotransmitter approximation with trait refactor necessary
    • A preset spike train struct should have a hashmap that contains when the neuron will spike and to what magnitude
      • Preset spike train should have a hashmap storing when to spike as well as a neurotransmitter hashmap that stores the concentration values over time; if the next spike is reached while the neurotransmitter is on, return to the beginning of the neurotransmitter hashmap
      • Spike train will have internal counter that determines when it spikes that resets when it reaches the end of its period
    • Spike train trait should eventually also have the option to return a neurotransmitter concentration to be used
    • Spike train with coupled input
      • Redo Hodgkin Huxley results with spike train coupling and fitting
    • Evenly divided preset spike train
    • Spike train input should be able to be fed into lattices; a multiple lattices function should be able to simulate multiple lattices that are connected with one another with different parameters (different plasticity settings, different neuron types, etc)
      • Input from other lattice would likely need its own graph relating the lattices as well as a new get inputs function
      • Input hashmap (in lattices) could be modified to have inputs added from other lattices (or spike trains); input may have to be scaled down or modified such that multiple position input keys exist as vectors, inputs going to the same position could then be summed, and the Bayesian factor could be applied after summation
      • Lattice calculation might want to randomly select certain neurons to be read and updated first
      • Bayesian factor could be used in place of this but the Bayesian factor should be reduced to +/- 10-5%
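A minimal sketch of the Poisson neuron idea from the list above, assuming the rand crate as a dependency; each timestep it spikes with probability firing_rate * dt, so the long-run rate conforms to a target frequency while individual spikes stay random:

```rust
use rand::Rng; // assumes the rand crate

struct PoissonNeuron {
    firing_rate: f64, // expected spikes per unit time
    v_spike: f64,     // voltage output when spiking
    v_rest: f64,      // voltage output otherwise
}

impl PoissonNeuron {
    /// Iterates with no given input, per the spike train trait note above;
    /// returns the output voltage and whether the neuron spiked.
    fn iterate(&mut self, dt: f64) -> (f64, bool) {
        let is_spiking = rand::thread_rng().gen::<f64>() < self.firing_rate * dt;
        let voltage = if is_spiking { self.v_spike } else { self.v_rest };
        (voltage, is_spiking)
    }
}
```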
  • Refactor fitting to use spike trains with neurotransmission

  • $\tau_m$ and $C_m$ fitting

  • Spike train should evenly divide timing of spikes throughout for consistency's sake, less randomness should ensure more accuracy

  • Refactor fitting to subtract 70 mV (or n mV) when generating Hodgkin Huxley summary

  • Maybe refactor so fitting only takes into account the presynaptic neuron

  • Use Rayon to thread lattice calculations (remove storing dv and is_spiking in hashmap and place it in the struct)

    • Inputs should be calculated in parallel
      • There should be a GraphFunctionalitySynced trait for the lookup weight function, get incoming connections, get outgoing connections, and get every node function; the rest can be under the regular GraphFunctionality trait, and the weighted input function should have a &dyn GraphFunctionalitySynced argument
        • Could replace &dyn GraphFunctionality with a generic and a trait
        • Maybe graph should be in an Arc<Mutex<T>>, unlocked to edit weights, would need to be able to pass locked version to get weights without unlocking Mutex
    • Cells could be modified with par_iter_mut or par_chunks_mut; this part would need to be benchmarked, but weights could be modified outside of the parallel section to see if the parallel implementation is still faster since a majority of the calculation is threaded
      • Update cells by looping over grid
      • Or could parallelize editing of weights by using par_iter_mut to calculate the weights and then applying them
    • Parallel functionality should also be benchmarked
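A rough sketch of the Rayon idea (Cell here is a stand-in for the real neuron struct, and inputs are assumed precomputed so the update itself is embarrassingly parallel):

```rust
use rayon::prelude::*; // assumes the rayon crate

struct Cell {
    voltage: f64,
    is_spiking: bool, // stored on the struct rather than in a hashmap
}

impl Cell {
    fn iterate(&mut self, input: f64) {
        // placeholder for the real integrate and fire update
        self.voltage += input;
        self.is_spiking = self.voltage > 30.0;
    }
}

/// Updates every cell in parallel given precomputed per-neuron inputs;
/// weight changes would be applied sequentially afterward.
fn parallel_iterate(cells: &mut [Cell], inputs: &[f64]) {
    cells
        .par_iter_mut()
        .zip(inputs.par_iter())
        .for_each(|(cell, &input)| cell.iterate(input));
}
```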
  • EEG testing

    • Determine frequency band of EEG values over 30 seconds (could calculate frequencies separately in Python with NumPy for now)
    • Leave room for convergence (about 500 steps with $dt=0.1$)
      • Should expect beta or gamma frequencies above 10 Hz and below 50 Hz
  • Lixirnet should be reworked after neurotransmission refactor, should just pull from backend

    • Update by copying over backend
      • Should have methods that iterate one timestep for each kind of simulation
      • That way when exposed to Python there can be tqdm stuff
    • Use macros to generate getter and setter methods given the argument name
      • For integrate and fire cell and Hodgkin Huxley model
      • Enable multiple-pymethods so the macro can be written
      • Reference for macro
    • For now Lixirnet can work with lattices by converting adjacency matrices in Numpy to Rust
    • Should have an option to convert the matrix to an adjacency list later, or implement a direct conversion from dictionary to adjacency list
    • Lixirnet should expose EEG processing tools
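A sketch of the getter/setter macro idea in plain macro_rules! (with PyO3 the generated impl block would be annotated with #[pymethods], which is why the multiple-pymethods feature is needed; struct and field names are illustrative):

```rust
// generates a getter and setter for a single f64 field
macro_rules! impl_getter_setter {
    ($struct_name:ident, $field:ident, $getter:ident, $setter:ident) => {
        impl $struct_name {
            pub fn $getter(&self) -> f64 {
                self.$field
            }

            pub fn $setter(&mut self, value: f64) {
                self.$field = value;
            }
        }
    };
}

pub struct IzhikevichNeuron {
    v: f64, // membrane potential
}

impl_getter_setter!(IzhikevichNeuron, v, get_v, set_v);
```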
  • Should also be adapted for a cargo package

  • Option to subtract 70 mV (or n mV) from Hodgkin Huxley model to set resting potential at n

  • Input from cell grid functions should be refactored to work with Hodgkin Huxley cells via a trait and condensed into one function where weighting is optional

  • Hopfield network

    • Hopfield network pseudocode
    • Hopfield network tutorial
    • Hopfield network explained
    • Hopfield network needs its own graph representation, should extend the graph trait; some of the graph trait could be split up so the graph used in lattice simulation has functionality for STDP weights while Hopfield static weights don't change, and the graph trait could also be refactored so min, max, mean, and std can be passed in rather than STDP parameters
    • Hopfield spiking neural network prototype
      • Weights may not need to change over time
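A minimal sketch of static Hopfield weight construction via the Hebbian outer-product rule, consistent with the note that the weights may not need to change over time (patterns assumed to be -1/1 encoded):

```rust
/// Builds a static Hopfield weight matrix: w[i][j] is the sum over stored
/// patterns of pattern[i] * pattern[j], with a zeroed diagonal.
fn hopfield_weights(patterns: &[Vec<f64>]) -> Vec<Vec<f64>> {
    let n = patterns[0].len();
    let mut weights = vec![vec![0.0; n]; n];
    for pattern in patterns {
        for i in 0..n {
            for j in 0..n {
                if i != j {
                    weights[i][j] += pattern[i] * pattern[j];
                }
            }
        }
    }
    weights
}
```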
  • Simple recurrent coupled neurons (a -> b -> c -> a), test how excitatory/inhibitory input at a single neuron affects the system

    • Try to create a system where input eventually fades away after input is no longer being applied (fading memory)
    • Can decay gap conductance over time after a spike until a small enough value is reached or another spike occurs
    • Could use STDP to see if that slowly eliminates input over time
  • Cue model

    • Cue input is fed into working memory neurons
      • Cue is -1 or 1
    • Working memory neurons loop back into themselves with some Bayesian noise
    • Cue is removed and working memory output can be decoded
      • Decoded by taking weighted sum of working memory neurons
      • If below 0, then perceived cue is -1, if above 0, perceived cue is 1
      • Or perceived cue could be above or below a given baseline; the cue itself can be a fast (or excitatory) spike train or a slow (or potentially inhibitory) spike train, with 0 being a baseline spike train speed (a spike train just being a series of spikes)
        • Poisson neuron should be used to generate spike train
        • Might be more practical to use an excitatory and inhibitory input and check deviation from baseline over time
    • As the firing rate of the neurons increases over time, the signal should become more unstable and start to not represent the same signal
    • To also model forgetting, increasing amounts of noise can be added to working memory model over time
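A minimal sketch of the decoding step described above (weights and baseline hypothetical):

```rust
/// Decodes the perceived cue from working memory activity: a weighted sum
/// of the readout values, thresholded against a baseline (0 for a -1/1 cue).
fn decode_cue(readouts: &[f64], weights: &[f64], baseline: f64) -> i32 {
    let weighted_sum: f64 = readouts
        .iter()
        .zip(weights.iter())
        .map(|(readout, weight)| readout * weight)
        .sum();
    if weighted_sum > baseline { 1 } else { -1 }
}
```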
  • When done with cue models, move to liquid state machines

    • Creating a stable liquid
      • Should have some consistent methodology to generate a stable liquid (eigenvalue of each weight vector being within the unit circle, or rate of decay being within the unit circle; try with an 80-20 excitatory to inhibitory neuron ratio and then a more equal one); stability should be measured by whether neurons return to a resting firing rate, and for simpler classifiers output may not need to feed back into the liquid while reinforcement learning tasks may (see the stability sketch after this list)
    • Recurrent connections in reservoir computing act as working memory, storing information that may slowly degrade over time; the target is slowing the degradation in order to improve memory recall
    • Decoding unit acts as readout, decoding unit likely would need some training in the form of R-STDP
    • Can check accuracy of liquid state machine or stability of answer over time, similar to simple recurrent model
    • Can also check for time until convergence as a measure of learning
    • Can also check the stability of liquid as metric
    • Could also check EEG to see if processing is similar to focused brain activity
    • Model of memory using reservoir compute and R-STDP could model effects of dopamine by modulating relevant R-STDP parameters and modulating the neuron parameters as well, could also model effects of drugs by training first and then messing with modulated values
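A sketch of one possible form of the stability check from the list above: estimating the spectral radius of the liquid's weight matrix with power iteration and requiring it to sit inside the unit circle (iteration count and convergence handling simplified):

```rust
/// Estimates the dominant eigenvalue magnitude of a weight matrix by power
/// iteration; a liquid would be considered stable if this is below 1.0.
fn spectral_radius_estimate(weights: &[Vec<f64>], iterations: usize) -> f64 {
    let n = weights.len();
    let mut v = vec![1.0 / (n as f64).sqrt(); n];
    let mut estimate = 0.0;

    for _ in 0..iterations {
        // matrix-vector product: weights * v
        let mut product: Vec<f64> = weights
            .iter()
            .map(|row| row.iter().zip(v.iter()).map(|(w, x)| w * x).sum())
            .collect();

        let norm = product.iter().map(|x| x * x).sum::<f64>().sqrt();
        if norm == 0.0 {
            return 0.0;
        }

        product.iter_mut().for_each(|x| *x /= norm);
        estimate = norm;
        v = product;
    }

    estimate
}
```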
  • When done modeling memory, attempt general classification tasks with liquid state machines

    • Implementations of liquid state machines and reservoir computing (Matlab, Brian2)
  • Liquid state machine + stable attractor

    • Stable attractor connected to reservoir with feedback (going into the attractor could be trainable whereas back into the liquid is not)
    • Attractor may need to have a stable state that is just all low states
    • Liquid state machine + discrete attractor; frequency from input neuron is measured over time, a high frequency means an active state while a low frequency means inactive, the discrete neuron could correspond to a Poisson neuron or something similar
    • Testing classifiers and regression models with liquid and attractor model, maybe try multiple attractors
    • Could try this with reinforcement learning models
  • Modeling hallucinations

    • Testing of effect of noise in liquid state machine or Hopfield network and convergence, testing of pruning neuronal connections on convergence
    • Hallucinations are mischaracterization of sensory stimuli (generally the absence of stimuli being mischaracterized as present stimuli) (may need a visual or auditory/speech model for the LSM) while memory issues are misrecall over time (temporal mischaracterization)
      • (could train model on whether a word is detected or not, test what it detects in the absence of words and then induce hallucination conditions)
    • Noise could either be direct Bayesian modulation of input or input noise from surrounding Poisson neurons (latter may be more accurate)
    • Testing how different converging states are from one another, seeing how different signal to noise ratio is
    • Small world architecture in liquid state machine (various interconnected hubs, ie different connected liquids or stable attractors) effect of cutting off hubs and increasing path size between hubs
    • Liquid state machine could be used to test this as well as spiking Hopfield networks, ideally a liquid state machine with explicit working memory in the form of some connected stable attractor
      • Liquid state machine could either be used to classify a given stimulus (visual or auditory)
      • Hallucination could be considered when absence of stimuli generates readouts that say there exists auditory stimuli
      • Could also be considered a general misclassification
      • Could also have a liquid state machine generate a grid pattern on readout given an input similar to a Hopfield network
  • Gap junction equation and various models for different currents

  • Phase plane analysis of adaptive $w$ and voltage $v$ values

  • Look into delta rule for learning

  • Implementation details of an Izhikevich R-STDP synapse

Notes on what to modulate

  • Synaptic conductance of ion channels (potentially rate/gating constants)
    • Na+, K+
    • Leak current
    • Ca++ (L-current HVA, T-current)
    • M-current
    • Rectifying channels
  • Synaptic conductance of ligand gated channels (potentially maximal neurotransmitter concentration) (and forward and backward rate constants)
    • AMPA, GABA(a/b), NMDA
  • Metabotropic neurotransmitters (concentration)
    • Dopamine
    • Serotonin
    • Nitric oxide
    • Acetylcholine
    • Glutamate
    • Adrenaline
  • Astrocytes
  • Weights
    • Weights between certain neurons or specific projections (pyramidal or chandelier for example)

(simulation total time should be around 10 min)

Todo

Backend

  • Integrate and fire models
    • Basic
    • Adaptive
    • Adaptive Exponential
    • Izhikevich
    • Izhikevich Leaky Hybrid
  • Static input test
  • STDP test
    • Single coupled neurons
    • Multiple coupled neurons
    • Single coupled R-STDP
      • Note: the input spike train is fed into the input layers; depending on how strongly the output neurons are firing (and which neurons are spiking), reward is applied; this input is applied for a specific duration, it is not instantaneous
    • Multiple coupled R-STDP
    • Testing with weights summing to 1
  • Lattice
    • Graph representation of lattice
      • Adjacency list
      • Adjacency matrix
    • Generating GIFs from lattice
      • Naive approach
      • Optimized GIF generation
    • Different potentiation types
      • Inhibitory
      • Excitatory
    • Recording lattice over time
      • Textual
        • Averaged
        • Grid
        • EEG
      • Binary
        • Averaged
        • Grid
    • Lattice testing without STDP
    • Lattice testing with STDP
    • Lattice with EEG evaluation
      • Analysis with Fourier transforms
        • Calculation of spectral analysis
        • Calculation of earth mover's distance
      • Option to rewrite Fourier analysis to file
    • Function that can simulate more than one lattice with different parameters but connected by neurons (for instance one lattice can have plasticity while the other does not)
  • Hodgkin Huxley
    • Basic gating
    • Neurotransmission
      • Systemized method for adding ionotropic neurotransmitters
      • AMPA
      • NMDA
      • GABA
        • GABAa
        • GABAb
          • GABAb primary
          • GABAb secondary
    • Additional gating
      • Systemized method for adding gates
      • L-Type Calcium
      • T-Type Calcium
      • M-current
    • More complex neurotransmission equations (with delay time constants and such)
    • Multicompartmental models
    • Hodgkin Huxley iterate and spike functionality
      • Should implement a trait shared with integrate and fire neuron that iterates the state of the neuron and returns whether it is spiking
      • Should be implemented for coupling test, STDP, and lattice simulation
      • Hodgkin Huxley lattice function should share as much code as possible with integrate and fire function
  • FitzHugh-Nagumo model
  • TOML parsing
    • Integrate and fire parsing
      • Static input
      • STDP testing
      • Lattice
    • Hodgkin Huxley
      • Static input
      • STDP testing
      • Built in neurotransmitters
      • New neurotransmitter from TOML
      • Built in additional gates
      • New gates from TOML
  • Izhikevich neurotransmission
    • Fitting Izhikevich neuron to Hodgkin Huxley model with genetic algorithm
      • Objective function
        • Finding spikes
        • Comparing spikes
          • Amplitude of spikes, spike time differences, and number of spikes
          • Scaling data properly
        • Comparing static and coupled inputs
        • Comparing spikes under various input conditions
      • Spike time coincidence objective function
      • Potential objective function refactor with spike amplitude being height subtracted by minimum
      • Fitting with CUDA backend (and transferring this to Python interface)
    • Using existing neurotransmitter framework with Izhikevich as either input stimulus or additional current added on
      • Remove existing neurotransmission system
      • Integrate and fire models with ligand gated channels interacting with neurotransmitters
        • Moving neurotransmitter concentration into separate struct and moving receptor kinetics variables to separate struct (with parameter $T_{max}$)
          • Presynaptic neuron calculates concentration and saves it
          • Post synaptic neuron applies weight to the concentration and sums it, then applies receptor kinetics
            • Neurotransmission current should be calculated with the iterate_and_spike function after $dv$ is calculated and before the spike is handled; iterate_and_spike should take an input neurotransmitter concentration as an Option, if Some do neurotransmitter current processing, if None do not perform the neurotransmitter current operation
              • Update this within fitting Izhikevich neuron too
          • Integrate this into Hodgkin Huxley models too
        • Option to record each neurotransmitter current over time in lattice (g * r)
        • Recording g, r, and T over time
          • Coupling tests
          • STDP tests
      • Approximation of neurotransmitter in synapse over time (as well as receptor occupancy over time)
        • $\frac{dT}{dt} = -\alpha T + T_{max} H(V_p - V_{th})$ where $T$ is neurotransmitter concentration, $T_{max}$ is maximum neurotransmitter concentration, $\alpha$ is clearance rate, $H(x)$ is the Heaviside function, $V_p$ is the average of the presynaptic voltages, and $V_{th}$ is the spiking threshold (the sign on $\alpha T$ is negative so that $T$ decays at the clearance rate between spikes; sketched below)
        • Receptor occupancy could be assumed to be at maximum
        • Could be implemented with a trait neurotransmitter that has apply neurotransmitter change to apply t and r changes and get r to retrieve modifier
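A sketch of that approximation as a simple Euler step (names hypothetical):

```rust
fn heaviside(x: f64) -> f64 {
    if x > 0.0 { 1.0 } else { 0.0 }
}

struct NeurotransmitterConcentration {
    t: f64,              // current concentration, T
    t_max: f64,          // maximum concentration, T_max
    clearance_rate: f64, // alpha
    v_th: f64,           // spiking threshold, V_th
}

impl NeurotransmitterConcentration {
    /// Euler step of dT/dt = -alpha * T + T_max * H(V_p - V_th)
    fn apply_t_change(&mut self, presynaptic_voltage: f64, dt: f64) {
        let dt_dt = -self.clearance_rate * self.t
            + self.t_max * heaviside(presynaptic_voltage - self.v_th);
        self.t += dt_dt * dt;
    }
}
```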
  • Poisson neuron
    • Coupling
      • Potentiation type
  • Spike train struct
    • Given a set of times, the neuron will spike and not spike at given times
      • Vector of times to spike at + delay before first spike + delay after last spike
    • Internal clock starts at 0, and increments every iteration until the end time is reached where it will return to 0
      • Potentiation type
  • Astrocytes model
  • Simulating modulation of other neurotransmitters on lattice
  • Simulation of working memory (refer to guanfacine working memory model)
    • Simple recurrent coupled neurons (a -> b -> c -> a), test how excitatory/inhibitory input at a single neuron affects the system
    • Discrete state neuron
    • Discrete learning rules
    • Hopfield network
    • Simple recurrent memory
    • Liquid state machine
      • Should have a cue and retrieval system (for now supervised, could look into unsupervised methods later)
        • Present cue for duration, remove cue, see how long retrieval signal lasts
          • Matching task, present cue, remove cue, present a new cue and determine whether the new cue is the same or different (DMS task)
        • Could add noise over time similar to simple recurrent memory to modulate forgetting if signal stability stays constant
        • Measure signal stability after cue is removed (see guanfacine paper)
        • Measure ability to complete task, time taken to converge, and potentially liquid stability
      • Could model cognition with something similar to a traveling salesman problem
    • Liquid state machine with astrocytes
    • Neuro-astrocyte memory model
  • Simulation of psychiatric illness
  • Simulation of virtual medications
  • R-STDP based classifier
    • Reward may need to be applied after a grace period so the model can converge on an answer first
    • Simple encoding of input
    • Modifying the bursting parameters to encode more information in input
      • Potentially having weights directly calculated/modified from bursting parameters
    • Liquid state machine with R-STDP
      • Could look into weighted graphs input, each node could be a place on a prism geometry and each weight could be a node in between each node, could also be rotated in the geometry for data augmentation purposes
      • Or could input as an adjacency matrix (SMILES enumeration compatible)
    • Liquid state machine with astrocytes and R-STDP
    • Combining input with neurotransmission, encoding certain inputs with more or less neurotransmitter (ionotropic or otherwise)
  • R-STDP based regression
    • Number of spikes in an interval or distance between spikes could act as regression value
      • Additionally multiple outputs from different neurons in the output layer could be summed for a single regression value (for example one neuron could represent 0-9, another could be 0, 10, 20, ..., 90, and summed together for the full number)
    • Liquid state machine fitting differential equation or time series
      • Potentially physics prediction, parameters of physics simulation could be inputs along with current position, next position could be target to predict
  • Liquid state machine solving of more general cognition problem
    • Traveling salesman
    • Maze solve/navigation to reward
    • Could model general cognition with similar test case and the effect of different neurotransmitters on the efficacy of the solve

Lixirnet

  • Integrate and fire models
    • Basic
    • Adaptive
    • Adaptive Exponential
    • Izhikevich
    • Izhikevich Leaky Hybrid
  • Static input test
  • STDP test
    • Regular STDP
    • R-STDP
  • Lattice
    • Graphs input
      • Adjacency list
      • Adjacency matrix
  • Hodgkin Huxley
    • Basic gating
    • Neurotransmission
    • Additional gating

CUDA

  • Parallel integrate and fire
    • Parallel voltage update
    • Parallel adaptive update
    • Parallel input calculation
  • Parallel Hodgkin Huxley
  • Interfacing from Python

Docs

(see other .md files)

Results

Lattice

Hodgkin Huxley

Neurotransmission

AMPA

GABAa

GABAb

NMDA

Additional Gates

Hopfield Reconstruction

Hopfield Reconstruction of Input Patterns

Sources

  • (todo)
  • Izhikevich
  • Destexhe
  • Antipsychotics simulation paper
  • Dopamine model with Hodgkin Huxley
  • Biological Signal Processing, Richard B. Wells
