
New way of handling convolutions #1020

Open
clinssen opened this issue Apr 4, 2024 · 1 comment · May be fixed by #1050

clinssen commented Apr 4, 2024

I propose to handle convolutions by means of a transformer. For each convolve() call, one or more new state variables will be generated (as is currently also the case with the __X__ syntax). A new event handler (onReceive block) will be generated that increments the new state variables when a spike arrives. The priority of this handler will be clearly defined to be either the highest or the lowest of all event handlers, so that the rest of the code (the update block and the other event handlers) sees a consistent "just before" or "just after" value.
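The priority ordering can be sketched in plain Python (hypothetical helper names and a made-up registration scheme, not the actual NESTML generator internals; the assumption here is that a lower number runs first):

```python
# Sketch: dispatch event handlers in a fixed priority order, so that the
# generated convolution handler runs consistently before all others and
# user handlers reliably see the "just after" value.

handlers = []  # list of (priority, function) pairs

def on_receive(priority):
    """Register an event handler with an explicit priority
    (assumption: lower number runs first)."""
    def register(fn):
        handlers.append((priority, fn))
        return fn
    return register

state = {"K__conv__spikes": 0.0, "log": []}

@on_receive(0)  # highest priority: the generated convolution handler
def bump_convolution(weight):
    state["K__conv__spikes"] += weight
    state["log"].append("conv")

@on_receive(10)  # user-defined handler runs afterwards
def user_handler(weight):
    # user code sees the already-incremented ("just after") value
    state["log"].append(("user", state["K__conv__spikes"]))

def deliver_spike(weight):
    for _, fn in sorted(handlers, key=lambda h: h[0]):
        fn(weight)

deliver_spike(1.0)
print(state["log"])
```

Swapping the two priorities would give the "just before" convention instead.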

For instance,

state:
  V_m mV = 0 mV

equations:
  kernel K = exp(-t / tau_syn)
  V_m' = -V_m/tau_m + convolve(K, spikes)

would be transformed into

state:
  V_m mV = 0 mV
  K__conv__spikes real = 0

equations:
  V_m' = -V_m/tau_m + K__conv__spikes
  K__conv__spikes' = -K__conv__spikes / tau_syn

@priority(... lowest or highest ...)
onReceive(spikes):
  K__conv__spikes += spikes   # bump by spike weight
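As a numerical sanity check, the transformed system can be integrated by hand in Python (forward Euler; tau_m, tau_syn, the step size and the spike time are made-up illustration values, with a unit spike weight):

```python
# Sketch: forward-Euler integration of the transformed system above.
tau_m, tau_syn = 10.0, 2.0   # membrane / synaptic time constants (ms); illustrative
dt = 0.1                     # integration step (ms)
spike_steps = {50}           # one spike arrives at t = 5 ms, weight 1

V_m = 0.0
K__conv__spikes = 0.0
for step in range(200):
    if step in spike_steps:
        # generated onReceive block: bump the auxiliary state variable
        K__conv__spikes += 1.0
    # update step: integrate the two first-order ODEs
    dV = -V_m / tau_m + K__conv__spikes
    dK = -K__conv__spikes / tau_syn
    V_m += dt * dV
    K__conv__spikes += dt * dK

print(round(V_m, 4), round(K__conv__spikes, 4))
```

V_m rises after the spike and then leaks back toward zero, while the auxiliary variable decays exponentially with tau_syn, as expected for an exponential kernel.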

This would also hold for delta kernels. When using a delta kernel, we do not need a new state variable, but could just increment V_m directly; however, this turns out to be very difficult to implement (what if, for instance, the convolve() call appears inside of a complex mathematical expression?). There would thus be some overhead in the form of an extra, in principle unnecessary state variable, but at least the behaviour would be consistent.

Fixes #993 and #1013.


clinssen commented May 8, 2024

N.B.: this means there will probably be two passes through ODE-toolbox. First, the kernel equations are given to ODE-toolbox (as a function of time, or as a first- or higher-order ODE) and returned as a set of first-order ODEs, which are added into the model. Then, during code generation, the system of equations (including the new ODEs that derive from the kernels) is passed to ODE-toolbox again.
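The first pass can be illustrated with a small sympy sketch (this shows only the underlying mathematics of reducing the exponential kernel from the example above to a first-order ODE, not the actual ODE-toolbox API):

```python
import sympy as sp

# Kernel given as a function of time: K(t) = exp(-t / tau_syn)
t, tau_syn = sp.symbols("t tau_syn", positive=True)
K = sp.exp(-t / tau_syn)

# Differentiate and check that K satisfies a first-order linear ODE:
# K' / K simplifies to a constant, i.e. K' = -K / tau_syn.
dK_dt = sp.diff(K, t)
ratio = sp.simplify(dK_dt / K)

print(ratio)         # the ODE coefficient, -1/tau_syn
print(K.subs(t, 0))  # the matching initial condition, K(0) = 1
```

This pair (first-order ODE plus initial condition) is what gets added into the model as the K__conv__spikes dynamics before the second, whole-system pass.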
