Describe the bug
Using either an ensemble or a neurons object as the post object of a connection changes what is included in the connection's weights. If the post object is an ensemble, the connection weights use scaled_encoders, which in turn include the neuron gains. However, if the post object is a neurons object, the connection weights do not include the neuron gains, since those are computed separately in the build process.
While this difference can be compensated for by factoring in the neuron gains when making an x-to-neuron connection (a sketch of this compensation follows the reproduction script below), this bug does cause a difference in behaviour when using learning rules that are configured to modify the connection weights (as opposed to the encoders or decoders).
To reproduce
The difference in the composition of the connection weights is illustrated with the code below. In this code, a seeded model is created, and the full connection weight matrix is generated using the LstsqL2 solver. These weights are saved and then loaded into an almost identical model, where the only difference is that the connection used is a neuron-to-neuron connection. The weights from the first model are passed to the second model using the transform parameter on the connection.
Crucially, to demonstrate that the neuron gains are the differing component in the two weight matrices, the post ensemble is configured to use a fixed gain for all of its neurons.
import nengo
import numpy as np
import matplotlib.pyplot as plt

# Use fixed gain for all neurons in post ensemble
gain = 0.5

# Create the first seeded Nengo model
with nengo.Network(seed=0) as model:
    input_node = nengo.Node(lambda t: np.sin(2 * np.pi * t))
    pre = nengo.Ensemble(100, 1, neuron_type=nengo.RectifiedLinear(), label="pre1")
    post = nengo.Ensemble(
        100, 1, gain=nengo.dists.Choice([gain]), bias=nengo.dists.Uniform(0, 100),
        neuron_type=nengo.RectifiedLinear(), label="post1",
        encoders=nengo.dists.Choice([[1]])
    )
    nengo.Connection(input_node, pre)
    # Create the connection using ensemble objects, but solve for the full weight matrix
    conn = nengo.Connection(pre, post, solver=nengo.solvers.LstsqL2(weights=True))
    p_out = nengo.Probe(post)

with nengo.Simulator(model) as sim:
    # Save the connection weights
    weights1 = sim.model.params[conn].weights
    sim.run(1)

# Create the second seeded Nengo model. Everything is identical save the
# connection between pre and post
with nengo.Network(seed=0) as model2:
    input_node2 = nengo.Node(lambda t: np.sin(2 * np.pi * t))
    pre2 = nengo.Ensemble(100, 1, neuron_type=nengo.RectifiedLinear(), label="pre2")
    post2 = nengo.Ensemble(
        100, 1, gain=nengo.dists.Choice([gain]), bias=nengo.dists.Uniform(0, 100),
        neuron_type=nengo.RectifiedLinear(), label="post2",
        encoders=nengo.dists.Choice([[1]])
    )
    nengo.Connection(input_node2, pre2)
    # Create the connection using a neuron-to-neuron connection
    conn2 = nengo.Connection(pre2.neurons, post2.neurons, transform=weights1)
    p_out2 = nengo.Probe(post2)

with nengo.Simulator(model2) as sim2:
    sim2.run(1)

plt.figure()
plt.plot(sim.trange(), sim.data[p_out], label="solver")
plt.plot(sim2.trange(), sim2.data[p_out2], label="n2n transform")
plt.legend()
plt.show()
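The compensation mentioned earlier can be sketched as follows (a workaround, not a fix; compensated_weights is a name introduced here for illustration). Since weights1 already has the post gains baked in, dividing them out before passing the matrix to the neuron-to-neuron connection avoids applying the gains twice:

# Sketch of the gain compensation for the x-to-neuron case. All post
# neurons share the same fixed gain here; with heterogeneous gains, the
# per-neuron values from sim.model.params[post].gain could be used to
# divide each row instead.
compensated_weights = weights1 / gain
# In model2, the n2n connection would then become:
# conn2 = nengo.Connection(pre2.neurons, post2.neurons, transform=compensated_weights)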
Expected behavior
If the connection weights were interpreted by Nengo identically in both cases (i.e., using an ensemble as the post object and using a neurons object as the post object), the code above should produce identical plots. However, because weights1 includes the gains and conn2 performs an additional multiplication by the neuron gains, what we see is that the output of post2 is scaled by whatever value gain is given.
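A quick numerical check of this scaling (a sketch appended to the reproduction script above, assuming the scaling direction described; the match is only approximate because of the bias nonlinearity):

# Check the scaling described above: the n2n output should be roughly
# the solver output multiplied by the fixed gain (0.5 here).
print(np.max(np.abs(sim2.data[p_out2] - gain * sim.data[p_out])))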
Versions
OS: WSL Ubuntu 18.04 on Windows 10
Python: Miniconda Python 3.8.5
Nengo: latest master
To illustrate the impact this has on learning rules, the following code (adapted from @arvoelke's notebook example by @tcstewar) shows the output of four almost identical Nengo networks (identical except for the connection between pre and post). Mathematically, each network is the same (so each plot should be identical), but because of the difference in what is contained within the connection weight matrix, the PES learning rule produces different outputs.
import numpy as np
import nengo
import matplotlib.pyplot as plt


def go(switch, learning_rate=1e-4, gain=1, seed=0, weights=True):
    with nengo.Network(seed=seed) as model:
        print(switch, weights)
        stim = nengo.Node(output=lambda t: np.sin(2 * np.pi * t))
        pre = nengo.Ensemble(100, 1, neuron_type=nengo.RectifiedLinear())
        post = nengo.Ensemble(
            100, 1, gain=nengo.dists.Choice([gain]), bias=nengo.dists.Uniform(0, 100),
            neuron_type=nengo.RectifiedLinear()
        )
        error = nengo.Node(size_in=1)
        nengo.Connection(stim, pre, synapse=None)
        if switch:
            conn = nengo.Connection(
                pre, post, solver=nengo.solvers.LstsqL2(weights=weights), transform=0,
            )
        else:
            if weights:
                transform = np.zeros((post.n_neurons, pre.n_neurons))
                conn = nengo.Connection(
                    pre.neurons, post.neurons, transform=transform,
                )
            else:
                transform = np.zeros((1, pre.n_neurons))
                conn = nengo.Connection(
                    pre.neurons, post, transform=transform,
                )
        conn.learning_rule_type = nengo.PES(learning_rate=learning_rate)
        nengo.Connection(post, error, synapse=None)
        nengo.Connection(stim, error, synapse=None, transform=-1)
        nengo.Connection(error, conn.learning_rule)
        p = nengo.Probe(post)
    with nengo.Simulator(model, progress_bar=False) as sim:
        sim.run_steps(1000)
    return sim.data[p]


plt.figure(figsize=(8, 8))
plt.suptitle("Probed output of post after 1s of learning", fontsize=16)
ax = None
for switch in [True, False]:
    for weights in [True, False]:
        ax = plt.subplot(2, 2, switch * 2 + weights + 1, sharey=ax)
        m = go(switch=switch, weights=weights, gain=0.5)
        plt.plot(m)
        plt.title('n2n=%s weights=%s' % (not switch, weights))
plt.tight_layout()
plt.show()
- Separates the gain computation when using weights=True with a solver.
- Behaviour of connection should now be consistent regardless of how the connection has been created (with or without solvers).

See issue #1639