
Regression Tutorial #1: random thresholds? #199

Answered by jeshraghian
rmazzier asked this question in Q&A


Great question! It hasn't actually been tested empirically with much rigour.

This paper by Perez-Nieves et al. demonstrated that randomly initialized decay rates outperform a global decay rate.

Our guess is that having a diverse set of dynamics amongst neurons can help model different types of data: a slow decay is good for long-range time dependencies, while a fast decay is better suited to short-term dependencies.

Applying similar logic to random thresholds, perhaps the weights attached to neurons with small thresholds will become stronger if particular features need high firing rates.
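If you want to experiment with this, here's a rough sketch of heterogeneous parameters in snnTorch. It assumes the `Leaky` neuron broadcasts per-neuron tensors for `beta` and `threshold`; the layer sizes and sampling ranges are just placeholders:

```python
import torch
import torch.nn as nn
import snntorch as snn

num_inputs, num_hidden = 784, 100  # placeholder sizes

# Heterogeneous dynamics: each neuron gets its own decay rate and threshold
beta = torch.rand(num_hidden)              # per-neuron decay rates in [0, 1)
threshold = torch.rand(num_hidden) + 0.5   # per-neuron thresholds in [0.5, 1.5)

fc = nn.Linear(num_inputs, num_hidden)
lif = snn.Leaky(beta=beta, threshold=threshold,
                learn_beta=True, learn_threshold=True)  # optionally learnable

mem = lif.init_leaky()                     # initialize membrane potential
x = torch.rand(1, num_inputs)              # dummy input for a single time step
spk, mem = lif(fc(x), mem)                 # spikes + updated membrane
```

Setting `learn_beta=True` / `learn_threshold=True` lets backprop fine-tune the randomly initialized per-neuron values during training, rather than leaving them fixed.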

Just theorizing - but hopefully this gives you some intuition on why it might be useful!

Answer selected by rmazzier