In order to fit a spatio-temporal receptive field of a neuron (for example, in response to a visual input), we may need to assume that the temporal and spatial contributions factorize, to avoid over-parametrization.
The problem with non-factorized receptive fields is that the input would be a tensor of shape $(T, M, N)$, with $T$ the number of time points and $M, N$ the number of spatial samples (or basis functions). This can be very high-dimensional and prone to over-fitting.
If we assume a spatio-temporal factorization, the parameter count drops from $TMN$ to $T + MN$, which is more manageable; however, the log-rate will look something like:
$$\log(\mu) = \mathbf{\alpha}^\top X \beta + \text{other predictors}$$
where $X \in \mathbb{R}^{T \times MN}$ is the stimulus matrix (renamed from $M$ to avoid clashing with the spatial dimension $M$).
This is a bilinear form in $\alpha \in \mathbb{R}^T$ and $\beta \in \mathbb{R}^{MN}$, i.e. the model is non-linear and the parameters interact. One way to fit it is by alternating gradient descent (alternating between $\alpha$ and $\beta$); with one factor held fixed, each sub-problem is linear and convex.
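A minimal sketch of this alternating scheme, assuming a Poisson observation model, NumPy, and simulated data. All names and sizes here (`T`, `MN`, `grad_steps`, the learning rate, etc.) are illustrative, not from any particular library:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: T temporal basis functions, MN = M*N spatial ones.
T, MN = 10, 25
n_samples = 500

# Stimulus for each sample, projected onto the bases: shape (n, T, MN).
X = rng.normal(size=(n_samples, T, MN))

# Simulate spike counts from a ground-truth factorized filter.
alpha_true = 0.3 * rng.normal(size=T)
beta_true = 0.3 * rng.normal(size=MN)
eta_true = np.einsum("t,ktm,m->k", alpha_true, X, beta_true)
y = rng.poisson(np.exp(eta_true))

def nll(a, b):
    # Poisson negative log-likelihood (up to a constant in y).
    eta = np.einsum("t,ktm,m->k", a, X, b)
    return np.mean(np.exp(eta) - y * eta)

def grad_steps(w, D, n_steps=20, lr=1e-3):
    # A few gradient steps on the convex Poisson GLM sub-problem
    # with fixed design matrix D and free weights w.
    for _ in range(n_steps):
        eta = D @ w
        w = w - lr * (D.T @ (np.exp(eta) - y)) / len(y)
    return w

alpha = 0.1 * rng.normal(size=T)
beta = 0.1 * rng.normal(size=MN)
nll0 = nll(alpha, beta)

for _ in range(100):
    # With beta fixed, the model is linear in alpha: design (n, T).
    D_a = np.einsum("ktm,m->kt", X, beta)
    alpha = grad_steps(alpha, D_a)
    # With alpha fixed, the model is linear in beta: design (n, MN).
    D_b = np.einsum("t,ktm->km", alpha, X)
    beta = grad_steps(beta, D_b)

nll_final = nll(alpha, beta)
```

Note that the factorization is only identifiable up to a scale swap ($c\alpha$, $\beta/c$ give the same rate), so one usually checks the loss or the outer product $\alpha\beta^\top$ rather than the factors themselves.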