Here are some thoughts for people who want to compare responses represented in a non-orthogonal basis (raised cosines, B-splines, M-splines, ...).
If the tuning functions are $f_i(x) = b_i(x) \beta_i$ for $i = 1, 2$, where $b_i$ is the vector of basis functions and $\beta_i$ the vector of coefficients, then one can compute the Hilbert-space L2 distance between the tuning functions from the coefficients alone; all we need to compute are the matrices $B_{ij} = \int b_i(x) b_j(x)^T dx$.
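Spelling out the one missing step: stacking the coefficients, the squared distance becomes a quadratic form,

$$
\|f_1 - f_2\|_{L^2}^2 = \int \left(b_1(x)\beta_1 - b_2(x)\beta_2\right)^2 dx
= \begin{pmatrix} \beta_1 \\ -\beta_2 \end{pmatrix}^T
\begin{pmatrix} B_{11} & B_{12} \\ B_{12}^T & B_{22} \end{pmatrix}
\begin{pmatrix} \beta_1 \\ -\beta_2 \end{pmatrix},
$$

so once the $B_{ij}$ are available, no function evaluations are needed at all.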
If the basis is shared between the two responses (which should be true for most cases I can think of), we could provide a method on nemos.basis.Basis that computes the integral $\int b(x)b(x)^T dx$.
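A minimal sketch of what such a method could look like, assuming a dense-grid quadrature (the name `gram_matrix`, the grid approach, and the duck-typed `evaluate` interface are all assumptions for illustration, not existing nemos API):

```python
import numpy as np

# Hypothetical sketch (NOT nemos API): approximate \int b(x) b(x)^T dx
# on [a, b] for any object exposing `evaluate(x) -> (n_points, n_basis)`.
def gram_matrix(basis, a=0.0, b=1.0, n_points=10**5):
    x = np.linspace(a, b, n_points)
    vals = basis.evaluate(x)            # (n_points, n_basis_funcs)
    return np.einsum("ti,tj->ij", vals, vals) * (x[1] - x[0])

# toy stand-in basis used only to exercise the sketch: monomials 1, x, x^2,
# whose exact Gram matrix on [0, 1] is the 3x3 Hilbert matrix
class MonomialBasis:
    def evaluate(self, x):
        return np.stack([x**0, x, x**2], axis=1)

G = gram_matrix(MonomialBasis())        # approximates [[1, 1/2, 1/3], ...]
```

In practice an analytic integral or a higher-order quadrature would be preferable, for the same reason noted in the sketched code below.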
Below is some sketched code exemplifying how this works, both with different basis functions and with a shared basis.
```python
import nemos as nmo
from scipy.integrate import simps
import numpy as np
import matplotlib.pyplot as plt

# define a grid of points for numerical integration
x = np.linspace(0, 1, 10**5)

# define two basis functions
basis1 = nmo.basis.RaisedCosineBasisLog(10)
basis2 = nmo.basis.BSplineBasis(8)
coeff1 = np.random.normal(size=10)
coeff2 = np.random.normal(size=8)

# define the tuning functions
func1 = lambda x: basis1.evaluate(x).dot(coeff1)
func2 = lambda x: basis2.evaluate(x).dot(coeff2)

# numerically compute the L2 distance between the tuning
# functions in Hilbert space
l2_integral_dist = lambda x: simps((func1(x) - func2(x))**2, dx=x[1] - x[0])

# compute the integrals of the outer products of the bases
# (should be done analytically or with a better numerical method)
B11 = np.einsum("ti,tj->ij", basis1.evaluate(x), basis1.evaluate(x)) * (x[1] - x[0])
B12 = np.einsum("ti,tj->ij", basis1.evaluate(x), basis2.evaluate(x)) * (x[1] - x[0])
B22 = np.einsum("ti,tj->ij", basis2.evaluate(x), basis2.evaluate(x)) * (x[1] - x[0])
B = np.block([[B11, B12], [B12.T, B22]])

# compute the distance based on the coefficients only;
# the discrepancy is due to the crude numerical approximation
# of the integrals of the basis products
coeff_stack = np.hstack((coeff1, -coeff2))
print("distance coeff based:", coeff_stack @ B @ coeff_stack)
print("distance integral based:", l2_integral_dist(x))
```
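As a sanity check that does not depend on nemos, the same block-matrix identity can be verified with two small monomial bases on $[0, 1]$, where every Gram entry $\int_0^1 x^a x^b dx = 1/(a+b+1)$ is known in closed form (the bases and coefficients here are made up for illustration):

```python
import numpy as np

# basis1 = {1, x}, basis2 = {x, x^2}; exact Gram entries are 1 / (a + b + 1)
exps1, exps2 = [0, 1], [1, 2]           # monomial exponents of the two bases
exps = exps1 + exps2
B = np.array([[1.0 / (a + b + 1) for b in exps] for a in exps])

coeff1 = np.array([1.0, -2.0])          # f1(x) = 1 - 2x
coeff2 = np.array([1.0, 1.0])           # f2(x) = x + x^2
coeff_stack = np.hstack((coeff1, -coeff2))

# coefficient-based squared L2 distance (exact up to float rounding)
dist_coeff = coeff_stack @ B @ coeff_stack

# brute-force squared L2 distance on a dense grid (rectangle rule)
x = np.linspace(0.0, 1.0, 10**5)
f1 = coeff1[0] + coeff1[1] * x
f2 = coeff2[0] * x + coeff2[1] * x**2
dist_num = np.sum((f1 - f2) ** 2) * (x[1] - x[0])
```

Here the two distances agree to the accuracy of the grid quadrature, with the coefficient-based one being exact.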
```python
# if you can assume that the basis is the same (often the case),
# computing the distance is equivalent to changing the metric
basis = nmo.basis.RaisedCosineBasisLog(10)
coeff1 = np.random.normal(size=10)
coeff2 = np.random.normal(size=10)

# define the tuning functions
func1 = lambda x: basis.evaluate(x).dot(coeff1)
func2 = lambda x: basis.evaluate(x).dot(coeff2)
l2_integral_dist = lambda x: simps((func1(x) - func2(x))**2, dx=x[1] - x[0])

B = np.einsum("ti,tj->ij", basis.evaluate(x), basis.evaluate(x)) * (x[1] - x[0])
print("distance coeff based:", (coeff1 - coeff2) @ B @ (coeff1 - coeff2))
print("distance integral based:", l2_integral_dist(x))
```
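One way to make the "change of metric" concrete: since the Gram matrix is symmetric positive definite, a Cholesky factor $B = L L^T$ turns the weighted distance $d^T B d$ into an ordinary Euclidean norm $\|L^T d\|^2$ of transformed coefficients. A minimal sketch, using the Hilbert matrix (the Gram matrix of monomials on $[0, 1]$) as a stand-in for the Gram matrix of a shared basis:

```python
import numpy as np

# Gram matrix of the monomials {1, x, x^2} on [0, 1] (3x3 Hilbert matrix),
# standing in for the Gram matrix B of any shared basis
n = 3
B = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
L = np.linalg.cholesky(B)               # B = L @ L.T

rng = np.random.default_rng(0)
d = rng.normal(size=n)                  # stand-in for coeff1 - coeff2

dist_metric = d @ B @ d                 # distance in the B-metric
dist_euclid = np.sum((L.T @ d) ** 2)    # Euclidean norm after transforming
```

The two quantities coincide, so comparing tuning functions in a shared basis is just a Euclidean comparison of linearly transformed coefficients.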
```python
# if the basis is orthogonal, B is diagonal (spherical for an
# orthonormal basis) and we revert to the norm of the
# coefficient difference
basis = nmo.basis.OrthExponentialBasis(10, range(1, 11))
B = np.einsum("ti,tj->ij", basis.evaluate(x), basis.evaluate(x)) * (x[1] - x[0])
plt.imshow(B)
plt.colorbar()
```
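For completeness, a self-contained example of the orthonormal case (Legendre polynomials rather than a nemos basis): the Gram matrix comes out as the identity, so the L2 distance reduces to the plain Euclidean distance between coefficient vectors.

```python
import numpy as np
from numpy.polynomial import legendre

# Legendre polynomials on [-1, 1] scaled by sqrt((2k + 1) / 2) are
# orthonormal, so their Gram matrix is the identity.
x = np.linspace(-1.0, 1.0, 10**5)
n = 5
vals = np.stack(
    [legendre.legval(x, np.eye(n)[k]) * np.sqrt((2 * k + 1) / 2) for k in range(n)],
    axis=1,
)
gram = np.einsum("ti,tj->ij", vals, vals) * (x[1] - x[0])
# gram is the identity up to the rectangle-rule quadrature error
```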