Recursive Least Squares with Exponential Forgetting #75
Code for up to 4x4 is there in C. Above 4x4 is not practical in reality, because we would use the rolling regression only when we are "quite confident" in the model and already have some meaningful interpretation of the betas, which we then re-estimate in online mode. I have never seen a trustworthy linear regression with more than 2-3 real variables, not counting dummies. (An interesting, but for now theoretical, question is dummy variables, for which we could monitor rolling t-stats as a signal that some binary factor becomes significant - there could be many of them. However, this usually works on large data and in exploratory mode, while refining the "true" meaningful model.) https://github.com/niswegmann/small-matrix-inverse
…erformance test vs the non-online algo. Also need weights, probably some multiplier that reduces weights in a similar way to exponential forgetting. But won't do the original exponential version because it depends on the starting point. (See the sketch below for what that textbook version looks like.)
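For reference, here is a minimal sketch of the textbook exponentially-weighted RLS step that the comment above is declining: the forgetting factor `lambda` discounts old observations, and the result does depend on the initial `P`. All names are illustrative, not this library's API, and the temporary arrays are kept for readability (a real non-allocating version would use stack or pooled buffers).

```csharp
// Sketch: one step of textbook RLS with exponential forgetting (factor lambda in (0, 1]).
// P approximates the inverse of the exponentially weighted X'X; beta holds current coefficients.
static void RlsStep(double[,] P, double[] beta, double[] x, double y, double lambda)
{
    int k = beta.Length;

    // Px = P * x
    var Px = new double[k];
    for (int i = 0; i < k; i++)
        for (int j = 0; j < k; j++)
            Px[i] += P[i, j] * x[j];

    // denom = lambda + x' P x
    double denom = lambda;
    for (int i = 0; i < k; i++) denom += x[i] * Px[i];

    // gain g = P x / denom
    var g = new double[k];
    for (int i = 0; i < k; i++) g[i] = Px[i] / denom;

    // prediction error for the new observation
    double err = y;
    for (int i = 0; i < k; i++) err -= x[i] * beta[i];

    // beta += g * err
    for (int i = 0; i < k; i++) beta[i] += g[i] * err;

    // P = (P - g (P x)') / lambda  (rank-1 downdate, then rescale; P is symmetric)
    for (int i = 0; i < k; i++)
        for (int j = 0; j < k; j++)
            P[i, j] = (P[i, j] - g[i] * Px[j]) / lambda;
}
```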
The state is not cached; we recalculate the lagged xpxi on every move since it is done via zipn, but that is straightforward to fix via a manual impl of BindCursor. The math is the same.
But we do only about twice the work of an M-sized update (M = number of X variables), not an N-sized one, so this is effectively online already.
Review dotnet/corefx#31779
https://en.wikipedia.org/wiki/Sherman%E2%80%93Morrison_formula
https://en.wikipedia.org/wiki/Woodbury_matrix_identity
In the original implementation here only (X
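These identities are what make the per-observation update cheap: adding (or dropping) one row x changes X'X by a rank-1 term x·x', so the cached xpxi can be corrected in O(M²) instead of being re-inverted. A minimal sketch of the Sherman–Morrison update applied to xpxi follows; names and the in-place layout are assumptions, not this repo's API, and the temporary array would be replaced by a reusable buffer in a non-allocating version.

```csharp
// Sketch: Sherman–Morrison rank-1 update of xpxi = (X'X)^-1 after adding one row x.
// (A + x x')^-1 = A^-1 - (A^-1 x)(x' A^-1) / (1 + x' A^-1 x)
// Pass sign = -1 to remove a row (the downdate used by a rolling window).
static void RankOneUpdate(double[,] xpxi, double[] x, int sign = +1)
{
    int k = x.Length;

    // v = xpxi * x  (xpxi is symmetric, so x' * xpxi equals v')
    var v = new double[k];
    for (int i = 0; i < k; i++)
        for (int j = 0; j < k; j++)
            v[i] += xpxi[i, j] * x[j];

    // denom = 1 + sign * x' v
    double denom = 1.0;
    for (int i = 0; i < k; i++) denom += sign * x[i] * v[i];

    // xpxi -= sign * v v' / denom
    for (int i = 0; i < k; i++)
        for (int j = 0; j < k; j++)
            xpxi[i, j] -= sign * v[i] * v[j] / denom;
}
```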
Rewrite this R implementation in efficient non-allocating .NET code https://gist.github.com/buybackoff/bfeb9c8959157dbf86310d383c93f00d
NB: since at each step the matrix sizes are limited by the number of X variables, a manual implementation of mmult/minverse could be cheaper than calling any "optimized" library.
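To illustrate the "manual is cheaper" point, here is a hedged sketch of a closed-form 2x2 inverse written into a caller-provided span: no allocations, no library call, and no general-purpose pivoting overhead. Analogous closed forms exist up to 4x4 (see the small-matrix-inverse repo linked above); the method name, layout and tolerance here are illustrative assumptions.

```csharp
// Sketch: non-allocating closed-form 2x2 inverse; a and inv are row-major [a00, a01, a10, a11].
static bool Invert2x2(ReadOnlySpan<double> a, Span<double> inv)
{
    double det = a[0] * a[3] - a[1] * a[2];
    if (Math.Abs(det) < 1e-15) return false; // treat as singular (tolerance is illustrative)

    double r = 1.0 / det;
    inv[0] =  a[3] * r;
    inv[1] = -a[1] * r;
    inv[2] = -a[2] * r;
    inv[3] =  a[0] * r;
    return true;
}
```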