Update readme comparison chart #113
There are three major differences now:

1. We should highlight that we have a callable empirical covariance; scikit-learn does not support this.
2. scikit-learn discourages the use of random weights for sparse support, a feature previously supported in their randomized_l1 linear regression module. See previous implementation here. Thus the README chart should no longer say "random lasso is available for the regular lasso". The random lasso by re-weighting predictors is now available only through an auxiliary package: https://github.com/scikit-learn-contrib/stability-selection/blob/master/stability_selection/randomized_lasso.py. I'm not sure anything else needs to change.
3. They only permit weighted/adaptive regularization by applying the weights to the predictor matrix X. This will not always be equivalent to using a vector or matrix of penalties in place of the scalar in the l1 term.
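For the plain lasso, the rescaling trick in point 3 does work: divide each column of X by its weight, solve with a scalar penalty, and rescale the coefficients back. A minimal sketch (the data and the weight vector `w` here are made up for illustration):

```python
import numpy as np
from sklearn.linear_model import Lasso

# Toy data (illustrative only).
rng = np.random.default_rng(0)
n, p = 200, 5
X = rng.standard_normal((n, p))
y = X @ np.array([2.0, 0.0, -1.5, 0.0, 1.0]) + 0.1 * rng.standard_normal(n)

w = np.array([1.0, 2.0, 0.5, 1.5, 1.0])  # hypothetical per-feature penalty weights
alpha = 0.1

# Weighted lasso  min (1/2n)||y - X b||^2 + alpha * sum_j w_j |b_j|
# via column rescaling: solve the plain lasso on X / w, then undo the scaling.
model = Lasso(alpha=alpha, fit_intercept=False, tol=1e-12, max_iter=100_000)
model.fit(X / w, y)
beta = model.coef_ / w

# KKT check of the *weighted* problem: for active coordinates,
# (1/n) X_j^T (y - X beta) must equal alpha * w_j * sign(beta_j).
grad = X.T @ (y - X @ beta) / n
active = beta != 0
assert np.allclose(grad[active], alpha * w[active] * np.sign(beta[active]), atol=1e-5)
assert np.all(np.abs(grad[~active]) <= alpha * w[~active] + 1e-5)
```

The point of the discussion is that this identity is special to the coordinate-wise (diagonal-weight) lasso; it does not carry over to analysis-type penalties or to the graphical lasso.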
But we know from Miki Elad's paper that these are not equivalent problems: the former is weighted regularization in the analysis space and the latter is weighted regularization in the synthesis space.
The analysis version also goes by the name Generalized Lasso.
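To make the distinction concrete (generic notation, not tied to either codebase): the synthesis form with a diagonal, invertible weight matrix reduces to column rescaling, while the analysis (generalized lasso) form does not.

```latex
% Synthesis form: rescale the design, keep a scalar penalty on the coefficients
\min_{\gamma}\ \tfrac{1}{2}\|y - X W^{-1}\gamma\|_2^2 + \lambda \|\gamma\|_1,
\qquad \beta = W^{-1}\gamma,
% which is equivalent to the coordinate-wise weighted lasso
\min_{\beta}\ \tfrac{1}{2}\|y - X\beta\|_2^2 + \lambda \|W\beta\|_1
% only when W is diagonal and invertible.
%
% Analysis (generalized lasso) form, with a general analysis operator \Omega:
\min_{\beta}\ \tfrac{1}{2}\|y - X\beta\|_2^2 + \lambda \|\Omega\beta\|_1
% When \Omega is non-square or non-invertible, no rescaling of X recovers this problem.
```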
Our README is still correct that their graphical_lasso does not support adaptivity, unless one feeds their algorithm re-weighted columns of the data matrix X. The latter can mimic adaptivity for nodewise/neighborhood selection (i.e., lasso of one node against all other variables), but it is not equivalent to the weighted graphical lasso formulation.
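A sketch of what column rescaling *can* do, assuming a hypothetical per-edge penalty-weight matrix `W` and toy data: the per-node regressions reuse the lasso rescaling trick, so nodewise selection inherits adaptivity, but the result is not the minimizer of the weighted graphical lasso objective.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Toy multivariate sample (illustrative only).
rng = np.random.default_rng(1)
n, p = 300, 4
Z = rng.standard_normal((n, p))
Z[:, 1] += 0.8 * Z[:, 0]              # induce one conditional dependency

W = rng.uniform(0.5, 1.5, size=(p, p))  # hypothetical penalty-weight matrix, W[i, j] > 0
alpha = 0.1

# Nodewise (neighborhood) selection: regress each node on all other variables,
# mimicking per-edge adaptive penalties by rescaling the predictor columns.
adjacency = np.zeros((p, p), dtype=bool)
for j in range(p):
    others = [k for k in range(p) if k != j]
    Xj = Z[:, others] / W[j, others]   # column rescaling = weighted l1 penalty
    coef = Lasso(alpha=alpha, fit_intercept=False).fit(Xj, Z[:, j]).coef_
    beta = coef / W[j, others]         # back to the original scale; support unchanged
    adjacency[j, others] = beta != 0

# Symmetrize with the OR rule. This yields a graph estimate, but it is *not*
# the solution of the weighted graphical lasso objective with penalty matrix W.
adjacency = adjacency | adjacency.T
```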
We might want to expand our documentation of the differences between us and sklearn. Support for randomized lasso has been removed from scikit-learn: they consider it too unreliable, and they take rescaling the design matrix to be equivalent to putting adaptive penalties in the regularizer. But these are not equivalent operations; I suspect the difference is related to sparsity vs. co-sparsity. This gives our implementation an advantage.
@mnarayan can you provide me with the changes you desire and I'll update