sklearn.linear_model has all the ingredients to implement the "pseudolikelihood" regression version of inverse covariance estimation, the one that doesn't require a wrapper block coordinate descent. It is essentially p uncoupled high-dimensional regressions, plus some extra steps to enforce symmetry and positive definiteness via Cholesky decompositions.
It would be easy for us to implement a pseudolikelihood estimator that does this properly, using sklearn.linear_model functions as the internal solver. This can serve as a comparison to the log-likelihood-type estimators that QUIC-dirty gives us for mixed-norm penalties.
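A minimal sketch of what this could look like, assuming a Meinshausen-Bühlmann-style nodewise approach: each column is regressed on the others with `sklearn.linear_model.Lasso`, the coefficients are mapped to precision-matrix entries, the result is symmetrized, and positive definiteness is enforced by adding a ridge until a Cholesky factorization succeeds. The function names, the fixed `alpha`, and the averaging symmetrization rule are all placeholder choices here, not a proposed final design.

```python
import numpy as np
from sklearn.linear_model import Lasso

def pseudolikelihood_precision(X, alpha=0.1):
    """Sketch: nodewise l1 regressions -> precision matrix estimate."""
    n, p = X.shape
    X = X - X.mean(axis=0)
    B = np.zeros((p, p))        # B[j, k]: coef of X_k when regressing X_j
    resid_var = np.zeros(p)     # residual variance of each nodewise fit
    for j in range(p):
        others = np.delete(np.arange(p), j)
        lasso = Lasso(alpha=alpha, fit_intercept=False)
        lasso.fit(X[:, others], X[:, j])
        B[j, others] = lasso.coef_
        r = X[:, j] - X[:, others] @ lasso.coef_
        resid_var[j] = (r @ r) / n
    # Map regressions to precision entries:
    # Theta[j, j] = 1 / sigma_j^2, Theta[j, k] = -B[j, k] / sigma_j^2
    Theta = -B / resid_var[:, None]
    np.fill_diagonal(Theta, 1.0 / resid_var)
    # Symmetrize (averaging; min/max rules are the other common choices)
    Theta = 0.5 * (Theta + Theta.T)
    return _make_pd(Theta)

def _make_pd(M, eps=1e-6):
    """Add ridge until a Cholesky factorization succeeds (so M is PD)."""
    tau = 0.0
    I = np.eye(M.shape[0])
    while True:
        try:
            np.linalg.cholesky(M + tau * I)
            return M + tau * I
        except np.linalg.LinAlgError:
            tau = max(2 * tau, eps)
```

The uncoupled loop over `j` is what makes this cheap relative to the block-coordinate-descent wrappers: each fit is an ordinary Lasso that sklearn already solves efficiently, and the p fits are embarrassingly parallel.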
To be fleshed out more