
Over-penalization after 2022 March Update #26

Open · wysjdy0511 opened this issue Apr 22, 2022 · 2 comments

wysjdy0511 commented Apr 22, 2022

Hi Dr. Nicolson, thanks for the new upgrade.

Regarding my code below:

library(BigVAR)
Model1 <- constructModel(as.matrix(z), p = 1, struct = "Basic", gran = c(20, 10), cv = "Rolling")
ENET <- cv.BigVAR(Model1)

Before the 2022 March update, this code worked well for both the Elastic Net and Lasso methods, and both yielded beta matrices with an appropriate level of sparsity. After the update, the same code produces different results: the Elastic Net estimate still works fine, but the Lasso estimate tends to over-penalize, setting 99.999% of the beta coefficients to 0.

This over-penalization also seems to affect the two methods newly added in this upgrade, MCP and SCAD; both likewise produce extremely sparse beta matrices.

I suspect the recent upgrade changed some code in cv.BigVAR, particularly around methods such as Lasso. I emailed you earlier with my data attached, in case you want to check it yourself.

Thanks a lot for your work!

wbnicholson (Owner) commented

I slightly modified the construction of the penalty grid so it may require adjusting the granularity parameter to achieve a comparable level of sparsity.
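
For example, as a minimal sketch (the value 500 below is only a placeholder, not a recommendation): increasing the first element of gran deepens the penalty grid, extending it further below the maximum lambda so that cross-validation can select smaller penalties.

library(BigVAR)
# First element of gran: depth of the penalty grid (how far it extends
# below the maximum lambda); second element: number of gridpoints.
Model1 <- constructModel(as.matrix(z), p = 1, struct = "Basic",
                         gran = c(500, 10), cv = "Rolling")
Lasso_fit <- cv.BigVAR(Model1)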

I don't think I received your email. Could you send your data to wbn8@cornell.edu? I'll take a look at the specific issue.

wysjdy0511 (Author) commented Apr 22, 2022

> I slightly modified the construction of the penalty grid so it may require adjusting the granularity parameter to achieve a comparable level of sparsity.
>
> I don't think I received your email. Could you send your data to wbn8@cornell.edu? I'll take a look at the specific issue.

Thanks for the timely response, Will. Yes, I tried multiple granularity settings, from (20, 10) and (50, 10) up to (150, 10). That makes the results marginally better, but the estimates are still extremely sparse.

I just emailed you again, but there's no hurry. I'd really appreciate you taking a look whenever you have time.
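
In the meantime, I may try bypassing the automatic grid construction and supplying the penalty values directly, something like the sketch below (assuming the ownlambdas argument of constructModel works as documented; the grid bounds here are placeholders):

library(BigVAR)
# With ownlambdas = TRUE, gran is interpreted as a user-supplied vector of
# penalty values rather than grid settings. The bounds are placeholders.
lambda_grid <- exp(seq(log(10), log(1e-4), length.out = 20))
Model1 <- constructModel(as.matrix(z), p = 1, struct = "Basic",
                         gran = lambda_grid, ownlambdas = TRUE, cv = "Rolling")
Lasso_fit <- cv.BigVAR(Model1)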
