Issue on page /20-Plug-and-Play-Estimators.html #350

Open
SebKrantz opened this issue Sep 21, 2023 · 0 comments
SebKrantz commented Sep 21, 2023

Thanks for creating a great resource! The suggested pseudo outcome for the continuous treatment case is

y* = (y - y_pred) * (t - t_pred) / (t - t_pred)^2 = (y - y_pred) / (t - t_pred)

So it is the same as the R-learner target, just without the weights. I would like to understand why this final simplification step is not made in the book, and why you instead use (y - y_pred) * (t - t_pred), which only captures the sign of the effect.
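The relationship between the residual-product pseudo-outcome and the residual-ratio (R-learner) pseudo-outcome can be illustrated with a small simulation. Everything here is an assumption of the sketch: the data-generating process, the oracle nuisance predictions (set to the true conditional means), and the constant treatment effect tau:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
tau = 2.0  # true constant treatment effect (assumed for this sketch)

t = rng.normal(size=n)                 # continuous treatment
y = tau * t + rng.normal(size=n)       # outcome

# Oracle nuisance predictions for this toy DGP: E[t|x] = 0, E[y|x] = 0
t_pred = np.zeros(n)
y_pred = np.zeros(n)

res_y = y - y_pred
res_t = t - t_pred

# Residual product (as in the book): only its sign is informative
y_star_product = res_y * res_t

# Residual ratio (R-learner pseudo-outcome after simplification)
y_star_ratio = res_y / res_t

# A weighted mean of the ratio, with weights (t - t_pred)^2,
# recovers tau -- the weights are what the simplification drops
w = res_t ** 2
tau_hat = np.sum(w * y_star_ratio) / np.sum(w)
print(tau_hat)  # close to 2.0
```

Note that the weighted mean of the ratio collapses algebraically back to sum(res_t * res_y) / sum(res_t^2), i.e. the OLS slope of the outcome residual on the treatment residual, which is why dropping the weights changes the estimator.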

I am also wondering about the following: weighted linear regression can be done by multiplying both X and y by ws = sqrt(w), where w are the weights, i.e. Xw = X * ws and yw = y * ws, so that beta = inv(Xw'Xw) * Xw'yw gives the weighted OLS coefficients. Thus, with a linear final stage, the R-learner simply uses y - y_pred as the outcome and Xw as the predictor matrix. I wonder whether this would also be a better approach for non-linear final stages that don't support sample weights, i.e. using Xw to predict y - y_pred?
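The sqrt-weight equivalence referred to above is easy to verify numerically. This is a minimal sketch with made-up data and weights, comparing the direct weighted least squares solution against ordinary OLS on sqrt-weight-scaled inputs:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 500, 3
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -2.0, 0.5])   # assumed coefficients for this sketch
y = X @ beta_true + rng.normal(size=n)
w = rng.uniform(0.5, 2.0, size=n)        # arbitrary positive weights

# Direct weighted least squares: beta = (X' W X)^-1 X' W y
W = np.diag(w)
beta_wls = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# Equivalent: scale rows of X and y by sqrt(w), then run plain OLS
ws = np.sqrt(w)
Xw = X * ws[:, None]
yw = y * ws
beta_scaled = np.linalg.solve(Xw.T @ Xw, Xw.T @ yw)

print(np.allclose(beta_wls, beta_scaled))  # True: the two solutions coincide
```

Whether feeding Xw (but the unweighted residual outcome) into a non-linear final stage behaves well is exactly the open question posed here; the equivalence above only holds for the linear case, where the scaling enters the normal equations symmetrically.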
