Overlap assumption and Calibration test #1387

Open
FedericaPetru opened this issue Jan 31, 2024 · 1 comment
@FedericaPetru

Dear Team,

I am working with the grf package to study the effect of different shocks on consumption and I have two questions I wanted to ask you.

  1. Regarding the overlap assumption, I understand that, to avoid a deterministic treatment assignment, the estimated propensity scores should not be too close to one or zero.
    My question is: is there a specific accepted range? I am using the dummy variable drought as treatment, and the W.hat values range from 0.250 to 0.285, as shown in the attached picture "W.hat.png". I was wondering whether this range is too narrow, whether it is still consistent with overlap, or whether there is an issue I am not considering.

  2. Regarding the calibration test, according to the rule: a coefficient of 1 for mean.forest.prediction suggests that the mean forest prediction is correct, and a coefficient of 1 for differential.forest.prediction additionally suggests that the heterogeneity estimates from the forest are well calibrated.
    My question is again about the accepted range: how far from the ideal value of 1 can a coefficient be while still asserting that the prediction is correct and the forest is well calibrated? Is a result of 1.5 or 0.6 considered too far from the ideal benchmark? (A minimal sketch of both checks is included below.)
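
For reference, here is a minimal sketch of both checks on simulated data. The data, the forest object `cf`, and the variable names are purely illustrative, not from my actual analysis:

```r
library(grf)

# Illustrative data only: a binary shock W and an outcome Y.
n <- 2000; p <- 10
X <- matrix(rnorm(n * p), n, p)
W <- rbinom(n, 1, 0.27)
Y <- pmax(X[, 1], 0) * W + X[, 2] + rnorm(n)

cf <- causal_forest(X, Y, W)

# 1) Overlap: inspect the estimated propensity scores. The concern is values
#    piling up near 0 or 1, not a narrow range per se.
summary(cf$W.hat)
hist(cf$W.hat, xlab = "Estimated propensity score W.hat", main = "Overlap check")

# 2) Calibration: coefficients near 1 for mean.forest.prediction and
#    differential.forest.prediction support well-calibrated predictions.
test_calibration(cf)
```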

Thank you in advance for your support.
Federica

[Attachment: W.hat.png]

@erikcs
Member

erikcs commented Feb 21, 2024

Hi @FedericaPetru, 1) the function average_treatment_effect(forest) will give you a warning if overlap seems to be an issue. 2) The output gives you a standard error you could use to construct a confidence interval.
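
A sketch of how these two suggestions could look in practice, assuming a fitted causal forest `cf` (the column indexing below assumes the usual coeftest layout, with the estimate in column 1 and the standard error in column 2):

```r
# 1) Overlap: average_treatment_effect() warns when the estimated propensities
#    are extreme enough that some effects may not be well identified.
average_treatment_effect(cf)

# 2) Calibration: test_calibration() reports each coefficient with a standard
#    error; a rough 95% confidence interval is estimate +/- 1.96 * std. error.
calib <- test_calibration(cf)
estimate <- calib[, 1]
std.error <- calib[, 2]
cbind(lower = estimate - 1.96 * std.error,
      upper = estimate + 1.96 * std.error)
```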
