
Expand set of evaluation metrics #21

Open
hhegre opened this issue Oct 21, 2022 · 0 comments
hhegre commented Oct 21, 2022

Write a function that computes and tabulates/visualizes an expanded set of standard evaluation metrics for the fatality predictions.

The candidates I am considering are:

- RMSE based on log(Y+1)
- RMSE based on Y
- RMSE based on log(Y+c), where c is a constant other than 1 (0.1, 0.01, ...)
- RMSE for non-zero actuals, based on log(Y+1)
- RMSE for zero actuals, based on log(Y+1)
- Average precision for predictions collapsed into categories
- Some calibration metrics from the prediction competition
- Some metrics based on aggregate data (e.g. distance from the global mean)

-- if anyone has other suggestions, please shout out. Also, I would like to get hold of some of the presentation code from the prediction competition.
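A minimal sketch of how the RMSE variants listed above could be computed and tabulated in one function. This assumes NumPy arrays of non-negative fatality counts; the function name, signature, and the default value of `c` are placeholders for illustration, not the project's API:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error."""
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

def fatality_metrics(y_true, y_pred, c=0.01):
    """Return a dict of candidate RMSE variants for fatality predictions.

    y_true, y_pred: non-negative actual and predicted fatality counts.
    c: alternative offset for the log transform (0.01 is an arbitrary default).
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    nonzero = y_true > 0

    metrics = {
        "rmse_log1p": rmse(np.log1p(y_true), np.log1p(y_pred)),   # RMSE on log(Y+1)
        "rmse_raw": rmse(y_true, y_pred),                         # RMSE on Y
        "rmse_log_c": rmse(np.log(y_true + c), np.log(y_pred + c)),  # RMSE on log(Y+c)
    }
    # Split by whether the actual value is zero, both on the log(Y+1) scale.
    if nonzero.any():
        metrics["rmse_log1p_nonzero"] = rmse(np.log1p(y_true[nonzero]),
                                             np.log1p(y_pred[nonzero]))
    if (~nonzero).any():
        metrics["rmse_log1p_zero"] = rmse(np.log1p(y_true[~nonzero]),
                                          np.log1p(y_pred[~nonzero]))
    return metrics
```

The returned dict could then be passed to a pandas DataFrame or a plotting routine for the tabulation/visualization step; the average-precision and calibration candidates would need actuals collapsed into categories first, so they are left out of this sketch.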

@hhegre hhegre self-assigned this Oct 21, 2022