
Add loss as an evaluation metric #34

Open
dan-bishopfox opened this issue Apr 16, 2019 · 3 comments
Labels
enhancement (New feature or request)

Comments

@dan-bishopfox
Member

Return and print the average loss against the evaluation set.

@dan-bishopfox added the enhancement label on Apr 16, 2019
@dan-bishopfox
Member Author

Perhaps even a loss distribution? Like, a loss histogram might be neat and helpful.
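
A rough sketch of what such a histogram could look like, assuming per-sample binary cross-entropy losses; the data and helper function below are illustrative, not from the project:

```python
import numpy as np
import matplotlib.pyplot as plt

def per_sample_bce(y_true, y_pred, eps=1e-7):
    """Binary cross-entropy averaged over labels, one value per sample."""
    y_pred = np.clip(y_pred, eps, 1 - eps)
    losses = -(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))
    return losses.mean(axis=1)

# Made-up evaluation outputs: rows are samples, columns are binary labels.
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=(100, 5))
y_pred = rng.random(size=(100, 5))

plt.hist(per_sample_bce(y_true, y_pred), bins=20)
plt.xlabel("Per-sample loss")
plt.ylabel("Count")
plt.title("Evaluation loss distribution")
plt.show()
```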

@the-bumble
Contributor

Can you provide more information on this issue? We already compute the hamming_loss and report it as Overall Binary Accuracy.

Is loss defined as the set of elements where the ground truth and predictions disagree? If so, wouldn't a histogram just be binary, true or false, for each element in the set?
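
For reference, scikit-learn's hamming_loss on multi-label indicator matrices counts the fraction of individual labels that are wrong; a small sketch with made-up data:

```python
import numpy as np
from sklearn.metrics import hamming_loss

y_true = np.array([[1, 0, 1],
                   [0, 1, 0]])
y_pred = np.array([[1, 1, 1],
                   [0, 1, 0]])

# One of the six labels is wrong here.
print(hamming_loss(y_true, y_pred))      # 0.1666...
# "Overall Binary Accuracy" would then be its complement.
print(1 - hamming_loss(y_true, y_pred))  # 0.8333...
```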

@dan-bishopfox
Member Author

It's binary cross-entropy. I think this scikit-learn function should be it?

https://scikit-learn.org/stable/modules/generated/sklearn.metrics.log_loss.html

The multi-label loss is a bit of a lesser-used scenario, so maybe it's not the right one.
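
A minimal sketch of how that function could be applied here, assuming `y_true` is the evaluation set's binary indicator matrix and `y_pred` the predicted probabilities (both names are placeholders). Flattening treats each (sample, label) pair as an independent binary prediction, which sidesteps the multi-label caveat:

```python
import numpy as np
from sklearn.metrics import log_loss

# Made-up evaluation set: 4 samples, 3 binary labels each.
y_true = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 1, 0],
                   [0, 0, 1]])
# Predicted probabilities from the model, same shape.
y_pred = np.array([[0.9, 0.2, 0.8],
                   [0.1, 0.7, 0.3],
                   [0.8, 0.6, 0.4],
                   [0.3, 0.2, 0.9]])

# Flatten so each (sample, label) pair counts as one binary prediction;
# log_loss then returns the average binary cross-entropy.
avg_loss = log_loss(y_true.ravel(), y_pred.ravel())
print(f"Average evaluation loss: {avg_loss:.4f}")
```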
