
Additions to default metrics? #489

Open
rawanmahdi opened this issue Jun 21, 2023 · 0 comments · May be fixed by #496
Labels
enhancement New feature or request

Comments

@rawanmahdi

Feature request

It would be nice to be able to access precision, recall, and F1 scores as default metrics, or to support a classification-report-style output.

What is the expected behavior?
Compute precision, recall, and F1 scores from model predictions on the dataset, either during or after training.
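
For concreteness, a minimal sketch of the requested computation for binary labels (the array names and helper function are hypothetical, not part of this library):

```python
import numpy as np

def precision_recall_f1(y_true: np.ndarray, y_pred: np.ndarray):
    """Precision, recall, and F1 for binary 0/1 labels."""
    tp = np.sum((y_pred == 1) & (y_true == 1))  # true positives
    fp = np.sum((y_pred == 1) & (y_true == 0))  # false positives
    fn = np.sum((y_pred == 0) & (y_true == 1))  # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```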

What is motivation or use case for adding/changing the behavior?
When working with imbalanced datasets, metrics such as accuracy may conceal the true behaviour of the model; F1 scores tend to be more informative, as the example below shows.
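
For example (illustrative numbers, not taken from this project): a classifier that always predicts the majority class on a 95/5 split scores 0.95 accuracy, while its F1 on the minority class is 0:

```python
from sklearn.metrics import accuracy_score, f1_score

y_true = [0] * 95 + [1] * 5   # imbalanced ground truth: 95 negatives, 5 positives
y_pred = [0] * 100            # degenerate majority-class predictor

print(accuracy_score(y_true, y_pred))             # 0.95 -- looks strong
print(f1_score(y_true, y_pred, zero_division=0))  # 0.0  -- reveals the failure
```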

How should this be implemented in your opinion?
Similar to sklearn's classification report: computed on the test data after training, or exposed as a tracked metric during training.
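
A rough sketch of the post-training path, calling sklearn directly; `model`, `test_loader`, and the `predict` call are hypothetical stand-ins for whatever this library actually exposes:

```python
from sklearn.metrics import classification_report

# After training: gather predictions over the held-out test set.
y_true, y_pred = [], []
for inputs, labels in test_loader:        # hypothetical test iterator
    y_pred.extend(model.predict(inputs))  # hypothetical predict call
    y_true.extend(labels)

# Per-class precision, recall, F1, and support in one table.
print(classification_report(y_true, y_pred))
```

For the during-training case, the same counts (TP/FP/FN) could be accumulated per batch and the scores reported at the end of each epoch.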

Are you willing to work on this yourself?
Yes.

@rawanmahdi rawanmahdi added the enhancement New feature or request label Jun 21, 2023
@rawanmahdi rawanmahdi linked a pull request Jul 6, 2023 that will close this issue