Add accuracy binary classification objective #584
Conversation
Codecov Report

```
@@           Coverage Diff            @@
##  improved_objectives   #584   +/-  ##
========================================
  Coverage      98.95%  98.96%  +0.01%
========================================
  Files            132     133      +1
  Lines           4504    4532     +28
========================================
+ Hits            4457    4485     +28
  Misses            47      47
```

Continue to review full report at Codecov.
Looks good! Please add unit test coverage. Actually, looking now, I see that on master we don't have direct coverage of our objectives from standard_metrics.py. If that's correct, let's create a new unit test file and cover this new metric there, because it's important to have direct coverage for all our objectives.
Also, let's freeze this until we've merged improved_objectives to master, which I hope will be today!
@dsherry Yeah, so far the objectives have only been covered indirectly, through their usage in AutoML. We might have talked about this before, but I'm not 100% sure what helpful unit test coverage of the objective functions would look like, as opposed to just "testing that sklearn is doing what it claims to be doing."
Yeah, pretty much! We need coverage of each objective so that a) if we alter the implementation of an objective to do something more complicated than calling sklearn, we know it conforms to the same standard as before, and b) in the unlikely but not impossible event that sklearn ships a bug, or our usage of their code somehow produces a bug, we catch it. A great example is #588: we should have unit test coverage for precision and f1 which asserts that the edge case described there has the behavior we expect, and that the warning suppression works.
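A minimal sketch of what such direct objective tests could look like. Note these function names (`accuracy_objective`, `precision_objective`) and the test cases are hypothetical stand-ins, not the actual evalml objective classes from standard_metrics.py; the point is asserting known outputs on fixed inputs, including the zero-positive-predictions edge case from #588:

```python
import warnings

import numpy as np
from sklearn.metrics import accuracy_score, precision_score


# Hypothetical wrappers standing in for the objective classes
# defined in standard_metrics.py.
def accuracy_objective(y_true, y_pred):
    return accuracy_score(y_true, y_pred)


def precision_objective(y_true, y_pred):
    with warnings.catch_warnings():
        # Suppress the undefined-metric warning sklearn emits when
        # there are no positive predictions.
        warnings.simplefilter("ignore")
        return precision_score(y_true, y_pred, zero_division=0)


def test_accuracy_known_values():
    y_true = np.array([0, 1, 1, 0])
    # Perfect predictions score 1.0; half-right predictions score 0.5.
    assert accuracy_objective(y_true, y_true) == 1.0
    assert accuracy_objective(y_true, np.array([0, 1, 0, 1])) == 0.5


def test_precision_no_positive_predictions():
    # Edge case: no positive predictions. With zero_division=0 the
    # score is defined as 0.0 and no warning should escape.
    y_true = np.array([0, 1, 0, 1])
    y_pred = np.array([0, 0, 0, 0])
    assert precision_objective(y_true, y_pred) == 0.0
```

Tests like these pin each objective to known outputs, so a regression surfaces whether it comes from our code or from an sklearn upgrade.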
Closing in favor of #624
Closes #294
(Updating base branch from #559)