
Add binary classification accuracy objective #624

Merged: angela97lin merged 5 commits into master from 294_add_accuracy_objective on Apr 13, 2020

Conversation

angela97lin (Contributor) commented:

Closes #294

Continuation of #584, but using master as the new base (switching the base in #584 caused a lot of weird merge issues).


codecov bot commented Apr 13, 2020

Codecov Report

Merging #624 into master will increase coverage by 0.01%.
The diff coverage is 100.00%.


@@           Coverage Diff            @@
##           master     #624    +/-   ##
========================================
+ Coverage   98.95%   98.96%   +0.01%
========================================
  Files         132      133       +1
  Lines        4504     4534      +30
========================================
+ Hits         4457     4487      +30
  Misses         47       47
| Impacted Files | Coverage Δ |
|---|---|
| evalml/objectives/__init__.py | 100.00% <ø> (ø) |
| evalml/objectives/utils.py | 94.44% <ø> (ø) |
| evalml/exceptions/__init__.py | 100.00% <100.00%> (ø) |
| evalml/exceptions/exceptions.py | 100.00% <100.00%> (ø) |
| evalml/objectives/standard_metrics.py | 99.51% <100.00%> (+0.02%) ⬆️ |
| evalml/tests/objective_tests/test_objectives.py | 96.29% <100.00%> (ø) |
| ...lml/tests/objective_tests/test_standard_metrics.py | 100.00% <100.00%> (ø) |

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update f1cd43a...b9b4e9a.

```python
from sklearn import metrics
from evalml.exceptions import DimensionMismatchError

def objective_function(self, y_predicted, y_true):
    if len(y_predicted) == 0 or len(y_true) == 0:  # guard reconstructed; the hunk begins mid-statement
        raise ValueError("Length of inputs is 0")
    if len(y_predicted) != len(y_true):
        raise DimensionMismatchError("Inputs have mismatched dimensions: y_predicted has shape {}, y_true has shape {}".format(len(y_predicted), len(y_true)))
    return metrics.accuracy_score(y_true, y_predicted)
```
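
For context, a minimal usage sketch of the new objective, assuming the PR exports it as AccuracyBinary from evalml.objectives (the class name is not visible in this hunk):

```python
# Hypothetical usage; AccuracyBinary is an assumed export name.
from evalml.objectives import AccuracyBinary

objective = AccuracyBinary()
# Three of the four predictions match the labels, so accuracy is 0.75.
score = objective.objective_function(y_predicted=[0, 1, 1, 0], y_true=[0, 1, 0, 0])
print(score)  # 0.75
```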
Contributor:
Nice, this is great. I wonder if there's a good way we can have this sort of validation occur in all objectives, but that's not necessary for this PR.

angela97lin (Author):
Yup, I filed #619 for that :D We can refactor any relevant test code when that happens.
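
One possible shape for the shared validation discussed above, as a minimal sketch: the ObjectiveBase name and the validate_inputs/score split here are assumptions for illustration, not necessarily what #619 implemented.

```python
from evalml.exceptions import DimensionMismatchError

class ObjectiveBase:
    """Sketch of a shared base class; names are assumptions, not evalml's confirmed API."""

    def validate_inputs(self, y_predicted, y_true):
        # Centralize the guards so every objective gets them for free.
        if len(y_predicted) == 0 or len(y_true) == 0:
            raise ValueError("Length of inputs is 0")
        if len(y_predicted) != len(y_true):
            raise DimensionMismatchError(
                "Inputs have mismatched dimensions: y_predicted has shape {}, "
                "y_true has shape {}".format(len(y_predicted), len(y_true)))

    def objective_function(self, y_predicted, y_true):
        raise NotImplementedError("Subclasses define the metric itself")

    def score(self, y_predicted, y_true):
        # Validate once here; subclasses only implement objective_function.
        self.validate_inputs(y_predicted, y_true)
        return self.objective_function(y_predicted, y_true)
```

Centralizing the guards in score would mean each concrete objective only implements objective_function and inherits the checks.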

A resolved (outdated) review thread on evalml/objectives/utils.py.
dsherry (Contributor) left a review:
Left a couple comments but otherwise looks great!

angela97lin merged commit 5e7dd14 into master on Apr 13, 2020.
angela97lin deleted the 294_add_accuracy_objective branch on April 17, 2020.