
Patching on AutoMLSearch tests failing #1682

Closed
bchen1116 opened this issue Jan 12, 2021 · 4 comments
Labels
bug Issues tracking problems with existing features.

Comments

@bchen1116
Contributor

Patching the pipelines' .score() and .fit() methods is causing the following messages in AutoMLSearch tests:
[image: screenshot of the test log output]

Repro
Check out main, then run:

pytest -s evalml/tests/automl_tests/test_automl.py::test_automl_rerun
bchen1116 added the bug label on Jan 12, 2021
@freddyaboulton
Contributor

@bchen1116 If you add a return_value to the score mock, the message should go away:

@patch('evalml.pipelines.BinaryClassificationPipeline.score', return_value={"Log Loss Binary": 0.2})
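
For illustration, here is a minimal standalone sketch (not the actual evalml test) of what that return_value changes; only the patch target above comes from evalml, the rest is illustrative:

    from unittest.mock import MagicMock, patch

    from evalml.pipelines import BinaryClassificationPipeline

    # Without a return_value, the mocked .score() hands back a bare MagicMock.
    with patch('evalml.pipelines.BinaryClassificationPipeline.score'):
        result = BinaryClassificationPipeline.score(None, None, None)
        assert isinstance(result, MagicMock)

    # With the suggested return_value, callers get a real dict of objective scores,
    # so formatting the best score later works on a real float.
    with patch('evalml.pipelines.BinaryClassificationPipeline.score',
               return_value={"Log Loss Binary": 0.2}):
        result = BinaryClassificationPipeline.score(None, None, None)
        assert result == {"Log Loss Binary": 0.2}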

@bchen1116
Contributor Author

@freddyaboulton is this something we need to add to all of the .score() mocks? And would this be the best solution for this bug?

@freddyaboulton
Contributor

freddyaboulton commented Jan 12, 2021

I guess I hesitate to call this a bug because it doesn't change the behavior of automl in such a way that it causes the test to pass when it should fail or vice versa.

The underlying issue is that if you don't provide a return_value, a MagicMock is returned. Eventually we go to log the score with an f-string:

        logger.info(f"Best pipeline {self.objective.name}: {best_pipeline['score']:3f}")

That uses the MagicMock's __format__ method, which I guess is not defined.
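
As a standalone sketch (no evalml code involved), this is presumably what happens in that log line: a format spec like :3f cannot be applied to a bare MagicMock, while a real float formats fine:

    from unittest.mock import MagicMock

    # Stand-in for the best pipeline's entry when .score() was mocked without a return_value.
    best_pipeline = MagicMock()
    try:
        print(f"Best pipeline Log Loss Binary: {best_pipeline['score']:3f}")
    except TypeError as err:
        # e.g. "unsupported format string passed to MagicMock.__format__"
        print(err)

    # With a concrete score, as the suggested return_value provides, formatting is clean.
    best_pipeline = {"score": 0.2}
    print(f"Best pipeline Log Loss Binary: {best_pipeline['score']:3f}")  # prints 0.200000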

Maybe if we change the way the logging is called, we can avoid the ugly log, but I worry that would complicate the code for little benefit.

What do you think?

@bchen1116
Contributor Author

Ah I see, that makes sense. I was unsure where that message was coming from. I think the behavior is fine then, since it doesn't seem to be detrimental to the behavior of AutoML. Thanks for the help, I'll close this out!
