Changes proposed in #885. Don't register handlers by default. #889
Conversation
Looks good to me, but I think @mitar should have a look at this, too. Also, I don't understand what the code for storing something in the config is supposed to do?
Nothing right now (it's not implemented). But the idea is that a user can adjust their log settings (or perhaps all settings, e.g. the API key) programmatically, and persist those settings in the configuration file. This way a user can make permanent configuration changes without having to manually find and edit the configuration file.
Hm, but wouldn't a user configure logging at the startup of an application, and want it to be different for different applications using OpenML? Basically, I don't think this is something we should handle in OpenML.
Then what is the point of including verbosity options in the configuration file?
Good point. We decided upon that once upon a time, but I'm not convinced that that was a good idea.
I think the config file could control this only when running inside an interactive interpreter, or Jupyter, or something like that. Something like this could work? But yes, I am also not convinced that having that in a configuration file is reasonable. Even if you are using an interactive interpreter, it might be that the openml library is used as a third-party dependency of some other library, and you are not really expecting to get messages from it.
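One possible shape for such a check (a hypothetical sketch; `in_interactive_session` is not an actual openml function, and the Jupyter case would need an extra check):

```python
import sys

def in_interactive_session() -> bool:
    """Heuristic: are we inside an interactive interpreter?

    sys.ps1 only exists at an interactive prompt; sys.flags.interactive
    is set when Python was started with -i. Detecting Jupyter would need
    an additional check (e.g. 'ipykernel' in sys.modules).
    """
    return hasattr(sys, "ps1") or bool(sys.flags.interactive)

# A library could then decide whether to attach a console handler:
if in_interactive_session():
    print("interactive: attach a console log handler")
else:
    print("non-interactive: stay quiet by default")
```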
file_output_level: Optional[int] = None,
save_as_default: bool = False
):
    """ Set file output to the desired level and register it with openml logger if needed. """
I think "register it with openml logger" is an implementation detail. Maybe just say that it "configures logging to a file (if not already configured) and sets the desired level".
Based on the feedback I would consider finishing the PR by no longer supporting the log level properties of the openml configuration file. This means that each time users want to configure openml logging, they either do so through conventional Python usage or by calling one of the helper functions in config.
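For reference, the conventional Python route would be plain `logging` calls against the library's logger by name, with nothing openml-specific involved:

```python
import logging

# Standard mechanism for controlling a third-party library's verbosity:
# fetch its logger by name and set the desired level.
logging.getLogger("openml").setLevel(logging.DEBUG)

# And silence it again, e.g. for quiet batch runs:
logging.getLogger("openml").setLevel(logging.ERROR)
```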
Fine by me; maybe we should drop this from the OpenML config format. @janvanrijn @giuseppec are you using the fact that there's a verbosity flag in the OpenML config file in the Java and R toolkits?
Codecov Report
@@            Coverage Diff            @@
##           develop     #889    +/-   ##
===========================================
-  Coverage    88.5%   88.17%   -0.34%
===========================================
   Files          37       37
   Lines        4324     4337      +13
===========================================
-  Hits         3827     3824       -3
-  Misses        497      513      +16
Continue to review full report at Codecov.
@mfeurer do you know what is going on with these errors? It seems these errors stem from the
I'm thinking it is due to an expired SSL certificate (https://test.openml.org/ also gives a warning). I'll ask Joaquin to update the certificate.
There seems to be an issue with the scikit-learn version downloaded in the Appveyor CI (though it reports it tried to install scikit-learn 0.20).
Hm, would increasing the version help here?
Use Let's Encrypt and stop worrying about SSL. :-)
For Appveyor, scikit-learn is now installed through pip instead of conda, and that seems to avoid the issue. I also bumped the scikit-learn version installed on Appveyor to 0.21 (no particular reason; that is left over from trying to see if changing the version avoided the issue). If you want me to revert it to 0.20, or bump it to 0.22 instead, I can do that too.
I'm fine with this. @mitar do you have any further comments?
Looks good to me! Thanks!
Fixes #885.
First implementation of changes suggested in #885:

A user can now specify the console and file log levels through `set_console_log_level` and `set_file_log_level` respectively. As input they expect the log level (both OpenML's 0/1/2 levels and Python log levels are accepted) and a bool `save_as_default`. Actually saving to the config file is not yet supported (and will raise a `NotImplementedError`). Calling these functions registers the respective handler if it wasn't registered yet, sets or changes it to the appropriate value, and ensures the openml logger is set to the right log level.

Consider this code:
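A minimal sketch of how such code might look (the helper bodies below are illustrative stand-ins built on stdlib `logging`, not openml's actual implementation):

```python
import logging

openml_logger = logging.getLogger("openml_sketch")
openml_logger.setLevel(logging.DEBUG)  # let the handlers do the filtering

def set_console_log_level(level: int) -> logging.Handler:
    # Illustrative stand-in: register a console handler at `level`.
    handler = logging.StreamHandler()
    handler.setLevel(level)
    openml_logger.addHandler(handler)
    return handler

def set_file_log_level(level: int, path: str = "openml_sketch.log") -> logging.Handler:
    # Illustrative stand-in: register a file handler at `level`.
    handler = logging.FileHandler(path)
    handler.setLevel(level)
    openml_logger.addHandler(handler)
    return handler

# Scenario 1: no explicit handlers -> warnings still reach stderr via
# logging's last-resort handler, while debug messages are dropped.
# Scenario 2: uncomment set_console_log_level(logging.DEBUG)
# Scenario 3: uncomment set_file_log_level(logging.DEBUG)
openml_logger.warning("Flow 'weka.LinearRegression's empty description")
openml_logger.debug("only visible once a DEBUG-level handler is registered")
```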
The output we get, uncommenting the respective lines for scenarios '2' and '3':

1: `Flow 'weka.LinearRegression's empty description` to console only. This is the default behavior for warning-level messages. The debug message from the last line is ignored.

2: to console only.

3 (the line specifying console output was commented out again): written to file, no console output. Note that the warning message is now also not printed to console.
This does not include changes to where the config/cache directories are located.
Writing to the config file is not yet supported. I noticed we hack our way around the configparser module. If we want to support writing to the config file from the Python module (which I'm for), we need to look at other options or implement our own parser (our config file format is very simple).

In fact, I would further propose refactoring the openml.config module into smaller submodules, as a lot of different things are going on within the same module now (loading, logging, test configuration).

I'm willing to do the above refactoring, but it might take a few weeks (I'm very busy until the holidays).
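On the writing question: the stdlib `configparser` can itself round-trip simple INI files; whether it fits openml's slightly non-standard file is the open question above. A sketch of the read-modify-write cycle (the file name, section, and option names here are made up for illustration):

```python
import configparser

# Read-modify-write with the stdlib configparser; section and option
# names are illustrative, not openml's actual config format.
config = configparser.ConfigParser()
config.read("openml_demo.ini")  # fine even if the file doesn't exist yet
if "logging" not in config:
    config["logging"] = {}
config["logging"]["verbosity"] = "1"
with open("openml_demo.ini", "w") as fh:
    config.write(fh)
```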