Replies: 3 comments 1 reply
-
I think it's pretty normal to catch the warning with a context manager if you expect it to be raised and are fine with it.
Another solution is to increase the
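For reference, here is a minimal sketch of the context-manager approach mentioned above: the `ConvergenceWarning` raised when `MLPClassifier` hits `max_iter` is suppressed only inside the `with` block, not process-wide. The tiny `max_iter=5` is just to force the warning for illustration.

```python
import warnings

from sklearn.datasets import make_classification
from sklearn.exceptions import ConvergenceWarning
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=50, random_state=0)
clf = MLPClassifier(max_iter=5, random_state=0)  # deliberately too few iterations

# Suppress ConvergenceWarning only for this fit; filters are restored on exit.
with warnings.catch_warnings():
    warnings.simplefilter("ignore", category=ConvergenceWarning)
    clf.fit(X, y)
```

Outside the `with` block, warning filters are back to whatever they were before, so other code still sees its warnings.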
-
Except that if it's inside something else, say a Pipeline or VotingClassifier, I have to hide the warning across the whole group and can't control it via params, or do something rather complex and strange. All I'm doing is early stopping to avoid overfitting. I looked at tol before posting the above, but "convergence is considered to be reached and training stops" is something I really don't want. I just want it to go to the default max_iter and be happy if it didn't converge. Tuning parameters like this is way overfitting on small datasets and requires extensive and often unreliable CV to validate. And please don't say it's not meant for small datasets; it often ensembles well with GBDTs :)
Also, while we're at it, SequentialFeatureSelector really should have a logging/output option. I think an entire framework (mlxtend) was built because of one missing verbose=1 flag.
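Since `SequentialFeatureSelector` has no verbose flag, one workaround (a hack, not an official API) is to wrap the metric in a custom scorer that records or prints each CV evaluation as the selector runs. Everything below uses real scikit-learn APIs; the `logging_accuracy` helper and the recording list are my own invention for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, make_scorer

calls = []  # each CV evaluation lands here; swap append() for print() to watch live

def logging_accuracy(y_true, y_pred):
    # Hypothetical instrumented metric: same value as plain accuracy,
    # but side-effects a progress record on every call.
    score = accuracy_score(y_true, y_pred)
    calls.append(score)
    return score

X, y = make_classification(n_samples=60, n_features=8, random_state=0)
sfs = SequentialFeatureSelector(
    LogisticRegression(max_iter=1000),
    n_features_to_select=3,
    scoring=make_scorer(logging_accuracy),
    cv=3,
)
sfs.fit(X, y)
```

It's crude (you see raw fold scores, not which candidate feature set they belong to), but it at least confirms the selector is making progress instead of running silently.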
-
I have now filed #27561 for this.
-
I find overfitting on smaller datasets to be quite easy, and when I increase max_iter to avoid convergence warnings I invariably end up overfitting.
The warning doesn't make sense in some cases, and it seems weird that I have to disable it via global Python warning filters.
Is there a reason a flag to turn this off wasn't added to MLPClassifier?
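To make the "global" complaint concrete, here is the workaround being objected to: a process-wide filter is currently the simplest way to silence the warning when the estimator is buried inside a composite like a Pipeline, since there is no per-estimator flag. The toy data and `max_iter=5` are just to trigger the warning.

```python
import warnings

from sklearn.datasets import make_classification
from sklearn.exceptions import ConvergenceWarning
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Process-wide filter: silences ConvergenceWarning everywhere, including
# inside pipelines/ensembles -- this is the blunt instrument the comment
# above is pushing back on.
warnings.filterwarnings("ignore", category=ConvergenceWarning)

X, y = make_classification(n_samples=50, random_state=0)
pipe = make_pipeline(StandardScaler(), MLPClassifier(max_iter=5, random_state=0))
pipe.fit(X, y)  # would warn about non-convergence, but the filter swallows it
```

The downside is exactly the one raised here: the filter applies to every estimator in the process, not just the one you meant.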