
Adding different loss to tf.estimator.Head #149

Open
le-dawg opened this issue Apr 14, 2020 · 2 comments
Labels
question Further information is requested

Comments


le-dawg commented Apr 14, 2020

Hi all,

I seek to optimize an ensemble for binary classification. My established baseline uses the binary_crossentropy loss provided by Keras. Using the same notation here yields an "unsupported callable" error, because it seems base_head.Head() does not tie into the default TF implementation.

What can I do to train with binary cross-entropy?

Contributor

cweill commented Apr 17, 2020

Have you tried tf.estimator.BinaryClassHead()? I believe it uses the same loss under the hood, specifically sigmoid_cross_entropy.
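For reference, a minimal pure-Python sketch of the numerically stable formula that TensorFlow's `tf.nn.sigmoid_cross_entropy_with_logits` documents, `max(z, 0) - z * y + log(1 + exp(-|z|))`, which is the loss BinaryClassHead applies to the logits:

```python
import math

def sigmoid_cross_entropy(logit, label):
    # Numerically stable form of -[y*log(p) + (1-y)*log(1-p)]
    # where p = sigmoid(logit), as documented for
    # tf.nn.sigmoid_cross_entropy_with_logits:
    #   max(z, 0) - z*y + log(1 + exp(-|z|))
    return max(logit, 0.0) - logit * label + math.log1p(math.exp(-abs(logit)))

# A confident, correct prediction gives a small loss...
print(round(sigmoid_cross_entropy(4.0, 1.0), 4))  # 0.0181
# ...while the same logit with the opposite label is penalized heavily.
print(round(sigmoid_cross_entropy(4.0, 0.0), 4))  # 4.0181
```

This is equivalent to Keras's binary_crossentropy once the head applies the sigmoid, which is why switching to BinaryClassHead should not change the baseline's objective.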

@cweill cweill added the question Further information is requested label Apr 17, 2020
Author

le-dawg commented Apr 19, 2020

Well, I figured that out in the meantime.
But a problem persists:
BinaryClassHead uses the correct loss, but when I predict with a simple_dnn AdaNet ensemble after three iterations via estimator.predict(), the network always predicts class 0.

The same head works on the canned tf.estimator.LinearClassifier, which makes me doubt that the problem is an incorrect loss function. I can't troubleshoot the AdaNet estimator any deeper than this. I will take any help I can get!
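One quick sanity check when an estimator collapses to a single class is the label balance: with skewed data, always predicting the majority class can look deceptively reasonable to the optimizer. A small sketch (`majority_baseline` is a hypothetical helper, and the label vector below is made up for illustration):

```python
from collections import Counter

def majority_baseline(labels):
    # Accuracy achieved by a degenerate model that always predicts
    # the most frequent class. If a trained model's accuracy matches
    # this, it may have collapsed to constant predictions.
    counts = Counter(labels)
    majority, count = counts.most_common(1)[0]
    return majority, count / len(labels)

# Hypothetical label vector with an 80/20 skew toward class 0.
labels = [0] * 80 + [1] * 20
print(majority_baseline(labels))  # (0, 0.8)
```

If the training labels are heavily skewed, the "always class 0" behavior could come from the data rather than from the head or the AdaNet search.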
