
How do weak-learners combine? #135

Open
jeffltc opened this issue Oct 27, 2019 · 2 comments
Labels: question (Further information is requested)

jeffltc commented Oct 27, 2019

  • If I use adanet.AutoEnsembleEstimator to combine weak learners, do those weak learners combine side by side on the same layer with weights, or one after another across multiple layers?

  • I’ve also read Combination of subnetworks #24. Does that mean the side-by-side combination is the default, and that if I want a one-after-another structure, I need to use a custom generator and Subnetwork.shared to pass the tensor to the next iteration?

  • Any update on displaying the detailed architecture of the final result? (I’ve tried the method mentioned in [Question] How to retrieve the detailed ensemble architecture? #29 to find the detailed structure in TensorBoard, but as you mentioned, the result is less than ideal. I can only see which weak learners are in the final model from the TEXT tab, while the graph in the GRAPH tab is a little too detailed and left me lost in a nest of ops. The method mentioned in #29 also seems incompatible with TF 2.0. I'm still working on it.)

cweill self-assigned this Oct 28, 2019
cweill added the "question" label Oct 28, 2019

cweill commented Oct 28, 2019

> If I use adanet.AutoEnsembleEstimator to combine weak learners, do those weak learners combine side by side on the same layer with weights, or one after another across multiple layers?

AutoEnsembleEstimator uses the ensemblers parameter to determine how to ensemble weak learners. By default it uses adanet.ensemble.ComplexityRegularizedEnsembler, which is pretty simple: it just combines the weighted logits of the subnetworks. However, you can subclass adanet.ensemble.Ensembler to do more advanced things if you'd prefer.
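
For concreteness, here's a minimal sketch (not from this thread) of passing the ensembler explicitly. The feature column and candidate pool contents are hypothetical, and constructor arguments may differ between adanet versions:

```python
import adanet
import tensorflow as tf

# Hypothetical inputs, for illustration only.
feature_columns = [tf.feature_column.numeric_column("x", shape=(2,))]
head = tf.estimator.BinaryClassHead()

estimator = adanet.AutoEnsembleEstimator(
    head=head,
    # Candidate subnetworks are trained side by side each iteration.
    candidate_pool=lambda config: {
        "linear": tf.estimator.LinearEstimator(
            head=head, feature_columns=feature_columns, config=config),
        "dnn": tf.estimator.DNNEstimator(
            head=head, feature_columns=feature_columns,
            hidden_units=[64], config=config),
    },
    max_iteration_steps=1000,
    # The default ensembler, shown explicitly: it mixes the candidates'
    # logits side by side with complexity-regularized mixture weights.
    ensemblers=[adanet.ensemble.ComplexityRegularizedEnsembler()],
)
```

Swapping in your own adanet.ensemble.Ensembler subclass via that same ensemblers argument is how you'd change the combination logic.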

> I’ve also read #24. Does that mean the side-by-side combination is the default, and that if I want a one-after-another structure, I need to use a custom generator and Subnetwork.shared to pass the tensor to the next iteration?

That's correct. Doing so allows you to create more expressive and adaptive search spaces than if you used AutoEnsembleEstimator.
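
As a rough sketch of that one-after-another pattern (assuming the adanet.subnetwork.Builder API; the class name, layer sizes, and complexity measure here are illustrative, not from this thread), each iteration's builder can read the tensor the previous subnetwork exposed via shared and stack on top of it:

```python
import adanet
import tensorflow as tf

class StackedBuilder(adanet.subnetwork.Builder):
    """Stacks a new hidden layer on top of the previous iteration's output."""

    def build_subnetwork(self, features, logits_dimension, training,
                         iteration_step, summary, previous_ensemble=None):
        if previous_ensemble:
            # Reuse the tensor the previous subnetwork passed forward.
            x = previous_ensemble.weighted_subnetworks[-1].subnetwork.shared
        else:
            x = tf.keras.layers.Flatten()(list(features.values())[0])
        last_layer = tf.keras.layers.Dense(64, activation="relu")(x)
        logits = tf.keras.layers.Dense(logits_dimension)(last_layer)
        return adanet.Subnetwork(
            last_layer=last_layer,
            logits=logits,
            complexity=tf.constant(1.0),  # placeholder complexity measure
            shared=last_layer)  # handed to the next iteration's builder

    def build_subnetwork_train_op(self, subnetwork, loss, var_list, labels,
                                  iteration_step, summary, previous_ensemble):
        return tf.compat.v1.train.AdamOptimizer(1e-3).minimize(
            loss, var_list=var_list)

    @property
    def name(self):
        return "stacked_dnn"
```

A custom adanet.subnetwork.Generator would then return this builder at each iteration.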

> Any update on displaying the detailed architecture of the final result? (I’ve tried the method mentioned in #29 to find the detailed structure in TensorBoard, but as you mentioned, the result is less than ideal. I can only see which weak learners are in the final model from the TEXT tab, while the graph in the GRAPH tab is a little too detailed and left me lost in a nest of ops. The method mentioned in #29 also seems incompatible with TF 2.0. I'm still working on it.)

Currently it's tough, because we support arbitrary TF 1.0 graphs. So the only source of truth for that kind of connectivity is the TensorBoard "Graph" tab, which as you saw, can be quite noisy and tough to interpret.

That being said, with TF 2.0 we are moving toward a Keras-first world where weak learners will be Keras Models composed of Keras Layers. This should let us extract a DAG of Keras layers that is more human-readable. But that is still a ways away. :)
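
In the meantime, one workaround (a sketch, not an official adanet API; the model directory path and the "architecture" tag filter below are assumptions) is to pull the TEXT-tab summaries straight out of the event files instead of browsing TensorBoard:

```python
from tensorboard.backend.event_processing import event_accumulator

# Hypothetical model_dir; point this at your estimator's model directory.
ea = event_accumulator.EventAccumulator(
    "/tmp/adanet_model_dir",
    size_guidance={event_accumulator.TENSORS: 0})  # 0 = load all tensor events
ea.Reload()

# Text summaries are stored as string tensor events; list the tags, then
# decode the bytes of any tag that looks like an architecture entry.
for tag in ea.Tags()["tensors"]:
    if "architecture" in tag:
        for event in ea.Tensors(tag):
            print(tag, event.tensor_proto.string_val)
```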


le-dawg commented Apr 7, 2020

> That being said, with TF 2.0 we are moving toward a Keras-first world where weak learners will be Keras Models composed of Keras Layers. This should let us extract a DAG of Keras layers that is more human-readable. But that is still a ways away. :)

What is the current level of development on this? Is there no way to use plot_model() on the subnetworks?
