
Enhancement: rewriting the mutual information as in BALD [Houlsby et al., 2011] #2

Open
thangbui opened this issue Jul 3, 2016 · 1 comment

thangbui commented Jul 3, 2016

[In discussion with Jose Miguel Hernandez-Lobato @jmhernandezlobato and Daniel Hernandez-Lobato @danielhernandezlobato]

The exploration objective currently used in the paper is a sum of expected reductions in the entropy of the dynamics-model parameters: each term is the difference between the parameter entropy at the current time step and the parameter entropy at the next time step, averaged over all possible next states.

I think this objective could be simplified further by swapping the roles of the next state and the parameters inside the mutual information, as described in Houlsby et al., 2011 and Hernandez-Lobato and Adams, 2015 (see the active learning experiment); the identity is written out below.
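
For concreteness, here is the identity behind the swap, in my own rough notation (not taken from the paper): with θ the dynamics-model parameters, (s_t, a_t) the current state-action pair, and s_{t+1} the next state, the symmetry of mutual information gives

```latex
I(\theta;\, s_{t+1} \mid s_t, a_t)
  = H(\theta \mid s_t, a_t) - \mathbb{E}_{s_{t+1}}\!\left[ H(\theta \mid s_{t+1}, s_t, a_t) \right]
  % form optimised in the paper: expected reduction in parameter entropy
  = H(s_{t+1} \mid s_t, a_t) - \mathbb{E}_{\theta}\!\left[ H(s_{t+1} \mid \theta, s_t, a_t) \right]
  % BALD form: predictive entropy minus expected likelihood entropy
```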

This equivalent objective is easier to evaluate for regression models (e.g. the neural network regression you are using here) because the term inside the expectation becomes the entropy of the likelihood model, which is constant in the regression case. The remaining difficult term is the entropy of the predictive distribution; in the Gaussian prediction case, maximising it is equivalent to finding the actions that yield the highest predictive variance. For BNNs this can be computed with a Gaussian approximation to the predictive distribution or by Monte Carlo, e.g. as sketched below.
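
As a rough illustration, here is a minimal Monte Carlo sketch of the BALD score for a BNN with a fixed-noise Gaussian likelihood. All names and the `posterior_samples_predict` API are hypothetical, not from this codebase:

```python
import numpy as np

def bald_score_mc(posterior_samples_predict, state, action, sigma=0.1, n_samples=50):
    """Monte Carlo estimate of the BALD objective for Gaussian-likelihood regression.

    posterior_samples_predict: hypothetical callable(state, action) -> predicted
        next-state mean, using one fresh sample of the BNN parameters per call.
    sigma: fixed observation-noise std of the Gaussian likelihood.
    """
    # Draw predictive means under different parameter samples theta_i ~ q(theta).
    means = np.stack([posterior_samples_predict(state, action)
                      for _ in range(n_samples)])          # (n_samples, state_dim)

    # Moment-match the predictive distribution with a diagonal Gaussian:
    # var_pred = Var_theta[mean] + sigma^2 (law of total variance).
    var_pred = means.var(axis=0) + sigma ** 2

    # Entropy of a diagonal Gaussian: 0.5 * sum(log(2*pi*e*var)).
    h_pred = 0.5 * np.sum(np.log(2.0 * np.pi * np.e * var_pred))

    # Expected likelihood entropy E_theta[H(s' | theta)] is constant for fixed
    # sigma, so it only shifts the score; included here for completeness.
    state_dim = means.shape[1]
    h_lik = 0.5 * state_dim * np.log(2.0 * np.pi * np.e * sigma ** 2)

    # Maximising this picks actions with the highest predictive variance.
    return h_pred - h_lik
```

Since the likelihood-entropy term is constant when sigma is fixed, ranking actions by `h_pred` alone gives the same argmax, which is the predictive-variance interpretation above.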

Would this change be easily incorporated into the current code?

@reinhouthooft (Contributor) commented

BALD could be an interesting way to calculate surprise; however, it seems we would then rely on having accurate dynamics uncertainty estimates, whereas the current approach models the adaptation of the model to the environment itself. It should be quite easy to incorporate, and definitely worth investigating!
