
[Question][MC dropout] drop hidden units for each data point or for each mini-batch data? #20

Open
alwaysuu opened this issue Oct 18, 2021 · 1 comment

@alwaysuu

Hi JavierAntoran,

Thanks for the wonderful code; it has been really helpful for my work in this area. I'd like to ask a question about MC dropout. In BBB with the local reparameterization trick, the activations are sampled for each data point, rather than sampling from the weight distribution directly, to reduce computational cost. Should MC dropout follow a similar procedure, i.e. drop hidden units independently for each data point during training or testing? I notice that your MC dropout model seems to use the same dropout mask for an entire mini-batch, with a default batch size of 128. Should I change the batch size to 1 to drop hidden units per data point?

Looking forward to your reply. Thanks a lot
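To make the distinction in the question concrete, here is a minimal NumPy sketch (all names, sizes, and the dropout rate are illustrative, not taken from this repository) contrasting one mask shared across a mini-batch with an independent mask per data point:

```python
import numpy as np

rng = np.random.default_rng(0)
p_drop = 0.5          # dropout probability (illustrative)
batch, hidden = 4, 8  # toy mini-batch and hidden-layer sizes

h = rng.normal(size=(batch, hidden))  # hidden activations for a mini-batch

# Per-mini-batch dropout: one mask of shape (hidden,) shared by every
# data point in the batch, with inverted-dropout rescaling.
mask_batch = (rng.random(hidden) > p_drop) / (1 - p_drop)
h_shared = h * mask_batch   # the same units are dropped for all 4 inputs

# Per-data-point dropout: an independent mask per row, i.e. each input
# gets its own pattern of dropped hidden units.
mask_point = (rng.random((batch, hidden)) > p_drop) / (1 - p_drop)
h_indep = h * mask_point
```

With the shared mask, the zeroed columns are identical across every row of `h_shared`; with per-point masks, each row of `h_indep` generally zeroes a different subset of units.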

@JavierAntoran
Owner

Hi @alwaysuu,

Yes, changing the batch size to 1 will result in different weights being used for each input. However, it could make training very slow due to large variance in the estimate of the loss.

Javier
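As a rough illustration of the batch-size-1 approach at test time (toy weights and sizes, not the repository's trained model), the following NumPy sketch runs `T` stochastic forward passes on a single input, drawing a fresh dropout mask on every pass, and summarizes the resulting predictive distribution:

```python
import numpy as np

rng = np.random.default_rng(1)
p_drop, T = 0.5, 100               # dropout rate and number of MC samples
W1 = rng.normal(size=(8, 16))      # toy weights (stand-ins for trained ones)
W2 = rng.normal(size=(16, 1))

def mc_forward(x):
    """One stochastic forward pass. A fresh mask is drawn on each call,
    so feeding inputs one at a time drops different units per input."""
    h = np.maximum(x @ W1, 0.0)                          # ReLU hidden layer
    mask = (rng.random(h.shape) > p_drop) / (1 - p_drop) # inverted dropout
    return (h * mask) @ W2

x = rng.normal(size=(1, 8))        # a single data point (batch size 1)
samples = np.stack([mc_forward(x) for _ in range(T)])
mean, std = samples.mean(axis=0), samples.std(axis=0)   # predictive moments
```

Keeping dropout active at test time and averaging over `T` passes is the standard MC dropout recipe; the spread of `samples` is what gives the predictive uncertainty.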

@JavierAntoran JavierAntoran added the question Further information is requested label Aug 16, 2022
@JavierAntoran JavierAntoran self-assigned this Aug 16, 2022