
error batch norm #24

Open
babbu3682 opened this issue May 1, 2020 · 2 comments

@babbu3682

When I feed inputs to the CNN model, I get an error in the batch norm layer because forward() produces a batch of size 1. How did you solve this?

The error is as follows:
ValueError: Expected more than 1 value per channel when training, got input size torch.Size([1, 1024])

Also, there is an error when using DataParallel. How did you fix it?
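The error above can be reproduced with a minimal sketch (assuming an `nn.BatchNorm1d(1024)` layer, inferred from the `torch.Size([1, 1024])` in the traceback):

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm1d(1024)  # matches the [1, 1024] shape in the traceback

bn.train()                 # training mode: statistics are computed per batch
try:
    bn(torch.randn(1, 1024))  # batch size 1: variance over one sample is undefined
except ValueError as e:
    print(e)               # "Expected more than 1 value per channel when training, ..."

bn.eval()                  # eval mode uses the running statistics instead
out = bn(torch.randn(1, 1024))
print(out.shape)           # torch.Size([1, 1024])
```

This also explains why DataParallel can trigger it: splitting a small batch across GPUs can leave one replica with a single sample.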

@HHTseng
Owner

HHTseng commented May 1, 2020

As far as I know, this is probably because you are feeding too few samples. Increase the batch size and this error should go away; it comes from how PyTorch's batch norm is designed (it cannot compute batch statistics from a single sample). Let me know if the error persists.
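One way to guarantee that no stray size-1 batch ever reaches batch norm (a common workaround, not necessarily what this repo does) is to set `drop_last=True` on the `DataLoader`, so the final incomplete batch is discarded:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# 33 samples with batch_size=8 would leave a final batch of size 1
dataset = TensorDataset(torch.randn(33, 1024))
loader = DataLoader(dataset, batch_size=8, drop_last=True)  # discard the leftover batch

sizes = [batch[0].shape[0] for batch in loader]
print(sizes)  # [8, 8, 8, 8] -- the size-1 batch never reaches the model
```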

@babbu3682
Author

Thanks!! That is solved, but I have another problem, with the encoder model. I wanted to build the CRNN from scratch, so I wrote a custom CNN encoder modeled on EfficientNet-b0. It works, but after two steps a memory error shows up. I think GPU memory accumulates at every step because of the for loop. How can I solve that problem?
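For reference, a common cause of GPU memory growing across loop iterations is keeping the autograd graph alive, e.g. by accumulating the loss tensor itself instead of a plain number. A minimal sketch (the model and loop here are hypothetical stand-ins, not this repo's code):

```python
import torch
import torch.nn as nn

model = nn.Linear(1024, 10)  # hypothetical stand-in for the custom CNN encoder
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

total_loss = 0.0
for step in range(4):        # stand-in for the training loop
    x = torch.randn(8, 1024)
    loss = model(x).pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # `total_loss += loss` would retain every step's graph and grow GPU memory;
    # `.item()` extracts a plain float so each step's graph can be freed.
    total_loss += loss.item()

print(type(total_loss))  # <class 'float'> -- no graph retained
```

If the loop only extracts features and no backpropagation is needed there, wrapping it in `torch.no_grad()` avoids building the graph at all.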
