
Something wrong when training with voxelnet_late_fusion.yaml and second_late_fusion.yaml? #130

Open
lubin202209 opened this issue Apr 7, 2024 · 0 comments


Hello, I encountered some strange problems when training with voxelnet_late_fusion.yaml and second_late_fusion.yaml. With a batch_size of 1, everything works fine for both training and inference. However, when I use a batch_size larger than 1 (for example, 2) for training, I get the following error when calculating the loss on the validation set:

[screenshot: error traceback raised while computing the loss on the validation set]

As you can see, I printed output_dict['psm'].shape, output_dict['rm'].shape, batch_data['ego']['label_dict']['targets'].shape, and batch_data['ego']['label_dict']['pos_equal_one'].shape:

[screenshot: the printed shapes of output_dict['psm'], output_dict['rm'], targets, and pos_equal_one]
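For reference, the debug prints I added just before the loss call were roughly equivalent to the following sketch (the helper name is made up here; the dictionary keys are the ones shown above, and the exact place in the validation loop may differ):

```python
# Hypothetical helper to dump the batch shapes of the model outputs and the
# ground-truth label dict right before the validation loss is computed.
def print_batch_shapes(output_dict, batch_data):
    print("psm:           ", output_dict['psm'].shape)            # classification head output
    print("rm:            ", output_dict['rm'].shape)             # regression head output
    label_dict = batch_data['ego']['label_dict']
    print("targets:       ", label_dict['targets'].shape)         # regression targets
    print("pos_equal_one: ", label_dict['pos_equal_one'].shape)   # positive anchor mask
```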

I found that the model output has a batch size of 2, but the ground truth in batch_data has a batch size of 1. It seems there is a mismatch in the data loading part of LateFusionDataset. Can you help me solve this problem? How can I fix it?
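In case it helps narrow things down, a sanity check like the following (just a sketch, reusing the same keys as above; the function name is hypothetical) makes the mismatch explicit before the loss is computed:

```python
# Hypothetical sanity check: compare the batch dimension of the model output
# with the batch dimension of the label dict coming from the dataloader.
def check_batch_match(output_dict, batch_data):
    out_b = output_dict['psm'].shape[0]
    label_b = batch_data['ego']['label_dict']['pos_equal_one'].shape[0]
    if out_b != label_b:
        raise RuntimeError(
            f"batch mismatch: model output has batch {out_b}, "
            f"but label_dict has batch {label_b}"
        )
```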
