Validation Loss did not decrease in the HMDB51 notebook? #25

Open · neonampv opened this issue Nov 26, 2021 · 9 comments

@neonampv

I trained your HMDB51 notebook for 10 epochs, but the validation loss did not decrease. Why did this happen?

@Atze00 (Owner) commented Nov 27, 2021

Can you provide more information, maybe a log of the training?
It's probably because the model converges from the first epoch.

@nguyenquibk1996

I had the same problem.
When I trained MoViNet on my own dataset, the training loss decreased but the validation loss increased from the first epoch (see the image below).
When I trained MoViNet without the pretrained Kinetics weights, both on the HMDB51 notebook sample and on my own dataset (I did not save a log of the training), neither loss decreased.
Can you explain it? Thank you.
[image: movinet_val_loss plot]
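
For reference, this is the with/without-pretrained setup being compared, as a minimal sketch assuming the `movinets` package API shown in this repository's README (the notebook may use different arguments):

```python
# Minimal sketch (assuming the `movinets` API from this repository's README):
# the same A0 architecture, once with Kinetics-pretrained weights and once
# trained from scratch.
from movinets import MoViNet
from movinets.config import _C

# With pretrained Kinetics weights (the case where only the train loss decreases).
model_pretrained = MoViNet(_C.MODEL.MoViNetA0, causal=False, pretrained=True)

# Without pretrained weights (the case where neither loss decreased).
model_scratch = MoViNet(_C.MODEL.MoViNetA0, causal=False, pretrained=False)
```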

@Atze00 (Owner) commented Dec 3, 2021

The notebook functions correctly, and I use these networks daily. It's unlikely that these problems are related to the code in this repository. If you provide a short Colab script that reproduces the problem, I will look at it.

@nguyenquibk1996 commented Dec 3, 2021

I will run your notebook with HMDB51 for 10 epochs and show you a log of the training, because I used the same functions on my own dataset and got the same problem. I don't think it can converge from the first epoch on many datasets. When you train MoViNet on your dataset, does the validation loss decrease?
Thank you.
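
A minimal sketch of the kind of per-epoch log being discussed, assuming `model`, `train_loader`, and `val_loader` are already set up (as in the notebook) and that the loaders yield `(clips, labels)` batches; this only shows the logging pattern, not the notebook's exact recipe:

```python
# Minimal sketch: log train and validation loss once per epoch.
import torch
import torch.nn.functional as F

def fit(model, train_loader, val_loader, epochs=10, lr=1e-3, device="cuda"):
    model = model.to(device)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    for epoch in range(epochs):
        # Training pass.
        model.train()
        train_loss = 0.0
        for clips, labels in train_loader:
            clips, labels = clips.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = F.cross_entropy(model(clips), labels)
            loss.backward()
            optimizer.step()
            train_loss += loss.item()

        # Validation pass (no gradients).
        model.eval()
        val_loss = 0.0
        with torch.no_grad():
            for clips, labels in val_loader:
                clips, labels = clips.to(device), labels.to(device)
                val_loss += F.cross_entropy(model(clips), labels).item()

        print(f"epoch {epoch + 1}: "
              f"train loss {train_loss / len(train_loader):.4f}, "
              f"val loss {val_loss / len(val_loader):.4f}")
```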

@poincarelee

I trained on the HMDB51 dataset for 20 epochs with modelA0_stream_statedict_v3; the result is as follows:
[image: training results]

@poincarelee commented Oct 26, 2022

@nguyenquibk1996
Hi, did you solve the problem? I ran into the same problem on my own dataset. Then I tried to train on HMDB51 without pretrained weights; the evaluation accuracy is as follows:
[image: evaluation accuracy]

Did I miss any key points during fine-tuning, or could you give any clues about this?
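
For comparison, here is a minimal fine-tuning sketch, assuming the `movinets` API from this repository; the classifier index and channel count follow the HMDB51 notebook and are an assumption about the head layout, not necessarily the exact recipe used here:

```python
# Minimal fine-tuning sketch (not the repository's exact recipe): start from
# Kinetics-pretrained weights, replace the classification head for 51 classes,
# freeze the backbone, and train only the new head with a small learning rate.
import torch
from movinets import MoViNet
from movinets.config import _C

NUM_CLASSES = 51  # HMDB51

model = MoViNet(_C.MODEL.MoViNetA0, causal=False, pretrained=True)

# Freeze all pretrained parameters first.
for p in model.parameters():
    p.requires_grad = False

# Replace the final classifier layer; index 3 and 2048 input channels are an
# assumption based on the HMDB51 notebook's head layout.
model.classifier[3] = torch.nn.Conv3d(2048, NUM_CLASSES, kernel_size=(1, 1, 1))

# Only the new head is optimized; unfreeze more layers later if it underfits.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
```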

@haowei2020

> @nguyenquibk1996 Hi, did you solve the problem? I ran into the same problem on my own dataset. Then I tried to train on HMDB51 without pretrained weights; the evaluation accuracy is as follows: [image: evaluation accuracy]
>
> Did I miss any key points during fine-tuning, or could you give any clues about this?

I think the dataset is the primary cause, and the data-processing method (one clip vs. multiple clips sampled per video) is the secondary cause. I have trained X3D and SlowFast on HMDB51 with mmaction2 (the default config samples one clip per video); top-1 accuracy is also about 30%, but the validation loss can decrease.
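
For reference, a minimal sketch of multi-clip evaluation (sample several clips per video and average the scores); `sample_clip` is a hypothetical helper, not part of mmaction2 or this repository:

```python
# Minimal sketch: average softmax scores over several clips per video and
# report top-1 accuracy. `sample_clip(video, i, n)` is a hypothetical helper
# returning the i-th of n uniformly spaced clips from one video.
import torch

@torch.no_grad()
def multi_clip_top1(model, videos, labels, num_clips=10, device="cuda"):
    model = model.to(device).eval()
    correct = 0
    for video, label in zip(videos, labels):
        clips = torch.stack(
            [sample_clip(video, i, num_clips) for i in range(num_clips)]
        )
        # Average class probabilities over all clips of this video.
        scores = torch.softmax(model(clips.to(device)), dim=1).mean(dim=0)
        correct += int(scores.argmax().item() == label)
    return correct / len(videos)
```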

@poincarelee

Have you tried any other datasets? By the way, I see there is no training code for X3D in mmaction2; could you tell me how to get the code?

@haowei2020

> Have you tried any other datasets? By the way, I see there is no training code for X3D in mmaction2; could you tell me how to get the code?

I have tried UCF101; the top-1 accuracy can exceed 45%. I wrote the X3D code myself (referring to existing implementations, of course).
