
About accuracy of validation phase. #4

Open

ing907 opened this issue Aug 3, 2023 · 0 comments

ing907 commented Aug 3, 2023

Hello, thank you for the nice work and for open-sourcing your project.

Recently, I attempted to train the C2C model on my own dataset and monitored its accuracy through TensorBoard. The model appeared to train properly; however, I ran into an issue with the validation accuracy: during the validation phase, the model always predicts 'positive' regardless of the input data.

Upon further investigation, I noticed a significant disparity between the output distribution of the ResNet backbone in the validation phase and in the training phase. When I enable train mode with model.train() in the evaluation code, the model's outputs appear to be correct again.
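For concreteness, here is a minimal sketch of the workaround I described. The function and loader names are placeholders rather than code from this repo, and my working assumption is that the discrepancy comes from BatchNorm layers, which normalize with accumulated running statistics in eval mode but with per-batch statistics in train mode:

```python
import torch

@torch.no_grad()
def evaluate(model, loader, bn_train_mode=False):
    """Compute accuracy; optionally keep the network in train mode.

    With bn_train_mode=True, BatchNorm layers normalize each batch with
    its own statistics instead of the running mean/var accumulated during
    training, which reproduces the workaround above. Caveat: in train
    mode, BatchNorm still updates its running statistics on every
    forward pass, even under torch.no_grad().
    """
    model.train(bn_train_mode)  # equivalent to model.eval() when False
    correct = total = 0
    for images, labels in loader:
        preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total
```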

I have a few questions regarding this phenomenon:

  1. Are you aware of the possible reasons behind this occurrence and any potential solutions to address it?
  2. If there is no direct resolution to this issue, would it be acceptable to measure the accuracy of C2C using model.train()? (An alternative I am considering is sketched after this list.)
  3. Have you encountered a similar phenomenon in your experimental environment?
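In case it helps frame question 2: one alternative I am considering is to keep model.eval() for validation but first re-estimate the BatchNorm running statistics on a few training batches. A sketch under that assumption (none of this is from the C2C code):

```python
import torch
import torch.nn as nn

@torch.no_grad()
def recalibrate_bn(model, train_loader, num_batches=100):
    """Re-estimate BatchNorm running statistics, then validate in eval mode.

    Resets each BatchNorm layer's running mean/var and re-accumulates
    them as a cumulative average over a few training batches, so that
    model.eval() later sees statistics matching the training distribution.
    """
    for m in model.modules():
        if isinstance(m, nn.modules.batchnorm._BatchNorm):
            m.reset_running_stats()
            m.momentum = None  # cumulative moving average instead of EMA
    model.train()  # running stats are only updated in train mode
    for i, (images, _) in enumerate(train_loader):
        if i >= num_batches:
            break
        model(images)
    model.eval()
```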

Although I have not yet tried the CAMELYON16 dataset, I would like to gain a clear understanding of the exact environment before conducting any experiments.

Thank you for your time and consideration.
