
Inconsistent results between your nnFormer and the original nnFormer repo #28

Open
Liiiii2101 opened this issue Nov 24, 2023 · 1 comment

Comments

@Liiiii2101

Hi, thanks for your excellent work. I have tried running your models as well as the other models provided in this repo, but I found a large inconsistency between the results from your models and the original nnFormer repo, even with the same patch size, spacing, and other parameters. For your nnFormer, the average over 5-fold cross-validation is around 0.5 DSC, but for the original nnFormer it is 0.62. This makes me wonder: did you fine-tune only your MedFormer, while the results from the other models were not fine-tuned?

Thanks a lot.

@yhygao
Owner

yhygao commented Nov 28, 2023

The MedFormer in this repo is trained from scratch on all datasets without any pretrained weights. For nnFormer, I copied their original model code with only minor modifications to make it work in our repo. The performance difference between our repo and the nnFormer repo might be due to other training hyperparameters, such as the learning rate, optimizer, number of epochs, etc. In my experience, nnFormer is sensitive to hyperparameters and needs special tuning, in contrast to ResUNet or MedFormer. Some recent papers report similar findings: https://arxiv.org/pdf/2304.03493.pdf. You might need to try other training hyperparameters to see if they can match the performance of the original nnFormer repo.
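As a concrete starting point for such a sweep, here is a minimal, hypothetical PyTorch sketch of the kind of hyperparameter grid being suggested (learning rate, optimizer, epochs). The values, the `build_optimizer` helper, and the placeholder model are illustrative assumptions, not settings taken from this repo or from the original nnFormer repo.

```python
import torch

# Placeholder model; substitute the actual nnFormer model from the repo.
model = torch.nn.Conv3d(1, 2, kernel_size=3, padding=1)

# Example hyperparameter sets to try; values are illustrative, not repo defaults.
hyperparam_grid = [
    {"lr": 1e-2, "optimizer": "sgd",   "epochs": 1000, "weight_decay": 3e-5},
    {"lr": 1e-4, "optimizer": "adamw", "epochs": 1000, "weight_decay": 1e-2},
]

def build_optimizer(cfg, params):
    """Construct the optimizer named in the config dict."""
    if cfg["optimizer"] == "sgd":
        # nnU-Net-style SGD with Nesterov momentum is a common choice for nnFormer.
        return torch.optim.SGD(params, lr=cfg["lr"], momentum=0.99,
                               nesterov=True, weight_decay=cfg["weight_decay"])
    return torch.optim.AdamW(params, lr=cfg["lr"], weight_decay=cfg["weight_decay"])

for cfg in hyperparam_grid:
    optimizer = build_optimizer(cfg, model.parameters())
    # Polynomial ("poly") learning-rate decay over the full schedule,
    # similar to what nnU-Net-derived pipelines typically use.
    scheduler = torch.optim.lr_scheduler.LambdaLR(
        optimizer,
        lambda epoch, total=cfg["epochs"]: (1 - epoch / total) ** 0.9,
    )
    # ... run the usual 5-fold cross-validation training loop with this setup ...
```

Whether a given combination closes the gap to the original nnFormer repo would still need to be verified empirically per dataset.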
