Sharing the precise setup for your experiments #14

Closed
Zhack47 opened this issue Jan 19, 2024 · 1 comment

Comments

Zhack47 commented Jan 19, 2024

Hello @saikat-roy, thank you for open-sourcing this work!

I am currently trying to reproduce it on my side. However, using the same conditions as in your supplementary material (i.e. batch size = 2, 128x128x128 patches, and 250 iterations/epoch with the Small (S) model and k5 kernel), I get a runtime of 200 sec/epoch, which differs from your reported 117 sec.

I have ruled out the other possible causes, so I am now left with differences in the setup.
Could you share the setup you used (GPU, PyTorch version, ...) to train your model?

Regards

Zach
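
For reference, a minimal, hedged sketch of how such a per-epoch runtime could be measured in isolation from the data pipeline, under the setup described above (batch size 2, 128x128x128 patches, 250 iterations/epoch). This is not the repository's benchmark script; `model`, `optimizer`, and `loss_fn` are placeholders for whatever network and loss are actually being trained:

```python
# Timing sketch: measures one training epoch on synthetic patches so that
# data loading does not influence the sec/epoch figure being compared.
import time
import torch

def time_one_epoch(model, optimizer, loss_fn, device="cuda",
                   iters_per_epoch=250, batch_size=2, patch=(128, 128, 128)):
    model.train()
    torch.cuda.synchronize()                  # flush any queued GPU work first
    start = time.perf_counter()
    for _ in range(iters_per_epoch):
        # Synthetic single-channel input patches and dummy integer labels.
        x = torch.randn(batch_size, 1, *patch, device=device)
        y = torch.randint(0, 2, (batch_size, *patch), device=device)
        optimizer.zero_grad(set_to_none=True)
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
    torch.cuda.synchronize()                  # wait for the last kernels to finish
    return time.perf_counter() - start        # seconds per epoch
```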

saikat-roy (Member) commented

Hey @Zhack47. My first thought is that I most likely used a Tesla A100 40GB for baselining the S model, without any checkpointing, and that your GPU is different. If that doesn't help, I can share the rest of my setup.
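
A hedged sketch (plain PyTorch, not part of this repository) of how the environment details in question, GPU model and PyTorch/CUDA/cuDNN versions, could be printed on both machines for a direct comparison:

```python
# Report the environment details relevant to the runtime difference above.
import torch

print("PyTorch:", torch.__version__)
print("CUDA (build):", torch.version.cuda)
print("cuDNN:", torch.backends.cudnn.version())
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print("GPU:", props.name)
    print(f"GPU memory: {props.total_memory / 1024**3:.1f} GB")
```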
