
Cannot Reproduce Cream-S #160

Open
mingkai-zheng opened this issue Apr 15, 2023 · 1 comment
mingkai-zheng commented Apr 15, 2023

Hello, I'm attempting to reproduce the results of Cream-S, but my achieved accuracy of 77.04% falls short of the accuracy reported in the paper (77.6%). I used the configuration file provided at https://github.com/microsoft/Cream/blob/main/Cream/experiments/configs/retrain/287.yaml, setting Net.SELECTION to 287 and training on 16 GPUs with a batch size of 128 per GPU, as per the paper's specifications. However, I noticed that this configuration file employs RandAugment instead of AutoAugment (as mentioned in the paper), and also incorporates random erasing, which was not discussed in the paper. This discrepancy is causing confusion. Could you please clarify the precise training strategy for Cream-S? Additionally, was the same training strategy applied to all architectures presented in the paper?
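To make the discrepancy concrete, here is a minimal sketch of how the two augmentation setups differ in a timm-style input pipeline (this assumes the retraining code builds its transforms via `timm.data.create_transform`; the policy strings and `re_prob` value are illustrative placeholders, not values taken from `287.yaml`):

```python
from timm.data import create_transform

# RandAugment + random erasing, roughly what the released 287.yaml seems to use
# (policy string and re_prob below are illustrative, not copied from the config)
randaug_transform = create_transform(
    input_size=224,
    is_training=True,
    auto_augment='rand-m9-mstd0.5',  # timm-style RandAugment policy string
    re_prob=0.2,                     # random erasing probability
)

# AutoAugment as described in the paper, with no random erasing
autoaug_transform = create_transform(
    input_size=224,
    is_training=True,
    auto_augment='original',         # timm's ImageNet AutoAugment policy
    re_prob=0.0,
)
```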

mingkai-zheng (Author) commented

I also tried retraining with only 8 GPUs and a batch size of 128 per GPU, which is exactly the setting in your config file; the result is 77.07%, similar to the 16-GPU setting.
