
About Multi-GPUs for training? #26

Open
GreenLimeSia opened this issue Sep 18, 2021 · 7 comments
Comments

@GreenLimeSia

Hi, authors:

I found an issue when training with multiple GPUs: only a single GPU is used even though several GPUs are specified. Could you look into this when you have time?

Thanks!

@imlixinyang
Owner

Hi! The code is expected to support multi-GPU training now via DataParallel.
Can you share which command you used?
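
For reference, a minimal sketch of how DataParallel-based multi-GPU training is typically wired up (the Gen class below is a toy stand-in, not the repository's actual code):

```python
import torch
import torch.nn as nn

class Gen(nn.Module):
    """Toy stand-in for the repository's generator; names are illustrative."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU())

    def forward(self, x):
        return self.net(x)

gen = Gen().cuda()
if torch.cuda.device_count() > 1:
    # Replicate the module on each listed GPU; every call gen(x) then
    # splits the batch along dim 0 and gathers the outputs on GPU 0.
    gen = nn.DataParallel(gen, device_ids=[0, 1])

x = torch.randn(8, 3, 128, 128).cuda()
out = gen(x)  # the batch of 8 is processed as 4 + 4 on the two GPUs
```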

@GreenLimeSia
Author

I trained the model with your code and the official command, i.e. python core/train.py --config configs/celeba-hq.yaml --gpus 0,1.
However, only a single GPU is used during training. The reason may be that the "Gen" class has no forward(self, ...) method, so DataParallel never gets a chance to scatter the batch across the GPUs. Can you help me?
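
If that diagnosis is right, it would explain the single-GPU behaviour: nn.DataParallel only parallelizes calls that go through the wrapped module's forward(); calling other methods directly bypasses the scatter/gather and runs on one device. A small sketch (class and method names are illustrative, not the actual HiSD code):

```python
import torch
import torch.nn as nn

class Gen(nn.Module):
    """Toy stand-in with custom entry points instead of forward()."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(16, 16)
        self.decoder = nn.Linear(16, 16)

    def encode(self, x):
        return self.encoder(x)

    def decode(self, h):
        return self.decoder(h)

gen = nn.DataParallel(Gen().cuda(), device_ids=[0, 1])
x = torch.randn(8, 16).cuda()

# This bypasses DataParallel entirely and runs on a single GPU:
h = gen.module.encode(x)

# Only gen(...) is parallelized, so a common workaround is a forward()
# that dispatches to the right sub-method, e.g.:
#     def forward(self, x, mode):
#         return self.encode(x) if mode == 'encode' else self.decode(x)
# and then: h = gen(x, 'encode')  # scattered across both GPUs
```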

@GreenLimeSia
Author

A link related to this issue is here. Can you look into it when you have time? @imlixinyang
Thanks!

@imlixinyang
Owner

Try the command "python core/train.py --config configs/celeba-hq.yaml --gpus 0 1".
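
For anyone hitting the same problem: the space-separated form matters when the flag is declared with argparse's nargs='+'. The exact setup in core/train.py may differ, but under that assumption:

```python
import argparse

parser = argparse.ArgumentParser()
# Assumed declaration: space-separated GPU ids, e.g. "--gpus 0 1".
parser.add_argument('--gpus', type=int, nargs='+', default=[0])

args = parser.parse_args(['--gpus', '0', '1'])
print(args.gpus)  # [0, 1] -> both ids reach DataParallel's device_ids

# "--gpus 0,1" does not match this declaration: with type=int it raises
# "invalid int value", and with type=str it becomes the single item
# ['0,1'], which downstream code may treat as just one device.
```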

@GreenLimeSia
Author

I will try it now. Thanks for your reply. I am doing new work based on your novel method and will cite it.
Thanks for your hard work.

@GreenLimeSia
Author

It works now. Thanks again. @imlixinyang

@imlixinyang
Owner

Glad to hear that! Good luck with your research.
