As the title describes, does standalone mode support multiple GPUs to speed up training?
We don't provide multi-GPU support in the standalone module. However, you can use PyTorch's DP (DataParallel) module inside the `train` function of `SGDSerialClientTrainer`.
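A minimal sketch of that suggestion, assuming a hypothetical `train_step` helper rather than FedLab's actual `SGDSerialClientTrainer` code: wrap the client model in `torch.nn.DataParallel` so each forward pass is scattered across the visible GPUs, and fall back to the plain model on a single device.

```python
# Sketch only: the wrapping shown here would go inside the project's
# train function; make_parallel/train_step are illustrative names.
import torch
import torch.nn as nn

def make_parallel(model: nn.Module) -> nn.Module:
    """Wrap `model` in DataParallel when more than one GPU is visible."""
    if torch.cuda.device_count() > 1:
        # DataParallel replicates the module and splits each batch
        # along dim 0 across the available GPUs.
        model = nn.DataParallel(model)
    return model

def train_step(model: nn.Module, batch: torch.Tensor) -> torch.Tensor:
    # Called exactly like the plain model; DataParallel handles the
    # scatter/gather transparently, so the training loop is unchanged.
    return make_parallel(model)(batch)
```

Note that `DataParallel` only parallelizes within a single process; it speeds up each client's forward/backward pass but does not change the serial client-by-client loop itself.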
We define the following variables to further illustrate the idea: let K be the number of selected clients and N the number of available GPUs.
When K == N, each selected client is allocated to a GPU to train.
When K > N, multiple clients are allocated to a GPU, then they execute training sequentially in the GPU.
When K < N, you can simply configure training to use fewer GPUs.
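The three cases above reduce to one rule: map clients to GPUs round-robin, and each GPU trains its assigned clients sequentially. A small sketch (the function name `allocate_clients` is hypothetical, not part of the library):

```python
# Round-robin mapping of K selected clients onto N GPUs.
# K == N: one client per GPU; K > N: some GPUs get several clients,
# trained one after another; K < N: only K GPUs receive work.
from collections import defaultdict

def allocate_clients(client_ids, num_gpus):
    """Return a dict: GPU index -> list of client ids assigned to it."""
    allocation = defaultdict(list)
    for i, cid in enumerate(client_ids):
        allocation[i % num_gpus].append(cid)
    return dict(allocation)
```

For example, four selected clients on two GPUs yields `{0: [0, 2], 1: [1, 3]}`, so each GPU runs two clients sequentially.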
We need to set the number of GPUs in `gpu` and the specific distributed settings in the `distributed` configs.
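To make the shape of that configuration concrete, here is a hypothetical sketch; the actual key names and schema in the project's config files may differ, and `nccl`/`env://` are just the common PyTorch distributed defaults.

```python
# Hypothetical config sketch (key names are assumptions, not the
# project's real schema): `gpu` counts the devices to use, while
# `distributed` carries the settings handed to the backend.
config = {
    "gpu": 4,  # number of GPUs to train on
    "distributed": {
        "backend": "nccl",      # typical backend for multi-GPU PyTorch
        "world_size": 4,        # one process per GPU
        "init_method": "env://",
    },
}
```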
The implementation is still in progress. Would anyone like to help?