Use torch distributed #123

Open
wants to merge 2 commits into pytorch-v1.1

Conversation

mucunwuxian

First, thanks a lot for your work!
I love the structure of HRNet. 👍✨

I was wondering why the GPU-parallel method used for training differs from the one used for testing. I also noticed an earlier issue discussing how to run the code when only one GPU is available. In addition, I can't use `nn.DataParallel` because I use an RTX 2080, so I'd be glad if everything were unified on `nn.parallel.DistributedDataParallel`.

So I've put together this fix. What do you think?

Sincerely yours,
Mucun
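For reference, here is a minimal sketch of what wrapping a model in `nn.parallel.DistributedDataParallel` looks like. This is not the code from this PR; it is an illustration assuming a single-process setup with the `gloo` backend so it runs even on CPU. The `nn.Linear` model is a hypothetical stand-in for HRNet, and the address/port values are placeholders. In real multi-GPU training you would launch one process per GPU (e.g. via `torch.distributed.launch`) and pass `device_ids=[local_rank]`.

```python
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

# Hypothetical single-process setup for illustration only; real training
# launches one process per GPU, each with its own rank and local_rank.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group(backend="gloo", rank=0, world_size=1)

model = nn.Linear(8, 2)  # stand-in for the actual HRNet model
# On GPU you would use: DDP(model.cuda(local_rank), device_ids=[local_rank])
ddp_model = DDP(model)

out = ddp_model(torch.randn(4, 8))
print(out.shape)  # torch.Size([4, 2])

dist.destroy_process_group()
```

With `DistributedDataParallel`, each process owns one replica and gradients are averaged across processes during `backward()`, which is also why it avoids the single-process `nn.DataParallel` path that some RTX cards have trouble with.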

@mucunwuxian mucunwuxian changed the base branch from master to pytorch-v1.1 April 12, 2020 07:10