How to use this code in PyTorch 1.0 #36

Open
mingrui-xie opened this issue Mar 5, 2019 · 2 comments

@mingrui-xie

Because inplace_abn (the third-party library this project uses) has some bugs when training on 8 GPUs with PyTorch 0.4, we need to update to the newest BN from https://github.com/mapillary/inplace_abn if we want to train on 8 GPUs.
But the newest BN now requires DistributedDataParallel instead of DataParallel. So could you please create a branch that uses the newest BN with PyTorch 1.0, or give me some advice on how to change this project to make it compatible with the newest BN?
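
For context, the swap I have in mind is roughly the following. This is only a sketch: it assumes the pip-installed inplace-abn package, and the old import path is my guess at how this repo wires it up rather than its actual code.

```python
# Sketch of swapping the bundled inplace-ABN for the new pip package.
# Assumes `pip install inplace-abn`; the old import path below is only a guess
# at how this repo wires it up.
from functools import partial

from inplace_abn import InPlaceABNSync   # new package (mapillary/inplace_abn)
# from libs import InPlaceABNSync        # old bundled copy (PyTorch 0.4 era)

# The new InPlaceABNSync synchronizes BN statistics over the default
# torch.distributed process group, so training must run with
# DistributedDataParallel (one process per GPU) rather than DataParallel.
BatchNorm2d = partial(InPlaceABNSync, activation="identity")
```
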
Thank you very much!

@speedinghzl (Owner)

Good suggestion, but I'm afraid I don't have the time to do this.
You can simply replace the current inplace-ABN with the newest one and add torch.distributed to train.py. You can find more information in the PyTorch documentation or in some examples.
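
A rough sketch of what that could look like in train.py is below; build_model() and build_dataset() are just placeholders for the repo's own code, and the exact arguments will depend on your setup:

```python
# Rough sketch of a DistributedDataParallel setup for train.py.
# Launch with: python -m torch.distributed.launch --nproc_per_node=8 train.py
# build_model() and build_dataset() are placeholders, not this repo's functions.
import argparse

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler

parser = argparse.ArgumentParser()
parser.add_argument("--local_rank", type=int, default=0)  # set by the launcher
args = parser.parse_args()

torch.cuda.set_device(args.local_rank)
dist.init_process_group(backend="nccl", init_method="env://")

model = build_model().cuda()                       # placeholder: network using InPlaceABNSync
model = DistributedDataParallel(model, device_ids=[args.local_rank],
                                output_device=args.local_rank)

dataset = build_dataset()                          # placeholder: segmentation dataset
sampler = DistributedSampler(dataset)              # each process gets its own shard
loader = DataLoader(dataset, batch_size=2, sampler=sampler, num_workers=4)

for epoch in range(150):
    sampler.set_epoch(epoch)                       # reshuffle the shards every epoch
    for images, labels in loader:
        ...                                        # forward / loss / backward as before
```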

@Leodora commented Jun 27, 2019

I have tried the newest inplace-abn in my model (PSPNet). You can follow the script on this page: https://oldpan.me/archives/pytorch-to-use-multiple-gpus. Meanwhile, make sure that you have followed the steps in the PyTorch documentation.
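
Following the PyTorch doc, each process is started by the launcher, e.g. `python -m torch.distributed.launch --nproc_per_node=8 train.py`. A small sanity check I find useful (just a sketch, assuming the skeleton above) is:

```python
# Quick sanity check (sketch, not repo code): confirm the distributed
# process group is initialized before training starts.
import torch
import torch.distributed as dist

if dist.is_available() and dist.is_initialized():
    print(f"rank {dist.get_rank()}/{dist.get_world_size()} "
          f"using GPU {torch.cuda.current_device()}")
```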
