
AttributeError: 'DataParallelWithCallback' object has no attribute 'netEMA' #26

Open
Ha0Tang opened this issue May 13, 2022 · 3 comments


Ha0Tang commented May 13, 2022

No description provided.

edgarschnfld (Contributor) commented

Hi,

The problem was caused by the last commit to the updateEMA function: 39fe99b
(it used model.netEMA instead of model.module.netEMA; the DataParallel wrapper only exposes the wrapped model through its .module attribute).

I fixed it now with a new commit.
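To illustrate why the fix is needed: wrapping a model in DataParallel (and likewise DataParallelWithCallback, which subclasses it) hides the original module's custom attributes behind .module. The sketch below uses a toy stand-in for the repo's model; the Model class and its netEMA attribute are illustrative, not the project's actual code.

```python
import torch.nn as nn

# Toy stand-in for the repo's model, which carries a netEMA submodule.
class Model(nn.Module):
    def __init__(self):
        super().__init__()
        self.netEMA = nn.Linear(4, 4)

model = Model()
wrapped = nn.DataParallel(model)  # DataParallelWithCallback behaves the same way

# The wrapper does not forward the inner module's attributes:
print(hasattr(wrapped, "netEMA"))         # False -> wrapped.netEMA raises AttributeError
# The original module lives under .module, so this is the correct access:
print(wrapped.module.netEMA is model.netEMA)  # True
```

This is why the commit had to change model.netEMA back to model.module.netEMA.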


Ha0Tang commented May 14, 2022

Thanks.

By the way, I cannot find a num_workers parameter in the dataloader call.

edgarschnfld (Contributor) commented

num_workers can be passed as an argument to the following call:

dataloader_train = torch.utils.data.DataLoader(dataset_train, batch_size = opt.batch_size, shuffle = True, drop_last=True)

To make this easier, I have now added an optional --num_workers flag in case you don't want to use the default value. For example, running with 4 workers looks like this:

python train.py --name oasis_ade20k --dataset_mode ade20k --gpu_ids 0,1 \
--dataroot /some/path/ADEChallengeData2016 --batch_size 32 --num_workers 4

This automatically sets num_workers for the dataloader call shown above. It's now pushed to master.
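For reference, the resulting call would look roughly like the sketch below. The toy dataset and the SimpleNamespace standing in for the parsed command-line options are assumptions for the sake of a self-contained example; only the DataLoader arguments mirror the snippet quoted above.

```python
import torch
from types import SimpleNamespace

# Stand-ins for the script's objects (illustrative only): a toy dataset and
# an options object playing the role of the parsed --batch_size/--num_workers flags.
dataset_train = torch.utils.data.TensorDataset(torch.randn(70, 3))
opt = SimpleNamespace(batch_size=32, num_workers=2)

# The call quoted above, extended with the num_workers argument:
dataloader_train = torch.utils.data.DataLoader(
    dataset_train,
    batch_size=opt.batch_size,
    shuffle=True,
    drop_last=True,  # the incomplete final batch (70 % 32 = 6 samples) is dropped
    num_workers=opt.num_workers,  # worker processes that load batches in parallel
)

for (batch,) in dataloader_train:
    print(batch.shape)  # each batch holds opt.batch_size samples
```

With num_workers=0 (the PyTorch default) batches are loaded in the main process; any positive value spawns that many worker processes.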
