
Model dimension mismatch when retargeting with a new skeleton #222

Open
huang-1030 opened this issue Mar 30, 2023 · 1 comment

Comments

@huang-1030

Thanks for sharing your work!
In the retargeting project, I used the lafan1 skeleton as input, but when I run demo.py the model-loading error below appears. Does this mean I need to retrain with this skeleton?

Traceback (most recent call last):
  File "eval_single_pair.py", line 97, in <module>
    main()
  File "eval_single_pair.py", line 76, in main
    model.load(epoch=20000)
  File "E:\python\motion_editing\retargeting\models\architecture.py", line 274, in load
    model.load(os.path.join(self.model_save_dir, 'topology{}'.format(i)), epoch)
  File "E:\python\motion_editing\retargeting\models\integrated.py", line 82, in load
    self.auto_encoder.load_state_dict(torch.load(os.path.join(path, 'auto_encoder.pt'), map_location=self.args.cuda_device), False)
  File "E:\anaconda\lib\site-packages\torch\nn\modules\module.py", line 1483, in load_state_dict
    self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for AE:
size mismatch for enc.layers.0.0.mask: copying a param with shape torch.Size([184, 92, 15]) from checkpoint, the shape in current model is torch.Size([176, 88, 15]).
size mismatch for enc.layers.0.0.weight: copying a param with shape torch.Size([184, 92, 15]) from checkpoint, the shape in current model is torch.Size([176, 88, 15]).
size mismatch for enc.layers.0.0.bias: copying a param with shape torch.Size([184]) from checkpoint, the shape in current model is torch.Size([176]).
size mismatch for enc.layers.0.0.offset_enc.bias: copying a param with shape torch.Size([184]) from checkpoint, the shape in current model is torch.Size([176]).
size mismatch for enc.layers.0.0.offset_enc.weight: copying a param with shape torch.Size([184, 69]) from checkpoint, the shape in current model is torch.Size([176, 66]).
size mismatch for enc.layers.0.0.offset_enc.mask: copying a param with shape torch.Size([184, 69]) from checkpoint, the shape in current model is torch.Size([176, 66]).
size mismatch for enc.layers.0.1.weight: copying a param with shape torch.Size([96, 184]) from checkpoint, the shape in current model is torch.Size([96, 176]).
size mismatch for dec.layers.1.1.weight: copying a param with shape torch.Size([184, 96]) from checkpoint, the shape in current model is torch.Size([176, 96]).
size mismatch for dec.layers.1.2.mask: copying a param with shape torch.Size([92, 184, 15]) from checkpoint, the shape in current model is torch.Size([88, 176, 15]).
size mismatch for dec.layers.1.2.weight: copying a param with shape torch.Size([92, 184, 15]) from checkpoint, the shape in current model is torch.Size([88, 176, 15]).
size mismatch for dec.layers.1.2.bias: copying a param with shape torch.Size([92]) from checkpoint, the shape in current model is torch.Size([88]).
size mismatch for dec.layers.1.2.offset_enc.bias: copying a param with shape torch.Size([92]) from checkpoint, the shape in current model is torch.Size([88]).
size mismatch for dec.layers.1.2.offset_enc.weight: copying a param with shape torch.Size([92, 69]) from checkpoint, the shape in current model is torch.Size([88, 66]).
size mismatch for dec.layers.1.2.offset_enc.mask: copying a param with shape torch.Size([92, 69]) from checkpoint, the shape in current model is torch.Size([88, 66]).
size mismatch for dec.unpools.1.weight: copying a param with shape torch.Size([184, 96]) from checkpoint, the shape in current model is torch.Size([176, 96]).
size mismatch for dec.enc.layers.0.0.mask: copying a param with shape torch.Size([184, 92, 15]) from checkpoint, the shape in current model is torch.Size([176, 88, 15]).
size mismatch for dec.enc.layers.0.0.weight: copying a param with shape torch.Size([184, 92, 15]) from checkpoint, the shape in current model is torch.Size([176, 88, 15]).
size mismatch for dec.enc.layers.0.0.bias: copying a param with shape torch.Size([184]) from checkpoint, the shape in current model is torch.Size([176]).
size mismatch for dec.enc.layers.0.0.offset_enc.bias: copying a param with shape torch.Size([184]) from checkpoint, the shape in current model is torch.Size([176]).
size mismatch for dec.enc.layers.0.0.offset_enc.weight: copying a param with shape torch.Size([184, 69]) from checkpoint, the shape in current model is torch.Size([176, 66]).
size mismatch for dec.enc.layers.0.0.offset_enc.mask: copying a param with shape torch.Size([184, 69]) from checkpoint, the shape in current model is torch.Size([176, 66]).
size mismatch for dec.enc.layers.0.1.weight: copying a param with shape torch.Size([96, 184]) from checkpoint, the shape in current model is torch.Size([96, 176]).
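The mismatched shapes above (e.g. 184 vs 176, 92 vs 88, 69 vs 66) suggest the checkpoint was trained for a skeleton with a different joint count than the one being loaded, so every topology-dependent layer disagrees. A quick way to enumerate such disagreements before calling `load_state_dict` is to diff the two state dicts; the helper below is an illustrative sketch (the function name and the `auto_encoder.pt` path are taken from the traceback, not from the project's API):

```python
import torch

def report_shape_mismatches(model, checkpoint_path, map_location="cpu"):
    """List every parameter whose shape differs between checkpoint and model.

    Returns a list of (name, checkpoint_shape, model_shape) tuples and prints
    one line per mismatch, mirroring the load_state_dict error message.
    """
    state = torch.load(checkpoint_path, map_location=map_location)
    current = model.state_dict()
    mismatched = []
    for name, tensor in state.items():
        if name in current and current[name].shape != tensor.shape:
            mismatched.append((name, tuple(tensor.shape), tuple(current[name].shape)))
    for name, ckpt_shape, model_shape in mismatched:
        print(f"{name}: checkpoint {ckpt_shape} vs current model {model_shape}")
    return mismatched

# Hypothetical usage against the autoencoder from the traceback:
# report_shape_mismatches(model.auto_encoder, "topology0/auto_encoder.pt")
```

If every topology-dependent layer is mismatched, as here, the checkpoint and the input skeleton simply do not match, and retraining (or using the skeleton the checkpoint was trained on) is the likely fix.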

@zzk88862


I ran into the same problem. Have you solved it?
