size mismatch for seq2seq.encoder.embed_tokens.weight #208

CathyW77 opened this issue Aug 20, 2020 · 2 comments

@CathyW77

Hi, when I run
python synthesis.py --preset=presets/20180505_deepvoice3_ljspeech.json pretrained_models/20180505_deepvoice3_checkpoint_step000640000.pth sentences.txt pretrained_output

I get this error:
Command line args:
{'--checkpoint-postnet': None,
'--checkpoint-seq2seq': None,
'--file-name-suffix': '',
'--help': False,
'--hparams': '',
'--max-decoder-steps': '500',
'--output-html': False,
'--preset': 'presets/deepvoice3_vctk.json',
'--replace_pronunciation_prob': '0.0',
'--speaker_id': None,
'<checkpoint>': 'pretrained_models/20171222_deepvoice3_vctk108_checkpoint_step000300000.pth',
'<dst_dir>': './pretrained_output',
'<text_list_file>': './sentences.txt'}
Traceback (most recent call last):
File "synthesis.py", line 130, in <module>
model.load_state_dict(checkpoint["state_dict"])
File "/nfs/private/yanglu/zhrtvc/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 769, in load_state_dict
self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for MultiSpeakerTTSModel:
size mismatch for seq2seq.encoder.embed_tokens.weight: copying a param with shape torch.Size([149, 256]) from checkpoint, the shape in current model is torch.Size([159, 256]).

The same error also appears when I run:
python synthesis.py --preset=presets/deepvoice3_vctk.json pretrained_models/20171222_deepvoice3_vctk108_checkpoint_step000300000.pth sentences.txt pretrained_output
Command line args:
{'--checkpoint-postnet': None,
'--checkpoint-seq2seq': None,
'--file-name-suffix': '',
'--help': False,
'--hparams': '',
'--max-decoder-steps': '500',
'--output-html': False,
'--preset': 'presets/deepvoice3_vctk.json',
'--replace_pronunciation_prob': '0.0',
'--speaker_id': None,
'<checkpoint>': 'pretrained_models/20171222_deepvoice3_vctk108_checkpoint_step000300000.pth',
'<dst_dir>': 'pretrained_output',
'<text_list_file>': 'sentences.txt'}
Traceback (most recent call last):
File "synthesis.py", line 130, in <module>
model.load_state_dict(checkpoint["state_dict"])
File "/nfs/private/yanglu/zhrtvc/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 769, in load_state_dict
self.__class__.__name__, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for MultiSpeakerTTSModel:
size mismatch for seq2seq.encoder.embed_tokens.weight: copying a param with shape torch.Size([149, 256]) from checkpoint, the shape in current model is torch.Size([159, 256]).
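
For context: the mismatched tensor is the encoder's text-embedding table, and its first dimension is the vocabulary size, so the checkpoint was saved with 149 text symbols while the current code builds a model expecting 159 (most likely because the installed text frontend now defines more symbols than the one the checkpoint was trained with). A minimal sketch, using only plain PyTorch, to confirm what the checkpoint itself contains:

```python
import torch

# Inspect the saved embedding table directly, without building the model.
ckpt = torch.load(
    "pretrained_models/20171222_deepvoice3_vctk108_checkpoint_step000300000.pth",
    map_location="cpu",
)
weight = ckpt["state_dict"]["seq2seq.encoder.embed_tokens.weight"]
print(weight.shape)  # torch.Size([149, 256]) per the traceback above
```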

@ymzlygw commented Aug 20, 2020

Hey, try running without the --preset argument, or change the corresponding params in the preset so they match the checkpoint.
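
To see exactly which parameters disagree with the checkpoint, a small diagnostic sketch might help (the helper name is made up, not part of the repo); it compares every tensor shape in the freshly built model against the checkpoint:

```python
import torch

def report_shape_mismatches(model, checkpoint_path):
    """Print each parameter whose shape differs between the current
    model (built from the preset/hparams) and the saved checkpoint."""
    ckpt_sd = torch.load(checkpoint_path, map_location="cpu")["state_dict"]
    for name, tensor in model.state_dict().items():
        saved = ckpt_sd.get(name)
        if saved is not None and saved.shape != tensor.shape:
            print(f"{name}: checkpoint {tuple(saved.shape)} "
                  f"vs model {tuple(tensor.shape)}")

# e.g. report_shape_mismatches(model, "pretrained_models/20171222_deepvoice3_vctk108_checkpoint_step000300000.pth")
```

Every mismatch it prints points at a preset param (or frontend symbol set) that differs from what was used at training time.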

@CathyW77 (Author)

@ymzlygw Hi, I have tried
python synthesis.py pretrained_models/20180505_deepvoice3_checkpoint_step000640000.pth sentences.txt pretrained_output

but it still fails with this error:
File "synthesis.py", line 130, in
model.load_state_dict(checkpoint["state_dict"])
File "/nfs/private/yanglu/zhrtvc/venv/lib/python3.6/site-packages/torch/nn/modules/module.py", line 769, in load_state_dict
self.class.name, "\n\t".join(error_msgs)))
RuntimeError: Error(s) in loading state_dict for MultiSpeakerTTSModel:
size mismatch for seq2seq.encoder.embed_tokens.weight: copying a param with shape torch.Size([149, 128]) from checkpoint, the shape in current model is torch.Size([159, 128]).
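
If matching the original frontend/preset isn't possible, one workaround is to load everything that fits and merge the embedding table row by row. This is only a sketch, under the assumption that the extra 10 symbols were appended to the end of the symbol table (if the symbol set was reordered instead, the synthesized audio will be wrong):

```python
import torch

def load_with_vocab_mismatch(model, checkpoint_path):
    """Load a checkpoint whose embedding tables have a different
    vocabulary size, keeping the rows both versions share."""
    ckpt_sd = torch.load(checkpoint_path, map_location="cpu")["state_dict"]
    new_sd = {}
    for name, current in model.state_dict().items():
        saved = ckpt_sd.get(name)
        if saved is None:
            new_sd[name] = current             # not in checkpoint: keep init
        elif saved.shape == current.shape:
            new_sd[name] = saved               # shapes agree: use checkpoint
        elif saved.dim() == 2 and saved.shape[1] == current.shape[1]:
            merged = current.detach().clone()  # vocab differs: merge row-wise
            n = min(saved.shape[0], current.shape[0])
            merged[:n] = saved[:n]
            new_sd[name] = merged
        else:
            new_sd[name] = current             # incompatible: keep init
    model.load_state_dict(new_sd)
```

The ten rows with no counterpart in the checkpoint keep their random initialization, so any sentence that uses those extra symbols will sound off.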
