[Question] Fine-tuning with LLaVA-1.5 hits a tensor size mismatch #1470

Open
hellangleZ opened this issue Apr 29, 2024 · 0 comments

Question

[2024-04-29 06:52:01,294] [INFO] [partition_parameters.py:345:__exit__] finished initializing model - num_params = 295, num_elems = 6.76B
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:00<00:00, 3.14it/s]
Some weights of LlavaLlamaForCausalLM were not initialized from the model checkpoint at /aml/llama2chat and are newly initialized: ['model.mm_projector.0.bias', 'model.mm_projector.0.weight', 'model.mm_projector.2.bias', 'model.mm_projector.2.weight']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
/aml/llava/lib/python3.10/site-packages/torch/_utils.py:831: UserWarning: TypedStorage is deprecated. It will be removed in the future and UntypedStorage will be the only storage class. This should only matter to you if you are using storages directly. To access UntypedStorage directly, use tensor.untyped_storage() instead of tensor.storage()
  return self.fget.__get__(instance, owner)()
Loading checkpoint shards: 100%|███████████████████████████████████████████████████████████████████████████████████████| 2/2 [00:03<00:00, 1.80s/it]
Traceback (most recent call last):
  File "/aml/LLaVA-main/llava/train/train_mem.py", line 5, in <module>
    train(attn_implementation="flash_attention_2")
  File "/aml/LLaVA-main/llava/train/train.py", line 827, in train
    model = LlavaLlamaForCausalLM.from_pretrained(
  File "/aml/llava/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3850, in from_pretrained
    ) = cls._load_pretrained_model(
  File "/aml/llava/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4335, in _load_pretrained_model
    raise RuntimeError(f"Error(s) in loading state_dict for {model.__class__.__name__}:\n\t{error_msg}")
RuntimeError: Error(s) in loading state_dict for LlavaLlamaForCausalLM:
    size mismatch for model.embed_tokens.weight: copying a param with shape torch.Size([32000, 4096]) from checkpoint, the shape in current model is torch.Size([32001, 4096]).
    size mismatch for lm_head.weight: copying a param with shape torch.Size([32000, 4096]) from checkpoint, the shape in current model is torch.Size([32001, 4096]).
    You may consider adding `ignore_mismatched_sizes=True` in the model `from_pretrained` method.

[2024-04-29 06:52:06,327] [INFO] [launch.py:319:sigkill_handler] Killing subprocess 48707
[2024-04-29 06:52:06,327] [INFO] [launch.py:319:sigkill_handler] Killing subprocess 48708
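
The mismatch is between the 32000-row embed_tokens/lm_head tensors in the /aml/llama2chat checkpoint and the 32001 rows the current model expects. Below is a minimal sketch of the two workarounds the error message points at, using the standard transformers API; the LlamaForCausalLM class, the added pad token, and the output path are assumptions for illustration, not taken from this issue or the LLaVA training script.

```python
from transformers import AutoTokenizer, LlamaForCausalLM

base_path = "/aml/llama2chat"  # base checkpoint named in the log above

# Option A: let from_pretrained re-initialize the tensors whose shapes differ,
# as the error message itself suggests.
model = LlamaForCausalLM.from_pretrained(base_path, ignore_mismatched_sizes=True)

# Option B: grow the base checkpoint's embeddings to 32001 rows before training,
# so the shapes match a tokenizer that carries one extra special token.
tokenizer = AutoTokenizer.from_pretrained(base_path)
tokenizer.add_special_tokens({"pad_token": "<pad>"})  # assumption: the extra token is a pad token
model = LlamaForCausalLM.from_pretrained(base_path)
model.resize_token_embeddings(len(tokenizer))
model.save_pretrained("/aml/llama2chat-resized")      # hypothetical output path
tokenizer.save_pretrained("/aml/llama2chat-resized")
```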
