Hi, looks like you're trying to merge two models with different vocabulary (and thus embedding) sizes. This can happen when you've added tokens like a padding token to the vocabulary. I would recommend merging before adding vocab items if possible. If you only have access to a llama model with added vocab items, just remove them from the model before merging, either by manually slicing the embedding matrix or via the huggingface resize method (which I think should work).
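A minimal sketch of the trimming step described above, assuming a LLaMA-style model whose base vocabulary is 32000 tokens; the paths are placeholders for your own checkpoints:

```python
from transformers import AutoModelForCausalLM

BASE_VOCAB = 32000  # assumed base llama vocab size, before added tokens

model = AutoModelForCausalLM.from_pretrained("path/to/model-with-added-tokens")

# Shrink the input and output embedding matrices back to the base size
# (this drops the rows for the added tokens and updates the config).
# Manually slicing the embedding weight tensors would have the same effect.
model.resize_token_embeddings(BASE_VOCAB)

model.save_pretrained("path/to/model-trimmed")
```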
If you are using a llama2 model and have added a pad_token, call base_model.resize_token_embeddings(32002) on the pretrained llama2 model, then load the tokenizer of the locally PEFT fine-tuned model; it should work.
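A sketch of that workaround, assuming a Llama-2 base checkpoint and a locally saved PEFT adapter whose tokenizer added two tokens (e.g. a pad token), growing the vocab from 32000 to 32002; the model name and adapter path are placeholders:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# Grow the embeddings to match the fine-tuned checkpoint's vocab size
# *before* attaching the adapter, so the tensor shapes line up.
base_model.resize_token_embeddings(32002)

model = PeftModel.from_pretrained(base_model, "path/to/peft-adapter")
tokenizer = AutoTokenizer.from_pretrained("path/to/peft-adapter")
```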
RuntimeError: The size of tensor a (32000) must match the size of tensor b (32002) at non-singleton dimension 0 — how can this be resolved?