An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
My own task or dataset (give details below)
Reproduction
To reproduce:
```python
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained("t5-base")
>>> inputs = tokenizer("foo " * 2000, return_tensors="pt")
```
Outputs: `Token indices sequence length is longer than the specified maximum sequence length for this model (4001 > 512). Running this sequence through the model will result in indexing errors`
Thanks a lot for the issue @marksverdhei. You're right, T5 has no fixed max length, so this warning is confusing.
The reason lots of people associate T5 with a max length of 512 is that it was pretrained with a maximum sequence length of 512, but it is not limited to that length!
It has been shown to generalize well to longer sequences. Also see: #5204
I think it is a bit confusing. As the paper says, "We use a maximum sequence length of 512". Note that this is the number of tokens, not words. I guess this corresponds to the max_input_length = 512 parameter, i.e. the maximum number of tokens the underlying model can take; you cannot change it.
But for longer text, you can write a script that breaks it into 512-token chunks and feeds them to the model one at a time, and I guess that is where max_source_length (the length of the source text) is relevant. A rough sketch of this is below.
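Not code from this thread, just for illustration: a minimal sketch of the chunking idea, assuming a summarization-style prefix and naive fixed-size windows that ignore sentence boundaries.

```python
# Hedged sketch: split an over-long input into 512-token windows and run each
# window through T5 separately. The "summarize: " prefix, chunk size, and
# generation settings are assumptions for illustration, not the issue's code.
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

text = "summarize: " + "foo " * 2000
input_ids = tokenizer(text, return_tensors="pt").input_ids[0]

chunk_size = 512  # the pretraining length quoted from the paper
pieces = []
for start in range(0, input_ids.size(0), chunk_size):
    chunk = input_ids[start : start + chunk_size].unsqueeze(0)
    out = model.generate(chunk, max_length=64)
    pieces.append(tokenizer.decode(out[0], skip_special_tokens=True))

print(" ".join(pieces))
```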
With T5 you can change the max input length. T5 uses relative positional embeddings, which make it possible to process sequences of arbitrary length, as opposed to the classical (absolute) positional embeddings used in the original Transformer architecture.
It is just that a length of 512 tokens was used during training, as a trade-off between processing long-enough texts and not using too much time and memory.
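For illustration (not code from this thread): a minimal sketch showing that t5-base accepts an input far beyond 512 tokens without indexing errors; the practical limits are memory and speed. The summarization prefix and generation settings are assumptions.

```python
# Hedged sketch: because T5's relative positional embeddings are not tied to a
# fixed maximum length, a ~4000-token input runs through generate() without error.
import torch
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")

inputs = tokenizer("summarize: " + "foo " * 2000, return_tensors="pt")
print(inputs.input_ids.shape)  # well past 512 tokens; the length warning still prints

with torch.no_grad():
    out = model.generate(**inputs, max_length=32)  # slower and heavier on memory, but no indexing error
print(tokenizer.decode(out[0], skip_special_tokens=True))
```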
System Info
- `transformers` version: 4.16.2
- Python version: 3.8.12
Who can help?
@patrickvonplaten @saul
Information
Tasks
An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
Reproduction
To reproduce: run the snippet above. Despite the warning, no indexing errors occur.
Expected behavior
The warning is wrong for T5 since it uses relative positional embeddings.
I would expect no warning, or otherwise a warning about memory usage.
I suppose this issue should apply to all models that do not have fixed-length positional encodings.
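As a side note, and purely an assumption on my part rather than a fix confirmed in this thread: the warning is keyed off the tokenizer's model_max_length attribute, so a user who knows their model copes with longer inputs can override it to silence the check.

```python
# Hedged sketch: raising model_max_length so the "longer than 512" check no longer fires.
# The value 100_000 is an arbitrary placeholder, not a recommended setting.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-base", model_max_length=100_000)
inputs = tokenizer("foo " * 2000, return_tensors="pt")  # no max-length warning is printed
print(inputs.input_ids.shape[1])
```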