
Unsuppressable warning: "<model> will not detect padding tokens in inputs_embeds" #30871

Open

naimenz opened this issue May 16, 2024 · 1 comment
naimenz commented May 16, 2024

System Info

  • transformers version: 4.39.3
  • Platform: macOS-13.4-arm64-arm-64bit
  • Python version: 3.10.13
  • Huggingface_hub version: 0.22.2
  • Safetensors version: 0.4.3
  • Accelerate version: 0.29.2
  • Accelerate config: not found
  • PyTorch version (GPU?): 2.2.2 (False)
  • Tensorflow version (GPU?): 2.16.1 (False)
  • Flax version (CPU?/GPU?/TPU?): not installed (NA)
  • Jax version: not installed
  • JaxLib version: not installed
  • Using GPU in script?: no
  • Using distributed or parallel set-up in script?: no

Who can help?

No response

Information

  • The official example scripts
  • My own modified scripts

Tasks

  • An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
  • My own task or dataset (give details below)

Reproduction

Running this example prints the warning on every loop iteration, even though the input contains no padding.

from transformers import AutoTokenizer, GPT2ForSequenceClassification

model = GPT2ForSequenceClassification.from_pretrained("gpt2")
# GPT-2 has no pad token by default; reuse EOS as the pad token so batching works.
model.config.pad_token_id = model.config.eos_token_id

tokenizer = AutoTokenizer.from_pretrained("gpt2")
text = ["Hello, my dog is cute."]
inp = tokenizer(text, return_tensors="pt")
# Look up the embeddings manually and pass them via inputs_embeds.
embeds = model.get_input_embeddings()(inp["input_ids"])

for _ in range(3):
    out = model(inputs_embeds=embeds)

Output:

GPT2ForSequenceClassification will not detect padding tokens in `inputs_embeds`. Results may be unexpected if using padding tokens in conjunction with `inputs_embeds.`
GPT2ForSequenceClassification will not detect padding tokens in `inputs_embeds`. Results may be unexpected if using padding tokens in conjunction with `inputs_embeds.`
GPT2ForSequenceClassification will not detect padding tokens in `inputs_embeds`. Results may be unexpected if using padding tokens in conjunction with `inputs_embeds.`

Expected behavior

The warning should be printed only once, or at least there should be a way to disable it.

I was torn between filing this as a bug report or a feature request. Displaying the warning makes sense, but my project frequently runs models on embeddings, and the warning really spams the logs.

For a while I was only running batches of size 1, so I suppressed the warning by temporarily setting model.config.pad_token_id = None (sketched below). The problem with this is that I then can't run batches of size > 1, even if I'm careful to make all sequences the same length with no padding tokens.
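
A minimal sketch of that workaround, assuming the model is only ever called with batch size 1 (the helper name is mine, not anything from transformers):

from contextlib import contextmanager

@contextmanager
def pad_token_unset(model):
    """Temporarily clear pad_token_id; GPT2ForSequenceClassification skips the
    inputs_embeds warning when no pad token is configured. Only safe for
    batch size 1, since batches > 1 require a pad token."""
    saved = model.config.pad_token_id
    model.config.pad_token_id = None
    try:
        yield model
    finally:
        model.config.pad_token_id = saved

# Usage:
with pad_token_unset(model):
    out = model(inputs_embeds=embeds)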

I'm not sure of the best way to handle this, but either using the warnings library so that the message prints only once and can be suppressed with the standard machinery, or adding a flag to disable the warning, would help. In the meantime the message can be filtered out on the user side, as sketched below.
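
A sketch of such a user-side filter; transformers loggers are standard logging loggers, but the exact logger name targeted here is an assumption:

import logging

class SuppressEmbedsPaddingWarning(logging.Filter):
    """Drop the 'will not detect padding tokens in inputs_embeds' message."""
    def filter(self, record: logging.LogRecord) -> bool:
        return "will not detect padding tokens in `inputs_embeds`" not in record.getMessage()

# logging.Filter objects attached to a logger apply only to records logged
# directly to that logger (not to records propagated up from child loggers),
# so attach the filter to the module logger that emits the warning.
logging.getLogger("transformers.models.gpt2.modeling_gpt2").addFilter(
    SuppressEmbedsPaddingWarning()
)

Calling transformers.logging.set_verbosity_error() would also silence it, but at the cost of dropping every warning from the library.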

The earliest instance of the string `will not detect padding tokens in` that I could find in the codebase was introduced in #7501.

amyeroberts (Collaborator) commented

Hi @naimenz, thanks for opening this issue / feature request!

As you mentioned, I don't think we'd want to remove the warning completely; it's telling you something quite useful. One option would be to replace logger.warning with logger.warning_once so that the message is only emitted once per session, as sketched below. If you think this is a suitable solution, feel free to open a PR and I'll be happy to review 🤗
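
For reference, a sketch of the suggested change at the warning site (the surrounding code in modeling_gpt2.py is elided; warning_once is the lru_cache-backed variant transformers patches onto its loggers, which emits a given message once per process):

# Before: fires on every forward pass.
logger.warning(
    f"{self.__class__.__name__} will not detect padding tokens in `inputs_embeds`. Results may be "
    "unexpected if using padding tokens in conjunction with `inputs_embeds.`"
)

# After: cached by message, so it fires once per session.
logger.warning_once(
    f"{self.__class__.__name__} will not detect padding tokens in `inputs_embeds`. Results may be "
    "unexpected if using padding tokens in conjunction with `inputs_embeds.`"
)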
