I am trying to get word attributions running for the Longformer model.
from transformers import AutoTokenizer, AutoModelForQuestionAnswering
from transformers_interpret import QuestionAnsweringExplainer

question = "How many programming languages does BLOOM support?"
context = "BLOOM has 176 billion parameters and can generate text in 46 natural languages and 13 programming languages."

tokenizer = AutoTokenizer.from_pretrained("valhalla/longformer-base-4096-finetuned-squadv1")
model = AutoModelForQuestionAnswering.from_pretrained("valhalla/longformer-base-4096-finetuned-squadv1")

qa_explainer = QuestionAnsweringExplainer(
    model,
    tokenizer,
)

word_attributions = qa_explainer(
    question,
    context,
)

print(word_attributions)
print(qa_explainer.predicted_answer)
qa_explainer.visualize("bert_qa_viz.html")
With this code I get the following error:

AssertionError: There should be exactly three separator tokens: 2 in every sample for questions answering. You might also consider to set `global_attention_mask` manually in the forward function to avoid this error.
Is this expected? How do I get the word attributions for the Longformer model (if I can)?
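For context on why the assertion fires: Longformer's question-answering head derives a global attention mask from the separator tokens, and its RoBERTa-style encoding of a question/context pair is `<s> question </s></s> context </s>`, i.e. three `</s>` tokens per sample. If the explainer assembles the pair BERT-style with only two separators, that internal check fails. The following is a minimal sketch of that check and mask construction based on the error message and Longformer's input format, not the library's exact code; the function name and the token ids in the examples are illustrative.

```python
import torch

def compute_global_attention_mask(input_ids: torch.Tensor, sep_token_id: int) -> torch.Tensor:
    """Sketch of how Longformer's QA head builds its global attention mask:
    global attention on every token up to and including the first </s>
    (the question), local attention everywhere else."""
    sep_indices = (input_ids == sep_token_id).nonzero()  # (num_seps, 2): (batch, position)
    batch_size, seq_len = input_ids.shape
    # This is the check that fails: Longformer QA expects the RoBERTa-style
    # layout <s> question </s></s> context </s>, i.e. three </s> per sample.
    assert sep_indices.shape[0] == 3 * batch_size, (
        "There should be exactly three separator tokens in every sample"
    )
    first_sep = sep_indices.view(batch_size, 3, 2)[:, 0, 1]  # position of first </s>
    mask = torch.arange(seq_len).expand(batch_size, -1) <= first_sep.unsqueeze(1)
    return mask.long()

SEP = 2  # RoBERTa/Longformer </s> id (illustrative)

# BERT-style pair with two separators -- triggers the assertion,
# which appears to be what QuestionAnsweringExplainer produces:
bert_style = torch.tensor([[0, 5, 6, SEP, 7, 8, SEP]])
# RoBERTa/Longformer-style pair with three separators -- works:
longformer_style = torch.tensor([[0, 5, 6, SEP, SEP, 7, 8, SEP]])
```

As the error message suggests, passing a precomputed `global_attention_mask` to the model's forward call would bypass this internal computation, but that only helps here if transformers_interpret exposes a way to forward that argument to the model.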