
How to use transformers-interpret for sequence labelling, for example LayoutLMv3 #104

Open
deepanshudashora opened this issue Oct 7, 2022 · 1 comment


@deepanshudashora

I was testing it on LayoutLMv3 and I am running into the following error:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-47-f0c042620a72> in <module>
----> 1 word_attributions = ner_explainer(Image.open("/content/receipt_00073.png").convert("RGB"), ignored_labels=['O'])

3 frames
/usr/lib/python3.7/re.py in sub(pattern, repl, string, count, flags)
    192     a callable, it's passed the Match object and must return
    193     a replacement string to be used."""
--> 194     return _compile(pattern, flags).sub(repl, string, count)
    195 
    196 def subn(pattern, repl, string, count=0, flags=0):

TypeError: expected string or bytes-like object

The code I am using is:

from PIL import Image
from transformers_interpret import TokenClassificationExplainer

# model and processor are the fine-tuned LayoutLMv3 model and its LayoutLMv3Processor
ner_explainer = TokenClassificationExplainer(
    model,
    processor.tokenizer,
)
word_attributions = ner_explainer(Image.open("/content/receipt_00073.png").convert("RGB"), ignored_labels=['O'])
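
For comparison, TokenClassificationExplainer is normally called with a plain text string; the traceback above ends inside re.sub, which suggests the explainer tries to run regex substitutions over the input and fails when it receives a PIL Image. A minimal text-only sketch of the usual call (the NER checkpoint below is only illustrative, not a LayoutLM model):

from transformers import AutoModelForTokenClassification, AutoTokenizer
from transformers_interpret import TokenClassificationExplainer

# Illustrative text-only NER checkpoint
model = AutoModelForTokenClassification.from_pretrained("dslim/bert-base-NER")
tokenizer = AutoTokenizer.from_pretrained("dslim/bert-base-NER")

ner_explainer = TokenClassificationExplainer(model, tokenizer)

# The explainer takes a string, not an image
word_attributions = ner_explainer("My name is Wolfgang and I live in Berlin", ignored_labels=['O'])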
@SuryaThiru

Hi, I have a similar use case with LayoutLMv3ForTokenClassification and LayoutLMv3Processor. Would it be possible to interpret these models for token classification on datasets like SROIE?
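
For context on why the explainer's text-only interface does not map directly onto these models: LayoutLMv3 consumes an image together with OCR words and bounding boxes, so inputs are usually built with the processor rather than passed as a raw string. A rough sketch of the standard preprocessing and forward pass, assuming the base checkpoint (a model fine-tuned on SROIE would be substituted):

from PIL import Image
from transformers import LayoutLMv3Processor, LayoutLMv3ForTokenClassification

# Illustrative checkpoint; swap in a checkpoint fine-tuned on SROIE
processor = LayoutLMv3Processor.from_pretrained("microsoft/layoutlmv3-base")  # apply_ocr=True by default, needs Tesseract
model = LayoutLMv3ForTokenClassification.from_pretrained("microsoft/layoutlmv3-base")

image = Image.open("receipt.png").convert("RGB")
encoding = processor(image, return_tensors="pt")  # input_ids, attention_mask, bbox, pixel_values
outputs = model(**encoding)
predictions = outputs.logits.argmax(-1)  # one label id per token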
