
Binary Classification: How is predicted label computed? #98

Open
simonschoe opened this issue Aug 10, 2022 · 0 comments
Hi there,

I am observing the following strange behavior when using `pipeline` from the transformers library together with transformers-interpret:

```python
text = "Now Accord networks is a company in video, and he led the sales team, and the marketing group at Accord, and he took it from start up, sound familiar, it's from start up to $60 million company in two years."
classifier = pipeline('text-classification', model=model, tokenizer=tokenizer, device=0)
classifier(text)
# [{'label': 'LABEL_1', 'score': 0.9711543321609497}]
```
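For context on where the pipeline's score comes from: to my understanding, a text-classification pipeline takes the model's raw output logits, applies a softmax, and reports the argmax class with its probability. A minimal sketch of that step (the logits below are made-up illustrative values, not outputs of the model in this issue):

```python
import math

def softmax(logits):
    # Numerically stable softmax: shift by the max before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for a two-class model (LABEL_0, LABEL_1).
logits = [-1.5, 2.0]
probs = softmax(logits)
label_id = max(range(len(probs)), key=probs.__getitem__)
print(f"LABEL_{label_id}: {probs[label_id]:.4f}")  # → LABEL_1: 0.9707
```

If the explainer runs its own forward pass with a different post-processing step, that could be one place where the reported scores diverge.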

while transformers-interpret gives me slightly different scores:

```python
explainer = SequenceClassificationExplainer(model, tokenizer)
attributions = explainer(text)
html = explainer.visualize()
```

[attached screenshot: `explainer.visualize()` output showing a slightly different prediction score]

In both cases I use the exact same model and tokenizer.

I am grateful for any hint and/or advice! 🤗
