Problem with negative samples #66

Open
sky-2002 opened this issue Jul 31, 2023 · 3 comments
@sky-2002

Hi @LittlePea13 , I was just experimenting with REBEL, trying to see how it responds.
Following are a few negative samples and relations extracted:

# Negative Samples
negative_samples = [
    "Messi never played for China",
    "Messi was not a member of Portugal Football team",
    "Obama is not a resident of India",
]

# Relations extracted
[{'head': 'Messi', 'type': 'member of sports team', 'tail': 'China'}]
[{'head': 'Messi', 'type': 'member of sports team', 'tail': 'Portugal Football team'}]
[{'head': 'Obama', 'type': 'residence', 'tail': 'India'}]

These negative samples are factually correct, yet REBEL still extracts the (false) relations, giving no weight to the negation words. What are your views on this?
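For context, one crude inference-time workaround (purely illustrative, not part of REBEL; the cue list and function names below are assumptions) is to drop any triplet whose head-to-tail span in the source sentence contains a negation cue:

```python
# Illustrative post-filter (NOT part of REBEL): discard a triplet when a
# word-level negation cue appears between its head and tail mentions.
NEGATION_CUES = {"not", "never", "no", "without"}

def has_negation_between(sentence, head, tail):
    """Return True if a negation cue occurs between the head and tail mentions."""
    lower = sentence.lower()
    start = lower.find(head.lower())
    end = lower.find(tail.lower())
    if start == -1 or end == -1:
        return False  # mention not found verbatim; keep the triplet
    lo, hi = sorted((start, end))
    span_words = lower[lo:hi].split()
    return any(cue in span_words for cue in NEGATION_CUES)

def filter_negated(sentence, triplets):
    """Keep only triplets with no negation cue between head and tail."""
    return [t for t in triplets
            if not has_negation_between(sentence, t["head"], t["tail"])]

triplets = [{"head": "Messi", "type": "member of sports team", "tail": "China"}]
print(filter_negated("Messi never played for China", triplets))  # -> []
```

This is a blunt heuristic (it misses contractions like "wasn't" and negations outside the head–tail span), but it shows the shape of a purely post-hoc filter.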

@LittlePea13
Collaborator

Hi there!

Those are indeed very good examples of the shortcomings of REBEL. Negation is hard for many models to handle, and REBEL appears to be vulnerable to it as well. We didn't explore hard/negative examples in detail, but I imagine some robustness-oriented training approaches could help in these situations.

@sky-2002
Author

sky-2002 commented Aug 6, 2023

Thanks @LittlePea13. Can you point me to resources that discuss handling negations? I think this can cause issues if the model doesn't give enough weight to the negation. What approaches can I use, specifically at inference time? I am looking for an inference-time solution because I am not training or fine-tuning REBEL at the moment. @m0baxter @tomasonjo

@LittlePea13
Collaborator

Sorry, I was a bit AFK during August. I am afraid this would be very hard to deal with at inference time. You could run an NLI model, or this Triplet Critic, to assign a probability score to each triplet and filter out likely negatives based on it. I expect the Triplet Critic to work better for neutral statements than for contradictions, but NLI can help there as well. If you check how to use the multitask version of the model I linked, you should be able to obtain both NLI and Triplet Critic scores with a single forward pass.
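A minimal sketch of that NLI-based filtering idea (the templates, function names, and the model suggestion in the comments are assumptions, not from this thread): verbalize each triplet into a hypothesis sentence and keep it only if an NLI model does not label it a contradiction of the source sentence.

```python
# Sketch of NLI-based triplet filtering. The template table is purely
# illustrative; real use would cover the relation types you care about.
TEMPLATES = {
    "member of sports team": "{head} is a member of the sports team {tail}.",
    "residence": "{head} resides in {tail}.",
}

def verbalize(triplet):
    """Turn a REBEL-style triplet dict into an NLI hypothesis sentence."""
    template = TEMPLATES.get(triplet["type"], "{head} is related to {tail}.")
    return template.format(head=triplet["head"], tail=triplet["tail"])

def filter_with_nli(sentence, triplets, nli):
    """Keep triplets whose hypothesis the NLI model does not contradict.

    `nli(premise, hypothesis)` should return a lowercase label such as
    "entailment", "neutral", or "contradiction".
    """
    return [t for t in triplets
            if nli(sentence, verbalize(t)) != "contradiction"]

if __name__ == "__main__":
    # One possible `nli` backend (assumption: requires transformers and a
    # model download, so it is left commented out here):
    # from transformers import pipeline
    # clf = pipeline("text-classification", model="roberta-large-mnli")
    # def nli(premise, hypothesis):
    #     return clf(f"{premise} </s></s> {hypothesis}")[0]["label"].lower()
    pass
```

The pure verbalize/filter logic is model-agnostic, so the same code can score hypotheses with an NLI checkpoint, the Triplet Critic, or both.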
