NER Evaluation #21

Open
hatzel opened this issue Feb 1, 2024 · 0 comments

hatzel commented Feb 1, 2024

This may relate more to the paper than to the tool; sorry if this is the wrong channel.

How exactly is your NER evaluation performed? The relevant section of the paper reads as follows:

> We only consider the complete matching and use the micro F1 to evaluate NER task.
> Only when both the border and the type of the predicted entity and the true entity are the same will we regard it as a correct prediction.

Given that your example prompts in the appendix only list expected outputs of the form ["Japan", "LOC"], there is no direct access to a border/offset. Do you simply search for the answer text in the original sentence and take that index?
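
To make the question concrete, here is a minimal sketch of what I imagine the scoring might look like, assuming spans are recovered with a plain `str.find` over the sentence and then scored with micro F1 over exact (border + type) matches. All names here are my own guesses, not taken from your code:

```python
# Sketch only: (a) map predicted ["text", "TYPE"] pairs back to character
# offsets via the first occurrence in the sentence, (b) count exact
# span+type matches and compute micro-averaged F1 over the whole dataset.

def to_spans(sentence, pairs):
    """Map [text, type] pairs to (start, end, type) spans via first match."""
    spans = set()
    for text, ent_type in pairs:
        start = sentence.find(text)
        if start != -1:
            spans.add((start, start + len(text), ent_type))
    return spans

def micro_f1(gold_spans, pred_spans):
    """Micro F1 over exact (border + type) matches, pooled across sentences."""
    tp = fp = fn = 0
    for gold, pred in zip(gold_spans, pred_spans):
        tp += len(gold & pred)
        fp += len(pred - gold)
        fn += len(gold - pred)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

sentence = "Japan began the defence of their Asian Cup title."
gold = [to_spans(sentence, [["Japan", "LOC"]])]
pred = [to_spans(sentence, [["Japan", "LOC"], ["Asian Cup", "MISC"]])]
print(micro_f1(gold, pred))  # 0.666... (1 TP, 1 FP, 0 FN)
```

If this is roughly what you do, I am mainly wondering how you handle entities whose surface text occurs more than once in the sentence, since a first-match search only ever yields the earliest offset.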
