BoW metric: wrong definition? #30

Open
bertsky opened this issue Feb 14, 2023 · 0 comments

bertsky commented Feb 14, 2023

In the implementation of the bag-of-words error rate, you pick the maximum over the positive deltas (i.e. what one could call the sum of false-negative frequencies) vs. the negative deltas (i.e. the sum of false-positive frequencies).

What's the logic behind this? What definition is it based upon?
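For concreteness, here is a minimal sketch of the behaviour I am describing (illustrative only, not the actual code), with deltas taken as per-token GT − OCR frequency differences:

```python
from collections import Counter

def bow_numerator_as_implemented(gt_tokens, ocr_tokens):
    # per-token frequency differences, GT minus OCR
    gt, ocr = Counter(gt_tokens), Counter(ocr_tokens)
    deltas = {w: gt[w] - ocr[w] for w in gt.keys() | ocr.keys()}
    false_negatives = sum(d for d in deltas.values() if d > 0)   # GT tokens missing in the OCR
    false_positives = sum(-d for d in deltas.values() if d < 0)  # OCR tokens not in the GT
    # the questionable part: only the larger of the two sums enters the metric
    return max(false_negatives, false_positives)
```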

I would rather expect a BoW metric to be defined as one of the following (see the sketch below):

  • an error rate (complement of accuracy): the sum of both deltas over the total count of tokens in GT and OCR combined;
  • a false negative rate (complement of recall): the sum of the positive deltas over the total count of tokens in the GT;
  • a false discovery rate (complement of precision): the sum of the negative deltas over the total count of tokens in the OCR.
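
A minimal sketch of these three variants (function and key names are illustrative, not taken from the code base):

```python
from collections import Counter

def bow_rates(gt_tokens, ocr_tokens):
    gt, ocr = Counter(gt_tokens), Counter(ocr_tokens)
    deltas = {w: gt[w] - ocr[w] for w in gt.keys() | ocr.keys()}
    fn = sum(d for d in deltas.values() if d > 0)   # GT tokens missing in the OCR
    fp = sum(-d for d in deltas.values() if d < 0)  # OCR tokens not in the GT
    return {
        "error_rate": (fn + fp) / (sum(gt.values()) + sum(ocr.values())),  # complement of accuracy
        "false_negative_rate": fn / sum(gt.values()),                      # complement of recall
        "false_discovery_rate": fp / sum(ocr.values()),                    # complement of precision
    }
```

For example, GT `the quick brown fox` vs. OCR `the quick browm fox fox` gives 3/9, 1/4 and 2/5, respectively.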