Dear hap.py development team,
Could you please tell me, how the calculation for METRIC.Precision is done for the hap.py statistics summary?
According to the docs, the following formula is used:
Precision = TP/(TP+FP)
So I tried to calculate the Precision as TRUTH.TP/(TRUTH.TP+QUERY.FP); however, I get different results compared to METRIC.Precision. Is this behaviour expected? And if yes, could you please provide me with the formula (or the correct column names) to reproduce METRIC.Precision?
See below the examples from your git repo. The behaviour is observed for
vcfeval...
... and happy
... but not for unhappy
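For what it's worth, one possible explanation is that TRUTH.TP and QUERY.TP are counted in different variant representations, so precision may be derived from query-side counts rather than TRUTH.TP. The sketch below is an assumption, not a confirmed hap.py formula: it contrasts the attempted calculation with a query-based one (QUERY.TP taken as QUERY.TOTAL - QUERY.FP - QUERY.UNK).

```python
def precision_from_query_counts(query_total, query_fp, query_unk):
    """Hypothetical query-side precision:
    QUERY.TP = QUERY.TOTAL - QUERY.FP - QUERY.UNK, then TP / (TP + FP).
    This is an assumption about hap.py's summary, not confirmed by the docs.
    """
    query_tp = query_total - query_fp - query_unk
    return query_tp / (query_tp + query_fp)


def precision_from_truth_tp(truth_tp, query_fp):
    """The formula attempted in the question: TRUTH.TP / (TRUTH.TP + QUERY.FP)."""
    return truth_tp / (truth_tp + query_fp)


# Toy, made-up counts to show the two formulas can disagree whenever
# TRUTH.TP != QUERY.TP (e.g. when truth and query represent the same
# variants differently):
print(precision_from_query_counts(query_total=100, query_fp=5, query_unk=3))
print(precision_from_truth_tp(truth_tp=95, query_fp=5))
```

If this assumption holds, recomputing METRIC.Precision from the summary would need the QUERY.* columns rather than TRUTH.TP.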
Thank you very much for your time and I'm looking forward to hearing from you soon.
Best regards
Barbara