Mistake in computing average precision #8
@kylinXu I followed this mAP computation to get the mAP score. The mAP computation shown in the post is very clear; I hope the figure in the post helps you understand it. |
Yeah, I saw your blog on mAP computation. It's right and clear. I meant that your implementation (computer_MAP.m) may have an error in computing the average precision for each retrieval. |
It's true, thank you for pointing out the potential risk. I'll check it out today. |
You are welcome. I think the small problem in your code is that you only considered the case where the average precision is computed over all the expected results, not the case where it is computed over only the top N results. So your implementation underestimates the score in the latter case. Just look at this MATLAB script for computing AP:
function ap = averagePrecision(actual, expected, N)
if nargin > 2
    deltaRecall = min(N, numel(expected));
    % evaluate top N
    actual = actual(1:min(N, numel(actual)));
else
    deltaRecall = numel(expected);
end
isRelevant = ismember(actual, expected);
% compute precision over results
precision = cumsum(isRelevant) .* isRelevant;
ap = sum(precision(:) ./ (1:numel(isRelevant))') / deltaRecall;
But anyway, I think the simplest way is just to count the non-zero items from the retrieval results, as I mentioned before. It seems to work in both of the above cases. Thanks.
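The AP-with-optional-top-N logic in the script above can be sketched in Python as follows (a minimal translation for illustration; the function and variable names are mine, not from the repository's code):

```python
def average_precision(actual, expected, n=None):
    """AP of a ranked result list `actual` against the relevant
    set `expected`, optionally truncated to the top n results."""
    if n is not None:
        # Normalize by the number of results that could be relevant in the top n.
        delta_recall = min(n, len(expected))
        actual = actual[:n]
    else:
        delta_recall = len(expected)
    expected_set = set(expected)
    hits = 0
    precision_sum = 0.0
    for rank, item in enumerate(actual, start=1):
        if item in expected_set:
            hits += 1
            precision_sum += hits / rank  # precision@rank at each relevant hit
    return precision_sum / delta_recall
```

For example, with relevant set `{a, b, c}` and ranked list `[a, x, b]`, the hits fall at ranks 1 and 3, so AP is `(1/1 + 2/3) / 3`.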
I just want to know how this performs when the database is very large, about 10 million pictures. I really hope to receive your answer. |
It depends on your task: instance retrieval or similarity retrieval. For similarity retrieval (category retrieval), it's really OK. For instance retrieval, you might be interested in cnn-cbir-benchmark. There are also some references that might be useful for you in awesome-cbir-papers. |
Thanks for your answer. By the way, how many pictures should I prepare for fine-tuning? I'm worried about that, because I can't label so many pictures. |
If you don't want to label so many pictures, you can choose a method based on local features. flickrdemo.videntifier.com is a demo based on SIFT features. The performance is really promising. |
Hi, I think there is a mistake in computing average precision as ap(i,1) = sum(precision)/queryClassNum. According to the formula, the denominator should be the number of non-zero items in the precision vector rather than the number of retrieval categories, right?
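To make the disagreement over the denominator concrete, here is a small numeric sketch (the values and variable names are hypothetical, not taken from computer_MAP.m) showing how the two choices diverge when only some of the relevant items are retrieved:

```python
# Per-rank precision vector as in the discussion:
# precision@k at each relevant hit, 0 elsewhere (hits at ranks 1 and 3).
precision = [1.0, 0.0, 2.0 / 3.0, 0.0, 0.0]

query_class_num = 5                            # total relevant items for the query
num_hits = sum(1 for p in precision if p > 0)  # non-zero entries: 2

ap_over_all_relevant = sum(precision) / query_class_num  # divides by 5
ap_over_hits_only = sum(precision) / num_hits            # divides by 2

print(ap_over_all_relevant, ap_over_hits_only)
```

Dividing by the total number of relevant items penalizes misses, while dividing by the number of non-zero (hit) entries only averages precision over the items that were actually retrieved, so the two give different scores whenever recall is below 1.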