
Mistake in computing average precision #8

Open
kylinXu opened this issue May 18, 2017 · 8 comments
@kylinXu

kylinXu commented May 18, 2017

Hi, I think there is a mistake in computing average precision as ap(i,1) = sum(precision)/queryClassNum. According to the formula, the denominator should be the number of non-zero items in the precision vector rather than the number of retrieval categories, right?

@willard-yuan
Owner

@kylinXu I followed this mAP computation to get the mAP score. The mAP computation shown in the post is very clear. I hope the figure in the post helps you understand the mAP computation.

@kylinXu
Author

kylinXu commented May 19, 2017

Yeah, I saw your blog post on mAP computation. It's right and clear. I meant that your implementation (computer_MAP.m) may have an error in computing the average precision for each retrieval, namely:
queryClassNum = double(classesAndNum{1, 2}(row1,1));
ap(i,1) = sum(precision)/queryClassNum;
The denominator in the formula above should be a variable depending on the retrieval results rather than a constant. So I slightly revised it as:
retrievalSamples = sum(precision~=0);
ap(i,1) = sum(precision)/retrievalSamples;
and it seems to return the right results for my problem. Thanks.
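The fix above can be sketched outside MATLAB as well. The snippet below is a minimal Python illustration of the same idea (the original computer_MAP.m is MATLAB; the function name and variable names here are illustrative, not from the repository): precision is recorded at each relevant rank, and the average divides by the number of non-zero precision entries rather than a fixed per-class count.

```python
def average_precision(relevance):
    """Average precision for one query.

    relevance: sequence of 0/1 flags over the ranked retrieval list,
    where 1 means the item at that rank is relevant to the query.
    """
    precisions = []  # precision at each rank; non-zero only at relevant ranks
    hits = 0
    for rank, rel in enumerate(relevance, start=1):
        if rel:
            hits += 1
            precisions.append(hits / rank)
        else:
            precisions.append(0.0)

    # The fix discussed above: divide by the number of non-zero entries
    # (relevant items actually retrieved), not a constant class size.
    retrieved_relevant = sum(1 for p in precisions if p != 0)
    if retrieved_relevant == 0:
        return 0.0
    return sum(precisions) / retrieved_relevant
```

For a ranked list with relevance flags [1, 0, 1, 1], the per-rank precisions at the relevant positions are 1/1, 2/3, and 3/4, so AP = (1 + 2/3 + 3/4) / 3.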

@willard-yuan
Owner

It's true, thank you for pointing out the potential risk. I'll check it today.

@kylinXu
Author

kylinXu commented May 20, 2017 via email

@yuyifan1991

I just want to know how the results hold up when the database is very large, about 10 million pictures. I really hope to receive your answer.

@willard-yuan
Owner

It depends on your task: instance retrieval or similar retrieval. For similar retrieval (category retrieval), it's really OK. For instance retrieval, you might be interested in cnn-cbir-benchmark. There are also some references in awesome-cbir-papers that might be useful for you.

@yuyifan1991

Thanks for your answer. By the way, how many pictures should I prepare for fine-tuning? I'm worried about that, because I can't label so many pictures.

@willard-yuan
Owner

If you don't want to label so many pictures, you can choose a method based on local features. flickrdemo.videntifier.com is a demo based on SIFT features. The performance is really promising.
