

yolov3 tiny - evaluation on pretrained weights gives lower accuracy than expected #391

Open
chenhayat opened this issue Jan 26, 2022 · 1 comment


chenhayat commented Jan 26, 2022

According to the https://pjreddie.com/darknet/yolo/ website, the accuracy of the pre-trained weights on the COCO dataset should be 0.331 mAP:
Model | Train | Test | mAP | FLOPS | FPS | Cfg | Weights
YOLOv3-tiny | COCO trainval | test-dev | 33.1 | 5.56 Bn | 220 | cfg | weights

However, when I evaluate it on COCO 2017 with 2000 images I get 0.161 mAP.
Update: it seems the pre-trained weights were generated with a different anchor mask than the one in this repo.
After changing the mask from:
yolo_tiny_anchor_masks = np.array([[3, 4, 5], [0, 1, 2]])
to:
yolo_tiny_anchor_masks = np.array([[3, 4, 5], [1, 2, 3]])
I get a better accuracy for the baseline: 0.252.

But the accuracy is still lower than the expected 0.331 mAP.
What baseline accuracy did you get for the tiny model?
Do you have any suggestions on what needs to be changed in order to reach it?
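For reference, here is a small sketch of the two mask configurations being compared. The anchor pixel values are the standard darknet yolov3-tiny anchors; the variable names follow the snippet above, and the loop is just an illustration I added to show which anchors each detection head selects under each mask:

```python
import numpy as np

# Six anchors, ordered smallest to largest (width, height in pixels) --
# the standard darknet yolov3-tiny anchors.
yolo_tiny_anchors = np.array(
    [(10, 14), (23, 27), (37, 58), (81, 82), (135, 169), (344, 319)],
    np.float32,
)

# Mask as it appears in the repo: the coarse 13x13 head uses anchors 3-5,
# the fine 26x26 head uses anchors 0-2.
repo_masks = np.array([[3, 4, 5], [0, 1, 2]])

# Mask that better matches the pre-trained weights: the fine head uses
# anchors 1-3 instead, sharing anchor 3 with the coarse head.
fixed_masks = np.array([[3, 4, 5], [1, 2, 3]])

for name, masks in [("repo", repo_masks), ("fixed", fixed_masks)]:
    for head, mask in enumerate(masks):
        print(name, "head", head, "anchors:", yolo_tiny_anchors[mask].tolist())
```

Note that `[1, 2, 3]` matches the `mask = 1,2,3` line in the official `yolov3-tiny.cfg`, which would explain why the pre-trained weights behave better with it.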

@lilian-zh

Hello, would you mind advising how to evaluate mAP, since there is no related script in the repo? Thank you.
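There is no official script in this repo, but here is a minimal sketch (my own, not from the repo) of how AP is computed for one class at an IoU threshold of 0.5, in plain numpy. For numbers comparable to the darknet website you would run the full COCO protocol with pycocotools' COCOeval (mAP averaged over IoU 0.50:0.95), but this shows the core matching logic:

```python
import numpy as np

def iou(box, boxes):
    """IoU of one [x1, y1, x2, y2] box against an (N, 4) array of boxes."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def average_precision(dets, gts, iou_thr=0.5):
    """dets: list of (score, box); gts: (M, 4) ground-truth boxes."""
    dets = sorted(dets, key=lambda d: -d[0])  # highest confidence first
    matched = np.zeros(len(gts), bool)
    tp = np.zeros(len(dets))
    fp = np.zeros(len(dets))
    for i, (_, box) in enumerate(dets):
        if len(gts):
            overlaps = iou(np.asarray(box, float), gts)
            j = int(np.argmax(overlaps))
            # A detection is a true positive only if it overlaps an
            # unmatched ground truth above the threshold.
            if overlaps[j] >= iou_thr and not matched[j]:
                tp[i], matched[j] = 1, True
                continue
        fp[i] = 1
    recall = np.cumsum(tp) / max(len(gts), 1)
    precision = np.cumsum(tp) / np.maximum(np.cumsum(tp) + np.cumsum(fp), 1e-9)
    # All-point interpolation: area under the precision-recall curve.
    r = np.concatenate(([0.0], recall, [1.0]))
    p = np.concatenate(([0.0], precision, [0.0]))
    p = np.maximum.accumulate(p[::-1])[::-1]
    idx = np.where(r[1:] != r[:-1])[0]
    return float(np.sum((r[idx + 1] - r[idx]) * p[idx + 1]))

# Toy example: two ground truths, one correct detection, one false positive.
gts = np.array([[0, 0, 10, 10], [20, 20, 30, 30]], float)
dets = [(0.9, [0, 0, 10, 10]), (0.8, [50, 50, 60, 60])]
print(average_precision(dets, gts))  # 0.5: one of two GTs found at full precision
```

mAP is then the mean of this AP over all classes (and, for the COCO metric, over the IoU thresholds 0.50 to 0.95 in steps of 0.05).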
