How do I evaluate accuracy of the test set? #613
Comments
https://github.com/thtrieu/darkflow/blob/master/darkflow/utils/box.py has the basic implementation needed for computing the overlap (IoU) between two boxes. You can modify it as needed and compute the mean average precision, using the product of the class probability and box_iou as the detection score.
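For reference, the overlap computation mentioned above works roughly like this. This is a hedged sketch of IoU for boxes in (center_x, center_y, width, height) format, in the spirit of darkflow's `box.py`; the function names here are illustrative, not darkflow's exact API:

```python
def box_intersection(ax, ay, aw, ah, bx, by, bw, bh):
    """Overlap area of two boxes given as (center_x, center_y, w, h)."""
    w = min(ax + aw / 2, bx + bw / 2) - max(ax - aw / 2, bx - bw / 2)
    h = min(ay + ah / 2, by + bh / 2) - max(ay - ah / 2, by - bh / 2)
    if w <= 0 or h <= 0:
        return 0.0  # boxes do not overlap
    return w * h

def box_iou(a, b):
    """Intersection over union of two (cx, cy, w, h) boxes."""
    inter = box_intersection(*a, *b)
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0
```

A detection then counts as a true positive when its IoU with a ground-truth box exceeds some threshold (commonly 0.5).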
Has anyone done this before?
I would also be interested in this feature.
Hello @off99555, I am also looking for a way to calculate mAP on my own test data. Thanks for your help.
@andikira I've not done it yet. I'm not sure either.
Hello @off99555, take a look at this repo: https://github.com/Cartucho/mAP. In its extra folder you can find scripts to convert .xml and .json files; that repo works perfectly with darkflow.
@andikira @thtrieu I trained the yolo2-voc network on the PASCAL VOC 0712 trainval set and tested on the PASCAL VOC 2007 test set, and the mAP is only around 52% (after quite a lot of epochs (~200), considering that I initialized it with pretrained weights). This is well below the official performance (76.8% mAP). I want to include experimental results using YOLO v2 in my paper. Does anyone know the reason?
I have seen that issue before; please check the other issues, @bareblackfoot.
@andikira The
Can you
Could you please explain more specifically? How does it work? How did you get the results?
@srhtyldz Go to https://github.com/Cartucho/mAP and see the quickstart section.
Also, you need to convert your files from darkflow's format to theirs. They provide conversion scripts here: https://github.com/Cartucho/mAP/tree/master/extra
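If you prefer to do the conversion yourself, the idea is simple: darkflow's `--json` output is a list of detections with `label`, `confidence`, `topleft`, and `bottomright` fields, while the Cartucho/mAP repo expects one text line per detection in the form `<class> <confidence> <left> <top> <right> <bottom>`. A minimal sketch (the function name is my own, not from either repo):

```python
import json

def darkflow_json_to_map_txt(json_path, txt_path):
    """Convert one darkflow --json prediction file to the
    Cartucho/mAP detection-results format:
    <class> <confidence> <left> <top> <right> <bottom> per line."""
    with open(json_path) as f:
        detections = json.load(f)
    lines = []
    for det in detections:
        lines.append("{} {} {} {} {} {}".format(
            det["label"].replace(" ", "_"),  # class names must not contain spaces
            det["confidence"],
            det["topleft"]["x"], det["topleft"]["y"],
            det["bottomright"]["x"], det["bottomright"]["y"]))
    with open(txt_path, "w") as f:
        f.write("\n".join(lines))
```

You would run this once per image, writing one .txt file per prediction .json file, then point the mAP repo at the resulting folder.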
I found out that darknet has a map option. Did you try that? Is there a difference between them, or is one more accurate?
@srhtyldz I think that the algorithm is the same. Except that
First I use
And most importantly, the owner of
Which variable do you want to specify?
My opinion also favors darknet. I think you can only vary the threshold when calculating mAP, right?
@srhtyldz Maybe it allows you to specify the output path now, but you have to check. I am not sure.
If I remember correctly, that is. I'm not sure what I'm talking about, because I have already forgotten most of it. You should study it yourself to make sure I'm right, but I'm pretty sure about the number. PS: I went back to find my thesis slide (from when I knew what mAP means); here is how I described it back then:
Thanks a lot. I'll research mAP and IoU. I got the results but I couldn't interpret them. Can I ask one more question? Did you use the 'recall' function? Do map and recall have the same functionality or not? Do you have any ideas?
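On how recall relates to mAP: they are not the same thing. Recall is a single ratio (detected ground truths over all ground truths), while average precision summarizes the whole precision-recall curve as detections are ranked by confidence; mAP is the mean of AP over classes. A minimal sketch of the PASCAL-VOC-style "all points" AP computation (illustrative, not darknet's actual code):

```python
def average_precision(recalls, precisions):
    """Area under the precision-recall curve, using the
    all-points interpolation (PASCAL VOC 2010+ style).
    recalls/precisions are values at successive detection ranks."""
    # Pad the curve so it starts at recall 0 and ends at recall 1.
    r = [0.0] + list(recalls) + [1.0]
    p = [0.0] + list(precisions) + [0.0]
    # Make precision monotonically decreasing from right to left.
    for i in range(len(p) - 2, -1, -1):
        p[i] = max(p[i], p[i + 1])
    # Sum rectangle areas where recall increases.
    return sum((r[i + 1] - r[i]) * p[i + 1] for i in range(len(r) - 1))
```

For example, a detector with perfect precision at every recall level gets AP = 1.0, so recall values feed into AP but a high recall alone does not imply a high AP.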
@srhtyldz
I got 90.4% mAP. I think it performs well.
@srhtyldz That's very high if you have many classes (popular models on big datasets only reach around 50-60%). But even if you have only two classes, it's still good.
I have only one class. Also, I saved all the images with bounding boxes.
From
The problem is that I don't know how the animation is run. I tried to do that while training, but I couldn't get the animation for mAP. Maybe @AlexeyAB can help with this.
I got the mAP of my test results. Out of 56 test images, 48 were correct. I have four classes. How do I make a confusion matrix from that? Can someone help me?
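One straightforward approach, once each detection has been matched to a ground-truth box (by IoU, as above): collect (true_class, predicted_class) pairs and count them into a table. This is a hedged sketch with an illustrative function name, not a feature of darkflow or darknet:

```python
def confusion_matrix(pairs, classes):
    """Count a confusion matrix from (true_class, predicted_class) pairs.
    Detections are assumed already matched to ground truth by IoU;
    rows are true classes, columns are predicted classes."""
    index = {c: i for i, c in enumerate(classes)}
    matrix = [[0] * len(classes) for _ in classes]
    for true_c, pred_c in pairs:
        matrix[index[true_c]][index[pred_c]] += 1
    return matrix
```

Unmatched ground truths (missed detections) and unmatched detections (false positives) would need an extra "background" row/column if you want them in the table; `sklearn.metrics.confusion_matrix` can also do the counting once you have the paired labels.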
@srhtyldz I used `./darknet detector map cfg/aerial.data cfg/test_tiny.cfg backup\my_yolov3_tiny_final.weights` but I could not get any score or graph at the end of training. How did you get the mAP score in darknet?
@mustafabuyuk Pass the -map argument to ./darknet (darknet.exe). It will show up on the graph at around 1000 iterations. I'm not sure if it's needed, but I also have a validation set of images.
Does anyone have an idea? YOLOv5 calculates mAP against the validation dataset during training, but with which command can we calculate mAP (and class-wise mAP) against an unseen test dataset?
@off99555 The thread seems closed. Did you get the answer? How do I use the -map argument on validation images to get the mAP score? Would it be possible to derive the confusion matrix from this?
@sandeeprepakula Read my answer directly before I closed the issue. That's how you can calculate mAP. |
@off99555 Thanks for the suggestion |
One way is to calculate "Mean average precision: mAP".
But I'm not aware of this feature implemented in darkflow.
Do you have any suggestions on which darknet repo or person has written a script to do this?