About AP #124
Comments
Hi! We have compared with the original model from the paper "Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields". As you can see in table 4 of section 3.2, the AP is 58.4%. It increases to 61% if an additional refinement is done for each found person with a separate single-person pose estimation model (CPM). Those 58.4% were obtained in multi-scale testing mode (6 scales). 48.6% AP is obtained using a single scale for the input data during testing. |
Thank you for your reply! What are the 6 scales? Does it mean one initial stage and five refinement stages?
|
Network inference was performed 4 times (not 6, that was my mistake), each time with a different input image resolution (a different scale). Then all network outputs were averaged. You can check the validation script for the details; it supports a multi-scale option. |
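The averaging described above can be sketched roughly as follows. This is a minimal, dependency-free NumPy sketch of the idea only, not the repository's actual `val.py` code: `infer` is a hypothetical stand-in for the network, and real code would use `cv2.resize` or `F.interpolate` instead of the nearest-neighbor index mapping used here.

```python
import numpy as np

def multiscale_average(img, infer, scales=(0.5, 1.0, 1.5, 2.0)):
    """Run `infer` at several input scales and average the outputs.

    `infer` is a stand-in for the network: it takes an HxW array and
    returns an output of the same spatial size (hypothetical signature).
    Each scaled output is resized back to the base resolution before
    averaging.
    """
    base_h, base_w = img.shape[:2]
    acc = np.zeros((base_h, base_w), dtype=np.float64)
    for s in scales:
        # Nearest-neighbor resize via index mapping (keeps the sketch
        # dependency-free; real code would use a proper resize).
        h, w = max(1, int(base_h * s)), max(1, int(base_w * s))
        ys = np.arange(h) * base_h // h
        xs = np.arange(w) * base_w // w
        scaled = img[np.ix_(ys, xs)]
        out = infer(scaled)
        # Map the output back to the base resolution the same way.
        ys_b = np.arange(base_h) * h // base_h
        xs_b = np.arange(base_w) * w // base_w
        acc += out[np.ix_(ys_b, xs_b)]
    return acc / len(scales)
```

The extra forward passes are exactly where the accuracy gain comes from, and also why single-scale inference is so much faster.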
Thank you! Why were multiple scales not used then, since this method achieves higher AP?
|
Also, I wonder whether the loss function is different from the original OpenPose's?
|
Using single or multiple scales for inference is a speed/accuracy trade-off. The loss function is the same. |
Thanks. I'd like to know how the loss is calculated after the heatmap and PAF stages are combined, because the original OpenPose calculates it over two separate branches. |
It is just the sum of all losses for heatmaps and PAFs. You may check the training script for more details. |
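In other words, the total training loss is the per-branch L2 loss summed across every stage (initial stage plus each refinement stage). A minimal sketch of that idea, with illustrative names only; the actual implementation lives in `train.py`:

```python
import numpy as np

def total_loss(stage_outputs, heatmap_gt, paf_gt):
    """Sum L2 losses over all stages for both branches.

    `stage_outputs` is a list of (heatmap_pred, paf_pred) pairs, one
    pair per stage. Names and shapes are illustrative, not the repo's
    actual API.
    """
    loss = 0.0
    for hm_pred, paf_pred in stage_outputs:
        loss += np.mean((hm_pred - heatmap_gt) ** 2)  # heatmap branch loss
        loss += np.mean((paf_pred - paf_gt) ** 2)     # PAF branch loss
    return loss
```

Because every stage contributes its own term, intermediate stages receive direct supervision rather than only the final stage being trained.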
Thank you.
|
You wrote in the paper: "The accuracy of the optimized version nearly matches the baseline: Average Precision (AP) drop is |
We compared against the baseline with 1 refinement stage (see table 2: AP after refinement stage 1 is 43.4%). Our final model has 42.8% AP (see table 5 in the paper). AP was measured with pycocotools. |
Hi, thanks for your work! I have a question: why is the accuracy 61.8 in the original OpenPose paper, but 48.6 in your analysis of the original OpenPose?