Large AFLW2000 loss when Hopenet is trained locally, and how to reproduce the models #112

Open
simin75simin opened this issue Dec 8, 2021 · 0 comments

Comments

@simin75simin

I get around 30 yaw loss and around 15 pitch and roll loss on AFLW2000 when I train Hopenet locally with the provided code.
I re-implemented this in TensorFlow with essentially the same method, using different backbones such as ResNet50 and my own DCNN, trained them on 300W_LP, and got similar test loss on AFLW2000.
But I have seen people mention that the pretrained models reach around 10 yaw loss and around 5 pitch and roll loss, which is quite a difference. When I set up a camera with the provided pretrained model it works pretty well, but my own model does not.
So my question is basically: how do I reproduce those results? I need to make the network smaller for my work.
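
For reference, by "loss" here I mean the per-angle mean absolute error in degrees on AFLW2000. A minimal sketch of how I compute it, assuming the usual protocol of dropping samples whose ground-truth angles fall outside [-99, 99] degrees (function and array names are just placeholders):

```python
import numpy as np

def mae_per_angle(pred_deg: np.ndarray, gt_deg: np.ndarray):
    """pred_deg, gt_deg: (N, 3) arrays of [yaw, pitch, roll] in degrees."""
    # Assumption: keep only faces whose ground-truth angles are all within
    # [-99, 99] degrees, as is commonly done when evaluating on AFLW2000.
    keep = np.all(np.abs(gt_deg) <= 99.0, axis=1)
    err = np.abs(pred_deg[keep] - gt_deg[keep])
    yaw_mae, pitch_mae, roll_mae = err.mean(axis=0)
    return yaw_mae, pitch_mae, roll_mae
```

With this metric my locally trained models land around 30 / 15 / 15 (yaw / pitch / roll), versus the roughly 10 / 5 / 5 that others report for the pretrained weights.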

Thanks in advance.
