
Evaluation on HumanAct12 unclear #167

Open

anthony-mendil opened this issue Nov 6, 2023 · 3 comments
@anthony-mendil

First of all, thanks for your great work.
In Table 3 of the paper you report evaluation metrics on the HumanAct12 dataset.
I would like to compare my model's results with yours, but that requires knowing some details that are not mentioned in the paper:

  • Which weights for the different loss components produced your results?
  • Which pose representation are the results based on, and was translation included?

Kind regards, Anthony Mendil.

@sigal-raab
Collaborator

Thank you @anthony-mendil for your interest.
For the first variation, I used --lambda_rcxyz 0 --lambda_vel 0 --lambda_fc 1.
For the second variation, I used --lambda_rcxyz 1 --lambda_vel 1 --lambda_fc 0.
For your convenience, when you download the pre-trained models, you can find the values of all the arguments in the file args.json.
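
For anyone checking this later, here is a minimal sketch for inspecting those weights in a downloaded checkpoint. The directory `./save/humanact12` is a placeholder for wherever the pre-trained model was unpacked, and the keys are assumed to mirror the CLI flags named above:

```sh
# Print the loss-weight entries from the checkpoint's args.json.
# Replace ./save/humanact12 with the folder of the pre-trained model you downloaded.
python -c "import json; a = json.load(open('./save/humanact12/args.json')); print({k: a.get(k) for k in ('lambda_rcxyz', 'lambda_vel', 'lambda_fc')})"
```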

@anthony-mendil
Author

anthony-mendil commented Nov 14, 2023

Thanks for the info!
Is there a reason why you do not use all three together?

@sigal-raab
Collaborator

You should use all three together.
While the geometric losses enhance the quality of the results, they marginally compromise the quantitative metrics. For that reason, the tables do not enable all of them simultaneously, in pursuit of better metric scores.
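
On the command line, enabling all three would look roughly like the sketch below. The entry point and the remaining flags are assumptions based on the repository's usual training invocation rather than something stated in this thread; only the lambda flags are confirmed above, so please verify the rest against the README before running:

```sh
# Assumed training command with all three geometric losses enabled on HumanAct12.
# train.train_mdm, --save_dir, and --dataset are assumptions; only the lambda flags are confirmed above.
python -m train.train_mdm --save_dir save/my_humanact12 --dataset humanact12 \
    --lambda_rcxyz 1 --lambda_vel 1 --lambda_fc 1
```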
