Only one annotation per test image and different evaluation for GazeFollow and VideoAttention dataset #12

Open
Frandre opened this issue Nov 13, 2021 · 0 comments


Frandre commented Nov 13, 2021

Dear authors,

Thanks for sharing your code and data.

I found that:

  1. Although the paper states that two annotations are available for each test image, the released annotations appear to contain only one annotation per image. May I ask where we can download the full annotations for your test set?
  2. In your released code, you compute AUC differently for the GazeFollow and VideoAttention datasets. On GazeFollow, you build the multi-hot vector from the original annotations (10 points per image). On your own dataset, you place a Gaussian on the single available annotation, set all values greater than 0 to 1, and use that binary map as the multi-hot vector. In the paper, however, AUC is defined only once. Could you please confirm whether two different versions of AUC were used in the paper?
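
To make the difference in point 2 concrete, here is a minimal sketch of the two ground-truth constructions as I understand them from the code. All sizes, point coordinates, and the Gaussian sigma below are hypothetical placeholders, not the repository's actual values:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.metrics import roc_auc_score

H, W = 64, 64  # hypothetical heatmap resolution

# A hypothetical predicted gaze heatmap
rng = np.random.default_rng(0)
pred = rng.random((H, W))

# GazeFollow-style ground truth: multi-hot map built directly from the
# (up to 10) annotated gaze points
points = [(10, 12), (11, 13), (30, 40)]  # hypothetical (row, col) annotations
gt_multihot = np.zeros((H, W))
for r, c in points:
    gt_multihot[r, c] = 1.0

# VideoAttention-style ground truth: place a Gaussian on the single
# annotation, then binarize everything greater than 0 to 1
gt_gaussian = np.zeros((H, W))
gt_gaussian[20, 20] = 1.0  # hypothetical single annotation
gt_gaussian = gaussian_filter(gt_gaussian, sigma=3)
gt_binary = (gt_gaussian > 0).astype(float)

# The AUC formula is the same, but the positive sets differ greatly in size:
# a handful of pixels vs. the whole Gaussian support
auc_multihot = roc_auc_score(gt_multihot.ravel(), pred.ravel())
auc_binary = roc_auc_score(gt_binary.ravel(), pred.ravel())
print(auc_multihot, auc_binary)
```

Because the binarized Gaussian marks far more pixels as positive than the raw annotation points do, the two variants are not directly comparable, which is what motivates the question.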

Cheers,
Yu
