
why use a alpha in NCALoss ? #14

Open
Andrewymd opened this issue Nov 18, 2018 · 2 comments

Comments

@Andrewymd

Hi, I have a question about alpha, which is a hyper-parameter:

base = torch.mean(dist_mat[i]).data[0]

# Compute the logits; base keeps the exponentials within floating-point range
pos_logit = torch.sum(torch.exp(self.alpha*(base - pos_neig)))
neg_logit = torch.sum(torch.exp(self.alpha*(base - neg_neig)))
loss_ = -torch.log(pos_logit/(pos_logit + neg_logit))

In your implementation, you first use K-nearest neighbors to select the negative samples and then compute the mean of the distances.
Is alpha multiplied in because the resulting values are too small?
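To see what the `base` shift in the snippet above does, here is a minimal plain-Python sketch (toy distances and alpha chosen by me for illustration). Subtracting the mean distance centers the exponents near zero; the constant factor `exp(alpha * base)` cancels in the final ratio, so the loss value itself is unchanged:

```python
import math

alpha = 40.0
dists = [0.1, 0.5, 2.0]            # toy pairwise distances (hypothetical)
base = sum(dists) / len(dists)     # mean distance, playing the role of `base`

# Without the shift, exponents like -alpha * d can be very large in magnitude:
raw = [math.exp(-alpha * d) for d in dists]

# With the shift, every exponent is centered around zero:
shifted = [math.exp(alpha * (base - d)) for d in dists]

# The common factor exp(alpha * base) cancels in the ratio
# pos_logit / (pos_logit + neg_logit), so both forms give the same result:
ratio_raw = raw[0] / sum(raw)
ratio_shifted = shifted[0] / sum(shifted)
```

Both ratios agree up to floating-point noise; only the shifted form stays safely inside the representable range as alpha and the distances grow.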

@bnu-wangxun
Owner

bnu-wangxun commented Nov 20, 2018

It is a very important hyper-parameter; alpha lies in [20, 60]. The purpose of this hyper-parameter is to make the loss focus on harder negative samples.

Small values by themselves are not a problem.

@Andrewymd
Author

Thanks
