
Is dot product the right way to predict? #157

Open
JoaoLages opened this issue Apr 24, 2019 · 2 comments
Comments

@JoaoLages

While training implicit sequence models, we use losses like hinge, BPR, and pointwise. These losses don't directly maximize the dot product, so why do we use it when predicting?

@maciejkula
Owner

These losses maximize the difference between the dot products of the positive and (implicit) negative items, and so using the dot product for prediction is appropriate.
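To make this concrete, here is a minimal NumPy sketch of a BPR-style loss (not Spotlight's actual implementation; the function name and shapes are illustrative). Both scores inside the loss are plain dot products, so a model that minimizes this loss is exactly a model whose dot products rank positive items above negative ones:

```python
import numpy as np

def bpr_loss(user_emb, pos_emb, neg_emb):
    """BPR-style loss: -log sigmoid of the score difference.

    Both scores are dot products, so minimizing this loss pushes the
    positive item's dot product above the negative item's dot product.
    """
    pos_score = user_emb @ pos_emb
    neg_score = user_emb @ neg_emb
    return -np.log(1.0 / (1.0 + np.exp(-(pos_score - neg_score))))

rng = np.random.default_rng(0)
user = rng.normal(size=8)
pos = rng.normal(size=8)
neg = rng.normal(size=8)

# The larger the (pos - neg) score gap, the smaller the loss, so the
# items the loss pushes up are the items the dot product ranks highest.
print(bpr_loss(user, pos, neg))
```

The hinge and pointwise variants differ only in how they penalize the two dot-product scores; the scoring function itself is the same, which is why ranking by dot product at prediction time is consistent with all of them.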

@nilansaha

@JoaoLages A bit late to the party, but what we are really optimizing here are the user and item embeddings. The dot product is merely the operation that combines the two embeddings into a single score. Backprop flows through the dot product and adjusts the embeddings so that the score is maximized for positive items and minimized for negative items. Correct me if I am wrong @maciejkula
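The point about backprop flowing through the dot product can be shown with a toy gradient step (a hand-rolled sketch, not library code): the gradient of the score u·v with respect to u is just v, so a step that widens the positive/negative score gap moves the user embedding toward the positive item.

```python
import numpy as np

def sgd_step(user, pos_item, neg_item, lr=0.1):
    """One gradient-ascent step on (u·pos - u·neg).

    d/du (u·pos - u·neg) = pos - neg, so the update moves the user
    embedding toward the positive item and away from the negative one.
    """
    return user + lr * (pos_item - neg_item)

user = np.zeros(4)
pos_item = np.array([1.0, 0.0, 0.0, 0.0])
neg_item = np.array([0.0, 1.0, 0.0, 0.0])

for _ in range(10):
    user = sgd_step(user, pos_item, neg_item)

# After training, the dot product ranks the positive item higher.
print(user @ pos_item > user @ neg_item)  # True
```

In other words, the embeddings end up arranged so that the dot product carries the ranking signal, which is why it is the natural prediction-time operation.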
