
Why do Skip-gram models need 2 embedding layers? #7

Open
yu45020 opened this issue Apr 10, 2018 · 0 comments

Comments


yu45020 commented Apr 10, 2018

Hi SungDong, thanks for the great posts. I am reading the first two skip-gram models. Why do you use two embeddings instead of one? After I train the model, every row of the second embedding, embedding_u, ends up with the same weights. Based on the formula for this model, I think there should be only one embedding for all word vectors. Am I missing some detail?


Is the second matrix there for efficiency? I guess the second embedding could be replaced by a linear layer of the transposed size, but since the prediction target is a one-hot vector, multiplying out all those zeros would be wasteful; a matrix lookup is far more efficient.
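To make that concrete, here is a tiny self-contained check (toy sizes I made up, nothing from your notebook): multiplying a one-hot vector by the embedding weight matrix returns the same row as the index lookup; the lookup just skips all the zero multiplications.

```python
import torch
import torch.nn as nn

vocab_size, projection_dim = 10, 4
emb = nn.Embedding(vocab_size, projection_dim)

idx = torch.tensor([3])            # pick word index 3
one_hot = torch.zeros(1, vocab_size)
one_hot[0, 3] = 1.0

by_lookup = emb(idx)               # index lookup    -> (1, projection_dim)
by_matmul = one_hot @ emb.weight   # one-hot matmul  -> (1, projection_dim)
print(torch.allclose(by_lookup, by_matmul))  # True
```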

import torch.nn as nn

class Skipgram(nn.Module):

    def __init__(self, vocab_size, projection_dim):
        super(Skipgram, self).__init__()
        self.embedding_v = nn.Embedding(vocab_size, projection_dim)  # center-word vectors
        self.embedding_u = nn.Embedding(vocab_size, projection_dim)  # context-word vectors

        self.embedding_v.weight.data.uniform_(-1, 1)  # random init
        self.embedding_u.weight.data.uniform_(0, 0)   # init to all zeros
        # self.out = nn.Linear(projection_dim, vocab_size)
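For what it's worth, this is my own rough sketch of how I read the full-softmax loss that uses both matrices (my code, not a quote from the notebook; I'm assuming centers and targets come in as (batch, 1) index tensors and the whole vocabulary as a (batch, vocab_size) index tensor):

```python
import torch

def skipgram_loss_sketch(model, center_words, target_words, all_vocab):
    # center_words, target_words: (batch, 1) index tensors
    # all_vocab: (batch, vocab_size) index tensor covering every word
    v_c = model.embedding_v(center_words)   # center vectors     (B, 1, D)
    u_o = model.embedding_u(target_words)   # context vectors    (B, 1, D)
    u_all = model.embedding_u(all_vocab)    # all output vectors (B, V, D)

    positive = u_o.bmm(v_c.transpose(1, 2)).squeeze(2)   # u_o . v_c         (B, 1)
    scores = u_all.bmm(v_c.transpose(1, 2)).squeeze(2)   # u_w . v_c, all w  (B, V)

    # negative log-likelihood of the softmax over the full vocabulary
    return -(positive - torch.logsumexp(scores, dim=1, keepdim=True)).mean()
```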