AttributeError: Can't gather from tf tensor. #119

Open

nestyme opened this issue Aug 2, 2020 · 3 comments
nestyme commented Aug 2, 2020

Hi! Thanks for implementing this necessary conversion 👍
I am stuck with the error `AttributeError: Can't gather from tf tensor.` when trying to export my PyTorch model to Keras.

```python
k_model = pytorch_to_keras(model, input_var, (1, None),
                           verbose=True, name_policy='short',
                           change_ordering=True)
```
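For context, `input_var` is the example input the converter traces the model with; for this model it is a batch of integer token IDs. A minimal sketch of how it might be built (the vocabulary size and placeholder values below are assumptions, not details from this report):

```python
import numpy as np
import torch

# Hypothetical example input: integer token IDs of shape (batch, max_length).
# The vocabulary size (1000) and max_length (40) are placeholders.
input_np = np.random.randint(0, 1000, size=(1, 40))
input_var = torch.LongTensor(input_np)  # passed to pytorch_to_keras above
```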

PyTorch model:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SentenceEmbeddingsModel(nn.Module):
    def __init__(self,
                 vocab_size,embedding_dim,
                 max_length=40,
                 word_vectors=None,
                 device=device,
                 C=0.001,d_a=10,r_a=4,
                 hidden_size=100):
        super(SentenceEmbeddingsModel, self).__init__()

        self.embeddings = nn.Embedding(vocab_size, embedding_dim)

        self.d_a = d_a
        self.C = C
        self.r_a = r_a
        self.rnn_hidden_size = hidden_size

        w = torch.FloatTensor(word_vectors)
        self.embeddings = self.embeddings.from_pretrained(w)
        self.embeddings.weight.requires_grad = False
        ws_d = embedding_dim

        self.ws1 = nn.Parameter(torch.FloatTensor(1, self.d_a, ws_d))
        nn.init.xavier_uniform_(self.ws1)
        self.ws1.requires_grad = True

        self.ws2 = nn.Parameter(torch.FloatTensor(1, self.r_a, self.d_a))
        nn.init.xavier_uniform_(self.ws2)
        self.ws2.requires_grad = True

        self.dropout1 = nn.Dropout(0.1)

        self.device = device

        self.dense = nn.Sequential(
            nn.Linear(ws_d, 20, bias=True),
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.Linear(20, num_classes, bias=True),
        )

        self.linear = nn.Linear(ws_d * self.r_a, ws_d)

    def forward(self, inputs):
        e = self.embeddings(inputs)
        mask = (inputs != 0)[:, :, None].float().to(self.device)
        masked = e.mul(mask)
        r = self.dropout1(masked)
        z = r

        a1 = torch.tanh(self.ws1.matmul(z.transpose(dim0=1, dim1=2)))

        attention = F.softmax(self.ws2.matmul(a1), dim=2)  # n_batch - r_a - max_length
        m = attention.matmul(z)  # n_batch - r_a - ws_d

        # here we get r_a * ws_d embedding matrix per sentence
        flatten = m.view(z.shape[0], -1, 1)[:, :, 0]

        m = self.linear(flatten)

        out = torch.sigmoid(self.dense(m))
        return out
```
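A note on where this might come from: the `nn.Embedding` lookup is exported as an ONNX `Gather` node, and the error message suggests the onnx2keras converter refuses to gather when the thing being indexed is a traced tensor rather than a constant. One workaround sometimes used for embedding-heavy models is to convert only the part of the network after the embedding lookup and do the lookup outside the exported graph. Below is a minimal, untested sketch of that idea; the `SentenceEmbeddingsBody` wrapper and every name in it are illustrative, not part of the original code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SentenceEmbeddingsBody(nn.Module):
    """Hypothetical wrapper: runs the original forward pass starting from
    already-embedded (and masked) float inputs, so no embedding Gather op
    is traced during export."""

    def __init__(self, full_model):
        super().__init__()
        self.m = full_model

    def forward(self, embedded):
        # embedded: (batch, max_length, embedding_dim), padding already masked
        z = self.m.dropout1(embedded)
        a1 = torch.tanh(self.m.ws1.matmul(z.transpose(1, 2)))
        attention = F.softmax(self.m.ws2.matmul(a1), dim=2)
        m = attention.matmul(z)
        flatten = m.view(z.shape[0], -1, 1)[:, :, 0]
        return torch.sigmoid(self.m.dense(self.m.linear(flatten)))


# Usage sketch: look up embeddings (and apply the padding mask) outside the
# converted graph, then feed the float vectors to the converted model.
# body = SentenceEmbeddingsBody(model).eval()
# dummy = torch.randn(1, 40, embedding_dim)
# k_body = pytorch_to_keras(body, dummy, (40, embedding_dim),
#                           verbose=True, name_policy='short',
#                           change_ordering=True)
```

Whether the remaining ops (the batched matmuls and the `view`) convert cleanly is a separate question, but this at least removes the embedding Gather from the traced graph.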

Is anyone able to help me?
Thanks
@sampathpagolu

any update on this error?

@mAkeddar

Hello,

Is there any update on this issue?

ovshake commented Aug 9, 2023

facing this issue, any updates?
