
The 15-transformer notebook has multiple issues with recent PyTorch versions #848

Open
Carlsans opened this issue Dec 15, 2023 · 1 comment

@Carlsans

The legacy version of torchtext is no longer supported. I had to install an old one.
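In case it helps someone else, pinning an older release should keep the legacy API available. This is just a sketch based on my assumption that the 0.11.x series is the last torchtext line that still ships torchtext.legacy and that it pairs with torch 1.10.x; the exact pins may differ in your environment:

# Assumption: torchtext 0.11.x still provides torchtext.legacy and is built against torch 1.10.x.
# Installed with, e.g.:  pip install torch==1.10.2 torchtext==0.11.2
import torchtext
from torchtext.legacy import data, datasets  # raises ImportError on torchtext >= 0.12, where legacy was removed
print(torchtext.__version__)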
When trying to run this cell:
model = TransformerClassifier(num_layers=1, d_model=32, num_heads=2,
                              conv_hidden_dim=128, input_vocab_size=50002, num_answers=2)
model.to(device)


I'm getting this error:

RuntimeError Traceback (most recent call last)
Cell In[18], line 1
----> 1 model = TransformerClassifier(num_layers=1, d_model=32, num_heads=2,
2 conv_hidden_dim=128, input_vocab_size=50002, num_answers=2)
3 model.to(device)

Cell In[17], line 5, in TransformerClassifier.__init__(self, num_layers, d_model, num_heads, conv_hidden_dim, input_vocab_size, num_answers)
2 def __init__(self, num_layers, d_model, num_heads, conv_hidden_dim, input_vocab_size, num_answers):
3 super().__init__()
----> 5 self.encoder = Encoder(num_layers, d_model, num_heads, conv_hidden_dim, input_vocab_size,
6 maximum_position_encoding=10000)
7 self.dense = nn.Linear(d_model, num_answers)

Cell In[11], line 9, in Encoder.__init__(self, num_layers, d_model, num_heads, ff_hidden_dim, input_vocab_size, maximum_position_encoding, p)
6 self.d_model = d_model
7 self.num_layers = num_layers
----> 9 self.embedding = Embeddings(d_model, input_vocab_size,maximum_position_encoding, p)
11 self.enc_layers = nn.ModuleList()
12 for _ in range(num_layers):

Cell In[10], line 17, in Embeddings.__init__(self, d_model, vocab_size, max_position_embeddings, p)
15 self.word_embeddings = nn.Embedding(vocab_size, d_model, padding_idx=1)
16 self.position_embeddings = nn.Embedding(max_position_embeddings, d_model)
---> 17 create_sinusoidal_embeddings(
18 nb_p=max_position_embeddings,
19 dim=d_model,
20 E=self.position_embeddings.weight
21 )
23 self.LayerNorm = nn.LayerNorm(d_model, eps=1e-12)

Cell In[10], line 6, in create_sinusoidal_embeddings(nb_p, dim, E)
1 def create_sinusoidal_embeddings(nb_p, dim, E):
2 theta = np.array([
3 [p / np.power(10000, 2 * (j // 2) / dim) for j in range(dim)]
4 for p in range(nb_p)
5 ])
----> 6 E[:, 0::2] = torch.FloatTensor(np.sin(theta[:, 0::2]))
7 E[:, 1::2] = torch.FloatTensor(np.cos(theta[:, 1::2]))
8 E.requires_grad = False

RuntimeError: a view of a leaf Variable that requires grad is being used in an in-place operation.

Anyway, thank you for this great course. I'm really enjoying learning about deep learning :)

@Carlsans
Author

The last error can be solved easily by adding detach():
E[:, 0::2] = torch.FloatTensor(np.sin(theta[:, 0::2])).detach()
E[:, 1::2] = torch.FloatTensor(np.cos(theta[:, 1::2])).detach()
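For completeness, another way around the same error (a sketch on my part, not code taken from the notebook) is to fill the table under torch.no_grad(), so the in-place writes into the embedding weight (a leaf Parameter with requires_grad=True) are not tracked by autograd:

import numpy as np
import torch

def create_sinusoidal_embeddings(nb_p, dim, E):
    # Fixed sinusoidal position table, written into E = self.position_embeddings.weight
    theta = np.array([
        [p / np.power(10000, 2 * (j // 2) / dim) for j in range(dim)]
        for p in range(nb_p)
    ])
    # E is a leaf Parameter that requires grad, so do the in-place copy without autograd tracking
    with torch.no_grad():
        E[:, 0::2] = torch.FloatTensor(np.sin(theta[:, 0::2]))
        E[:, 1::2] = torch.FloatTensor(np.cos(theta[:, 1::2]))
    E.requires_grad = False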
