Support Latent Variable Model in base training (#879)
Summary:
Pull Request resolved: #879

Pull Request resolved: pytorch/translate#598

Details in https://fb.workplace.com/notes/ning-dong/closing-research-to-production-gap-a-story-of-latent-variable-model-migration/443418839813586/

Reviewed By: xianxl

Differential Revision: D15742439

fbshipit-source-id: 168c84bd30a5da3c2fb404fcca74266deef1f964
cndn authored and facebook-github-bot committed Jul 17, 2019
1 parent e46b924 commit 1f5b414
1 changed file: fairseq/modules/learned_positional_embedding.py (2 additions, 1 deletion)
@@ -36,7 +36,8 @@ def forward(self, input, incremental_state=None, positions=None):
         if positions is None:
             if incremental_state is not None:
                 # positions is the same for every token when decoding a single step
-                positions = input.data.new(1, 1).fill_(self.padding_idx + input.size(1))
+                # Without the int() cast, it doesn't work in some cases when exporting to ONNX
+                positions = input.data.new(1, 1).fill_(int(self.padding_idx + input.size(1)))
             else:
                 positions = utils.make_positions(
                     input.data, self.padding_idx, onnx_trace=self.onnx_trace,
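The fix turns on a tracing detail. One plausible reading (a sketch, not part of the commit): under torch.jit tracing for ONNX export, input.size(1) can be recorded as a traced value rather than a plain Python int, so the argument handed to fill_() is not a constant scalar; int() forces it back to one. A minimal sketch, assuming PyTorch and hypothetical shapes and padding_idx value:

import torch

padding_idx = 1  # hypothetical value; fairseq derives it from the dictionary
input = torch.zeros(2, 5, dtype=torch.long)  # [bsz x seqlen], hypothetical shapes

# Eager mode: size(1) is a plain int, so fill_ receives the scalar 6 either way.
pos = input.data.new(1, 1).fill_(padding_idx + input.size(1))

# During tracing, the shape arithmetic may not reduce to a Python scalar;
# wrapping it in int() guarantees fill_ sees a constant the exporter accepts.
pos_cast = input.data.new(1, 1).fill_(int(padding_idx + input.size(1)))

print(pos, pos_cast)  # both tensor([[6]]) in eager mode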
