
[v.1.5.0] Ensure linearIndex of advanced indexing backwards is contiguous (#36962)

* [v.1.5.0] Ensure linearIndex of advanced indexing backwards is contiguous.

This is a more straightforward solution to the problem than #36957; I don't know about the relative performance.

Fixes: #36956

ghstack-source-id: 43c48eaee7232cd3ed2b108edbbee24c11e8321a
Pull Request resolved: #36959

* Fix test.
gchanan committed Apr 20, 2020
1 parent d7bdffa commit 4ff3872
Showing 2 changed files with 8 additions and 1 deletion.
aten/src/ATen/native/cuda/Indexing.cu: 2 changes (1 addition & 1 deletion)
@@ -192,7 +192,7 @@ void index_put_accum_kernel(Tensor & self, TensorList indices, const Tensor & va
   if (num_indices > 0 && sliceSize > 0) {
     const bool permuted = !src.is_contiguous();
     auto src_ = permuted ? src.contiguous() : src;
-    linearIndex = linearIndex.view(-1);
+    linearIndex = linearIndex.reshape(-1);
     auto sorted_indices = at::empty_like(linearIndex, LEGACY_CONTIGUOUS_MEMORY_FORMAT);
     auto orig_indices = at::empty_like(linearIndex, LEGACY_CONTIGUOUS_MEMORY_FORMAT);
     using device_ptr = thrust::device_ptr<int64_t>;
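
The one-line change replaces view(-1) with reshape(-1) when flattening linearIndex. view can only reinterpret the existing storage and raises a RuntimeError when the tensor's strides are incompatible with the requested shape, while reshape falls back to making a contiguous copy in that case. As described in #36956, the backward of advanced indexing can receive a channels_last index tensor, which leaves linearIndex non-contiguous. A minimal Python-level sketch of the difference (illustrative only, not part of the commit):

import torch

# A 4D tensor laid out in channels_last order is not contiguous in the default
# (row-major) sense, so its storage cannot be reinterpreted as a flat view.
t = torch.arange(2 * 8 * 1 * 2).reshape(2, 8, 1, 2)
t = t.contiguous(memory_format=torch.channels_last)
print(t.is_contiguous())  # False

try:
    t.view(-1)            # what the old kernel code effectively did with linearIndex
except RuntimeError as err:
    print("view failed:", err)

flat = t.reshape(-1)      # the fixed code path: copies when a view is impossible
print(flat.is_contiguous(), flat.shape)  # True torch.Size([32])
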
test/test_autograd.py: 7 changes (7 additions & 0 deletions)
@@ -5329,6 +5329,13 @@ def test_advanced_indexing_backwards_large(self, device):
         a.sum().backward()
         self.assertEqual(x.grad, torch.ones(n, 1, device=device))
 
+    def test_advanced_indexing_backwards_memory_format(self, device):
+        # See https://github.com/pytorch/pytorch/issues/36956
+        shape = (2, 8, 1, 2)
+        i = torch.randint(1, shape, device=device).contiguous(memory_format=torch.channels_last)
+        x = torch.randn(shape, requires_grad=True, device=device)
+        x[i].sum().backward()
+
     # test for backward in https://github.com/pytorch/pytorch/issues/15511
     def test_pdist_large(self, device):
         def func(x):
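
The new test follows the repro from issue #36956: build a channels_last index tensor, use it for advanced indexing, and run backward. Before this commit the CUDA backward path hit the view(-1) call above on a non-contiguous linearIndex and raised a RuntimeError. A standalone sketch of the same scenario, assuming a CUDA device is available (again illustrative, not part of the commit):

import torch

# Standalone version of the scenario the new test covers; requires CUDA since
# the fixed kernel lives in aten/src/ATen/native/cuda/Indexing.cu.
device = "cuda"
shape = (2, 8, 1, 2)
i = torch.randint(1, shape, device=device).contiguous(memory_format=torch.channels_last)
x = torch.randn(shape, requires_grad=True, device=device)
x[i].sum().backward()    # failed before this commit; succeeds with reshape(-1)
print(x.grad.shape)      # torch.Size([2, 8, 1, 2])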
