
Enable UFMT on test_indexing&test_view_ops #125112

Closed · wants to merge 5 commits

Conversation

@MatrixPlayer (Contributor) commented Apr 28, 2024

Part of #123062

cc @ezyang


pytorch-bot bot commented Apr 28, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/125112

Note: Links to docs will display an error until the docs builds have been completed.

❌ 1 New Failure

As of commit 8158113 with merge base 7478b7f:

NEW FAILURE - The following job has failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@ezyang (Contributor) commented Apr 28, 2024

This diff doesn't seem right:

diff --git b/test/test_indexing.py a/test/test_indexing.py
index f34fa4c5669..21c4624e23b 100644
--- b/test/test_indexing.py
+++ a/test/test_indexing.py
@@ -861,14 +861,12 @@ class TestIndexing(TestCase):
         )
         uint8Indices = torch.tensor([1, 0, 0], dtype=torch.uint8, device=device)
         with warnings.catch_warnings(record=True) as w:
-            v1 = v[boolIndices]
-            v2 = v[uint8Indices]
-            self.assertEqual(v1.shape, v2.shape)
-            self.assertEqual(v1, v2)
+            self.assertEqual(v[boolIndices].shape, v[uint8Indices].shape)
+            self.assertEqual(v[boolIndices], v[uint8Indices])
             self.assertEqual(
                 v[boolIndices], tensor([True], dtype=torch.bool, device=device)
             )
-            self.assertEqual(len(w), 1)
+            self.assertEqual(len(w), 2)
 
     def test_bool_indices_accumulate(self, device):
         mask = torch.zeros(size=(10,), dtype=torch.bool, device=device)
@@ -887,10 +885,9 @@ class TestIndexing(TestCase):
         v = torch.randn(5, 7, 3, device=device)
         mask = torch.ByteTensor([1, 0, 1, 1, 0]).to(device)
         with warnings.catch_warnings(record=True) as w:
-            res = v[mask]
-            self.assertEqual(res.shape, (3, 7, 3))
-            self.assertEqual(res, torch.stack([v[0], v[2], v[3]]))
-            self.assertEqual(len(w), 1)
+            self.assertEqual(v[mask].shape, (3, 7, 3))
+            self.assertEqual(v[mask], torch.stack([v[0], v[2], v[3]]))
+            self.assertEqual(len(w), 2)
 
         v = torch.tensor([1.0], device=device)
         self.assertEqual(v[v == 0], torch.tensor([], device=device))

@MatrixPlayer (Contributor, Author)

> This diff doesn't seem right
Fixed this issue, please retrigger CI, thanks :-) @ezyang

@ezyang (Contributor) commented Apr 29, 2024

You are still inlining the masking calls, whereas previously the test ran the mask call once and then tested the result:

diff --git b/test/test_indexing.py a/test/test_indexing.py
index f34fa4c5669..9b09ad06a8b 100644
--- b/test/test_indexing.py
+++ a/test/test_indexing.py
@@ -861,10 +861,8 @@ class TestIndexing(TestCase):
         )
         uint8Indices = torch.tensor([1, 0, 0], dtype=torch.uint8, device=device)
         with warnings.catch_warnings(record=True) as w:
-            v1 = v[boolIndices]
-            v2 = v[uint8Indices]
-            self.assertEqual(v1.shape, v2.shape)
-            self.assertEqual(v1, v2)
+            self.assertEqual(v[boolIndices].shape, v[uint8Indices].shape)
+            self.assertEqual(v[boolIndices], v[uint8Indices])
             self.assertEqual(
                 v[boolIndices], tensor([True], dtype=torch.bool, device=device)
             )
@@ -887,9 +885,8 @@ class TestIndexing(TestCase):
         v = torch.randn(5, 7, 3, device=device)
         mask = torch.ByteTensor([1, 0, 1, 1, 0]).to(device)
         with warnings.catch_warnings(record=True) as w:
-            res = v[mask]
-            self.assertEqual(res.shape, (3, 7, 3))
-            self.assertEqual(res, torch.stack([v[0], v[2], v[3]]))
+            self.assertEqual(v[mask].shape, (3, 7, 3))
+            self.assertEqual(v[mask], torch.stack([v[0], v[2], v[3]]))
             self.assertEqual(len(w), 1)
 
         v = torch.tensor([1.0], device=device)

I'm sure it also works to test it this way, but it's the principle of the thing (lints should not change code meaning)
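The point about warning counts can be sketched without PyTorch. `warnings.catch_warnings(record=True)` records one entry per warning actually emitted, so an operation that warns (like indexing with a deprecated uint8 mask) bumps `len(w)` once per call. The `masked_select` helper below is a hypothetical stand-in for `v[mask]`, not the real PyTorch operation; it only illustrates why inlining the call changes `len(w)` from 1 to 2.

```python
import warnings

# Hypothetical stand-in for uint8 (byte) mask indexing: each call emits a
# deprecation-style warning, just as v[mask] does for a ByteTensor mask.
def masked_select(values, mask):
    warnings.warn("indexing with a uint8 mask is deprecated", UserWarning)
    return [v for v, m in zip(values, mask) if m]

v = [10, 20, 30, 40, 50]
mask = [1, 0, 1, 1, 0]

# Original test shape: run the mask call once, then assert on the stored result.
with warnings.catch_warnings(record=True) as w:
    warnings.simplefilter("always")
    res = masked_select(v, mask)
    assert res == [10, 30, 40]
    assert len(w) == 1  # one call -> one recorded warning

# Inlined shape: each assertion re-runs the indexing, so the warning fires twice.
with warnings.catch_warnings(record=True) as w:
    warnings.simplefilter("always")
    assert masked_select(v, mask) == [10, 30, 40]
    assert len(masked_select(v, mask)) == 3
    assert len(w) == 2  # two calls -> two recorded warnings
```

This is why the reformatted test only passed after its `len(w)` expectations were bumped: the lint-driven inlining silently changed how many times the warning-emitting operation ran.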

@MatrixPlayer (Contributor, Author)

I'm not sure why this code snippet was inlined; anyway, I've reverted these changes.

@cpuhrsch requested a review from ezyang April 30, 2024 19:46
@cpuhrsch added the triaged label (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module) Apr 30, 2024
@ezyang (Contributor) commented May 1, 2024

clean

@ezyang (Contributor) commented May 1, 2024

@pytorchbot merge

pytorch-bot bot commented May 1, 2024

This PR needs to be approved by an authorized maintainer before merge.

@ezyang (Contributor) commented May 1, 2024

@pytorchbot merge

pytorch-bot bot added the ciflow/trunk label (Trigger trunk jobs on your pull request) May 1, 2024
@pytorchmergebot (Collaborator)

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status.

@pytorchmergebot (Collaborator)

Merge failed

Reason: 1 jobs have failed, first few of them are: docker-builds / docker-build (linux.12xlarge, pytorch-linux-focal-py3.8-clang10)


@ezyang (Contributor) commented May 1, 2024

@pytorchbot merge -f "spurious failure"

@pytorchmergebot (Collaborator)

Merge started

Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Please use -f as last resort and instead consider -i/--ignore-current to continue the merge ignoring current failures. This will allow currently pending tests to finish and report signal before the merge.


pytorch-bot bot pushed a commit that referenced this pull request May 3, 2024
Labels: ciflow/trunk, Merged, open source, topic: not user facing, triaged

5 participants