
Anomaly with RandomGrayScale tests #2002

Open
ashnair1 opened this issue Apr 16, 2024 · 2 comments · May be fixed by #2008
Labels
testing (Continuous integration testing) · transforms (Data augmentation transforms)

Comments

ashnair1 (Collaborator) commented Apr 16, 2024

Description

I noticed something strange while running the grayscale tests: passing sample through our AugmentationSequential mutates sample["image"] in place, leaving it equal to output["image"].

Steps to reproduce

In theory, the following change should allow the test to pass, but it does not.

@pytest.mark.parametrize(
    "weights",
    [
        torch.tensor([1.0, 1.0, 1.0]),
        torch.tensor([0.299, 0.587, 0.114]),
        torch.tensor([1.0, 2.0, 3.0]),
    ],
)
def test_random_grayscale_sample(weights: Tensor, sample: dict[str, Tensor]) -> None:
    aug = AugmentationSequential(RandomGrayscale(weights, p=1, keepdim=True), data_keys=["image"])
+    from copy import deepcopy
+    tmp = deepcopy(sample)
    output = aug(sample)
-    assert output["image"].shape == sample["image"].shape
-    assert output["image"].sum() == sample["image"].sum()
+    assert output["image"].shape == tmp["image"].shape
+    assert output["image"].sum() == tmp["image"].sum()
    for i in range(1, 3):
        assert torch.allclose(output["image"][0], output["image"][i])

Instead, it fails with the following errors:

========================================================================= short test summary info ==========================================================================
FAILED tests/transforms/test_color.py::test_random_grayscale_sample[weights1] - assert tensor(985.9200) == tensor(1128.)
FAILED tests/transforms/test_color.py::test_random_grayscale_sample[weights2] - assert tensor(1384.) == tensor(1128.)
======================================================================= 2 failed, 4 passed in 2.35s ========================================================================
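Note that only the uniform weights [1.0, 1.0, 1.0] pass above. Assuming the transform computes a normalized weighted mean of the channels and replicates the gray value across all three output channels (an assumption about the implementation, but consistent with the numbers above), the sum is preserved only when the weights are equal. A minimal sketch:

import torch

image = torch.rand(3, 4, 4)  # hypothetical 3-channel image

for weights in [
    torch.tensor([1.0, 1.0, 1.0]),
    torch.tensor([0.299, 0.587, 0.114]),
    torch.tensor([1.0, 2.0, 3.0]),
]:
    w = weights / weights.sum()  # assume weights are normalized to sum to 1
    gray = (w[:, None, None] * image).sum(dim=0, keepdim=True)
    out = gray.repeat(3, 1, 1)  # assume gray value is replicated across 3 channels
    # sum(out) = 3 * sum(w · pixel), which equals sum(image) only for w = [1/3, 1/3, 1/3]
    print(weights.tolist(), out.sum().item(), image.sum().item())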

Version

0.6.0.dev0 (abceea0)

adamjstewart added the testing and transforms labels Apr 17, 2024
adamjstewart (Collaborator) commented:

> Passing sample through our AugmentationSequential mutates sample["image"] in place, leaving it equal to output["image"].

This looks like a bug in our AugmentationSequential wrapper. The upstream Kornia implementation does not suffer from this bug. This can be tested with:

import kornia.augmentation as K
import torch
import torchgeo.transforms as T

# Kornia's dict interface: pass data_keys=None and feed a dict directly.
image = torch.randn(1, 1, 3, 3)
aug_dict = K.AugmentationSequential(K.RandomHorizontalFlip(p=1.0), data_keys=None)
data = {"image": image}

print("\nKornia:")
print(data["image"])
out = aug_dict(data)
print(data["image"])  # unchanged: the input dict is not mutated
print(out["image"])

# TorchGeo's wrapper with the equivalent configuration.
image = torch.randn(1, 1, 3, 3)
aug_dict = T.AugmentationSequential(K.RandomHorizontalFlip(p=1.0), data_keys=["image"])
data = {"image": image}

print("\nTorchGeo:")
print(data["image"])
out = aug_dict(data)
print(data["image"])  # mutated in place: now equal to out["image"]
print(out["image"])

I wouldn't devote too much energy to fixing this. We should instead remove our implementation.
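For reference, a minimal sketch of that direction, relying on upstream Kornia's dict interface (data_keys=None) rather than our wrapper; the transform choice here is illustrative:

import kornia.augmentation as K
import torch

aug = K.AugmentationSequential(K.RandomHorizontalFlip(p=1.0), data_keys=None)
data = {"image": torch.randn(1, 1, 3, 3)}
out = aug(data)

# The input is left untouched, so the output is exactly the flip of it.
assert torch.allclose(out["image"], torch.flip(data["image"], dims=[-1]))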

adamjstewart (Collaborator) commented:

> In theory, the following change should allow the test to pass, but it does not.

Actually, and fully realizing that I was the one who wrote these tests, I see no reason why they should pass. As an example, imagine an image with 3 channels: the first channel is white (255) and the other channels are black (0). With weights=[0, 0, 1] we get one output (all black), and with weights=[1, 0, 0] a completely different one (all white, matching the first channel). I think we should remove the assertion at the end that the sums are equal.
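A quick numeric check of that example, using the same normalized-weighted-mean sketch as above (again an assumption, not necessarily the exact implementation):

import torch

# First channel white (255), other two black (0).
image = torch.zeros(3, 4, 4)
image[0] = 255.0

for weights in [torch.tensor([0.0, 0.0, 1.0]), torch.tensor([1.0, 0.0, 0.0])]:
    w = weights / weights.sum()
    gray = (w[:, None, None] * image).sum(dim=0, keepdim=True).repeat(3, 1, 1)
    print(weights.tolist(), gray.unique().tolist(), gray.sum().item(), image.sum().item())

# [0, 0, 1] -> all zeros (sum 0); [1, 0, 0] -> all 255s (sum 3x the input's).
# Neither case preserves the input sum, so the sum assertion cannot hold in general.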

ashnair1 linked a pull request (#2008) Apr 17, 2024 that will close this issue