support compact/encoded RLE masks in COCO dataset wrapper #8351

Open

rizavelioglu opened this issue Mar 22, 2024 · 5 comments · May be fixed by #8387

Comments

@rizavelioglu
Contributor

rizavelioglu commented Mar 22, 2024

🐛 Describe the bug

Issue

Following the tutorial Getting started with transforms v2, I have successfully constructed my custom COCO-style dataset using the provided example code:

from torchvision.datasets import CocoDetection, wrap_dataset_for_transforms_v2

dataset = CocoDetection(root=..., annFile=..., transforms=...)
dataset = wrap_dataset_for_transforms_v2(dataset)  # Now the dataset returns TVTensors!

This works as expected. However, I encountered an issue regarding the expected format for segmentation masks inside the annotation file (annFile) of the custom dataset. Currently, the implementation only accepts segmentation masks in the form of polygons or uncompressed RLE, and fails on the compressed RLE format.

Detailed Explanation

In COCO, each annotation has an iscrowd attribute, which is either 0 (the instance represents a single object) or 1 (the instance represents a collection of objects, e.g. a crowd of people). Depending on the iscrowd value, a different segmentation format is used: polygons (iscrowd=0) or uncompressed RLE (iscrowd=1) (see the official COCO docs). Compressed RLE is simply the encoded, i.e. compressed, version of uncompressed RLE. See the examples below:

# `iscrowd` = 0 (polygon): [[x1, y1, x2, y2, ...], ...], where x, y are the coordinates of the polygon vertices
"segmentation"=[
    [
        289.74,
        443.39,
        289.74,
        443.39,
    ],
    [
        289.74,
        443.39,
        440.39,
        289.74,
    ]
]

# `iscrowd` = 1 (uncompressed RLE):
"segmentation": {
    "counts": [
        272,
        2,
        4,
        4
    ],
    "size": [
        240,
        320
    ]
}

# The compact/encoded/compressed RLE format: {"size": [height, width], "counts": str}
"segmentation":{
    "size":[
        2400,
        2400
    ],
    "counts":"PQRQ2[1\\Y2f0gNVNRhMg2"
}

The compressed RLE is derived via (see COCO):

# Convert polygon, bbox, and uncompressed RLE to encoded RLE mask.
compressed_RLE = pycocotools.mask.frPyObjects(uncompressed_RLE, *canvas_size)

Although many datasets store their masks as either polygons or uncompressed RLE, it would be beneficial to support other datasets (including mine) whose masks follow the compressed RLE format, which, to my understanding, is also a COCO-supported format.
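
For reference, a compressed RLE annotation is typically what pycocotools.mask.encode produces from a binary mask; here is a minimal, illustrative sketch (toy mask, not from my dataset):

# Sketch: how a compressed RLE annotation is typically produced with pycocotools
import numpy as np
from pycocotools import mask as mask_utils

binary_mask = np.zeros((200, 200), dtype=np.uint8)
binary_mask[50:150, 50:150] = 1  # a toy square object

rle = mask_utils.encode(np.asfortranarray(binary_mask))  # {'size': [200, 200], 'counts': b'...'}
rle["counts"] = rle["counts"].decode("ascii")  # annotation JSON files store `counts` as a string
print(rle)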

Reproduce Issue

The issue lies in the coco_dectection_wrapper_factory function (which also has a typo in its name):

@WRAPPER_FACTORIES.register(datasets.CocoDetection)
def coco_dectection_wrapper_factory(dataset, target_keys):

which defines the segmentation_to_mask function that first converts polygon and uncompressed RLE to compressed RLE, then returns the decoded binary mask:

def segmentation_to_mask(segmentation, *, canvas_size):
    from pycocotools import mask

    segmentation = (
        mask.frPyObjects(segmentation, *canvas_size)
        if isinstance(segmentation, dict)
        else mask.merge(mask.frPyObjects(segmentation, *canvas_size))
    )
    return torch.from_numpy(mask.decode(segmentation))

So, here is a code snippet to reproduce the issue:

import torch

# A mask in compressed RLE format
segmentation = {
    'size': [200, 200],
    'counts': '\\`>>c5<F9I:F<Eg0YOo0QO8H7J3L5L2N2N2O1N2O1N2O1O1O001O000000O10001N100O2O0O2N2N1O2M3N3M2M3N3K5K5I7G9C>^Oa0E<F;G9F>@b\\>'
}

def segmentation_to_mask(segmentation, *, canvas_size):
    from pycocotools import mask

    segmentation = (
        mask.frPyObjects(segmentation, *canvas_size)
        if isinstance(segmentation, dict)
        else mask.merge(mask.frPyObjects(segmentation, *canvas_size))
    )
    return torch.from_numpy(mask.decode(segmentation))

segmentation_to_mask(segmentation, canvas_size=(200,200))

which throws:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Cell In[1], line 16
      9     segmentation = (
     10         mask.frPyObjects(segmentation, *canvas_size)
     11         if isinstance(segmentation, dict)
     12         else mask.merge(mask.frPyObjects(segmentation, *canvas_size))
     13     )
     14     return torch.from_numpy(mask.decode(segmentation))
---> 16 segmentation_to_mask(segmentation, canvas_size=(200,200))

Cell In[1], line 10, in segmentation_to_mask(segmentation, canvas_size)
      6 def segmentation_to_mask(segmentation, *, canvas_size):
      7     from pycocotools import mask
      9     segmentation = (
---> 10         mask.frPyObjects(segmentation, *canvas_size)
     11         if isinstance(segmentation, dict)
     12         else mask.merge(mask.frPyObjects(segmentation, *canvas_size))
     13     )
     14     return torch.from_numpy(mask.decode(segmentation))

File pycocotools/_mask.pyx:306, in pycocotools._mask.frPyObjects()

File pycocotools/_mask.pyx:279, in pycocotools._mask.frUncompressedRLE()

ValueError: invalid literal for int() with base 10: '\\`>>c5<F9I:F<Eg0YOo0QO8H7J3L5L2N2N2O1N2O1N2O1O1O001O000000O10001N100O2O0O2N2N1O2M3N3M2M3N3K5K5I7G9C>^Oa0E<F;G9F>@b\\>'

Possible Fix

import torch
from pycocotools import mask
import matplotlib.pyplot as plt

segmentation = {
    'size': [200, 200],
    'counts': '\\`>>c5<F9I:F<Eg0YOo0QO8H7J3L5L2N2N2O1N2O1N2O1O1O001O000000O10001N100O2O0O2N2N1O2M3N3M2M3N3K5K5I7G9C>^Oa0E<F;G9F>@b\\>'
}

def segmentation_to_mask_2(segmentation, *, canvas_size):
    try:
        segmentation = (
            mask.frPyObjects(segmentation, *canvas_size)
            if isinstance(segmentation, dict)
            else mask.merge(mask.frPyObjects(segmentation, *canvas_size))
        )
    except (TypeError, ValueError):
        # `frPyObjects` fails on compressed RLE (string `counts`);
        # in that case the segmentation can be decoded directly below
        pass
    return torch.from_numpy(mask.decode(segmentation))


def segmentation_to_mask_3(segmentation, *, canvas_size):
    if isinstance(segmentation, dict):
        if isinstance(segmentation["counts"], str):
            # Already compressed RLE, nothing to convert
            pass
        else:
            # Convert uncompressed RLE to compressed RLE
            segmentation = mask.frPyObjects(segmentation, *canvas_size)
    else:
        # Convert polygons to compressed RLE
        segmentation = mask.merge(mask.frPyObjects(segmentation, *canvas_size))

    return torch.from_numpy(mask.decode(segmentation))

# Create a figure and axes
fig, axes = plt.subplots(1, 3, figsize=(10, 5))

# Show actual mask
axes[0].imshow(mask.decode(segmentation))
axes[0].set_title('Real segmentation')
axes[0].axis('off')

# Show output of suggested fix 2
axes[1].imshow(segmentation_to_mask_2(segmentation, canvas_size=(200, 200)))
axes[1].set_title('Output_2')
axes[1].axis('off')

# Show output of suggested fix 3
axes[2].imshow(segmentation_to_mask_3(segmentation, canvas_size=(200, 200)))
axes[2].set_title('Output_3')
axes[2].axis('off')

# Adjust layout
plt.tight_layout()

# Show the images
plt.show()
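
For a quick non-visual check, both fixes should reproduce the mask that pycocotools decodes directly:

# Sanity check: both fixes should match the directly decoded reference mask
reference = torch.from_numpy(mask.decode(segmentation))
assert torch.equal(segmentation_to_mask_2(segmentation, canvas_size=(200, 200)), reference)
assert torch.equal(segmentation_to_mask_3(segmentation, canvas_size=(200, 200)), reference)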

I would be glad to open a PR.

Versions

PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.4 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.26.4
Libc version: glibc-2.35

Python version: 3.10.13 (main, Sep 11 2023, 13:44:35) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-6.5.0-21-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 11.8.89
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: Quadro RTX 5000
Nvidia driver version: 525.147.05
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 46 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 24
On-line CPU(s) list: 0-23
Vendor ID: GenuineIntel
Model name: Intel(R) Xeon(R) W-2265 CPU @ 3.50GHz
CPU family: 6
Model: 85
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 1
Stepping: 7
CPU max MHz: 4800,0000
CPU min MHz: 1200,0000
BogoMIPS: 6999.82
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 cdp_l3 invpcid_single intel_ppin ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb intel_pt avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req vnmi avx512_vnni md_clear flush_l1d arch_capabilities
Virtualization: VT-x
L1d cache: 384 KiB (12 instances)
L1i cache: 384 KiB (12 instances)
L2 cache: 12 MiB (12 instances)
L3 cache: 19,3 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-23
Vulnerability Gather data sampling: Mitigation; Microcode
Vulnerability Itlb multihit: KVM: Mitigation: VMX disabled
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Mitigation; Clear CPU buffers; SMT vulnerable
Vulnerability Retbleed: Mitigation; Enhanced IBRS
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Enhanced / Automatic IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Mitigation; TSX disabled

Versions of relevant libraries:
[pip3] gpytorch==1.11
[pip3] numpy==1.24.3
[pip3] torch==2.0.1
[pip3] torchaudio==2.0.2
[pip3] torchmetrics==1.1.0
[pip3] torchvision==0.15.2
[pip3] triton==2.0.0
[conda] blas 1.0 mkl
[conda] gpytorch 1.11 pypi_0 pypi
[conda] mkl 2023.1.0 h6d00ec8_46342
[conda] mkl-service 2.4.0 py310h5eee18b_1
[conda] mkl_fft 1.3.6 py310h1128e8f_1
[conda] mkl_random 1.2.2 py310h1128e8f_1
[conda] numpy 1.24.3 py310h5f9d8c6_1
[conda] numpy-base 1.24.3 py310hb5e798b_1
[conda] torch 2.0.1 pypi_0 pypi
[conda] torchaudio 2.0.2 pypi_0 pypi
[conda] torchmetrics 1.1.0 pypi_0 pypi
[conda] torchvision 0.15.2 pypi_0 pypi
[conda] triton 2.0.0 pypi_0 pypi

@rizavelioglu
Contributor Author

@NicolasHug kindly pinging this

@NicolasHug
Member

Hi @rizavelioglu

Supporting encoded RLE masks sounds reasonable, but before we proceed, can you show me how you would like to define and wrap your custom dataset? Does it directly inherit from torchvision.datasets.CocoDetection?

@rizavelioglu
Contributor Author

Sure!
Yes. Since my dataset is in COCO format, I construct it via torchvision.datasets.CocoDetection:

from torchvision.datasets import CocoDetection, wrap_dataset_for_transforms_v2

# Load dataset in COCO format
dataset = CocoDetection(img_dir, annFile)

# Convert target into required types
dataset = wrap_dataset_for_transforms_v2(dataset)

# Construct dataloader
data_loader = torch.utils.data.DataLoader(dataset, batch_size)
...
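
If I understand the wrapper correctly, target_keys can also be passed to restrict what the wrapped dataset returns, e.g.:

# Optionally keep only selected target fields (sketch)
dataset = wrap_dataset_for_transforms_v2(dataset, target_keys=("boxes", "labels", "masks"))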

@NicolasHug
Member

Thank you @rizavelioglu - I have opened #8387 to support encoded RLE segmentation. Do you mind taking a look at the PR and confirming that this is indeed what you needed? Thanks!

@rizavelioglu
Contributor Author

Thank you @NicolasHug - I confirm that this is indeed what was needed. Thanks!
