
RandomAffine and RandomElasticDeformation on differently sized and oriented images #734

Open

Spenhouet opened this issue Nov 10, 2021 · 12 comments
Labels: enhancement (New feature or request)

@Spenhouet

🚀 Feature
We have a full-size image and small-FoV labels with a higher resolution.
Unfortunately, the RandomAffine and RandomElasticDeformation augmentations do not run on this combination because of the size difference.
We cannot upscale the labels, since the result would not fit in RAM.
We have to perform all augmentations on the individually sized images.
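
For context, a minimal sketch of the failing case (the shapes and affines below are made up for illustration; TorchIO's spatial transforms check that all images in a subject share the same spatial shape):

```python
import numpy as np
import torch
import torchio as tio

# Full-size, lower-resolution image (e.g. 2 mm isotropic)
t1 = tio.ScalarImage(
    tensor=torch.rand(1, 128, 128, 128),
    affine=np.diag([2.0, 2.0, 2.0, 1.0]),
)
# Small-FoV, higher-resolution labels (e.g. 0.5 mm isotropic)
label = tio.LabelMap(
    tensor=torch.zeros(1, 64, 64, 64, dtype=torch.int64),
    affine=np.diag([0.5, 0.5, 0.5, 1.0]),
)
subject = tio.Subject(t1=t1, label=label)

tio.RandomAffine()(subject)  # raises RuntimeError because the spatial shapes differ
```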

Motivation

For us this would be very helpful. We mostly work with MONAI, but frankly, the augmentations of torch.io are just better.
The respective MONAI augmentations also do not support this.
They do not throw an error like torch.io does (which in this case is better), but they output images that no longer fit in terms of their affine (and probably also not in terms of the augmentation).

Would you need example files for that? Is this something to consider?

Spenhouet added the enhancement (New feature or request) label on Nov 10, 2021
@fepegar
Owner

fepegar commented Nov 10, 2021

Hi, @Spenhouet. I think I understand the problem. It makes sense. Maybe we can add a flag to those transforms to not check for spatial consistency, as long as users know what they're doing. Also, we might need to be careful with the center of rotation as the image center is used by default at the moment. Anyway, could you please share one image and the corresponding label?

@Spenhouet
Author

> add a flag to those transforms to not check for spatial consistency

Just to make sure we are on the same page: the transforms would still need to be spatially consistent in the sense that the same transform is applied to the image and the labels.

Here the example image + labels:
example_images.zip

@fepegar
Owner

fepegar commented Nov 11, 2021

This can be trivially solved for RandomAffine, but unfortunately not for RandomElasticDeformation. I can open a PR right away for the former, but I currently do not have the bandwidth to work on the latter. Is that very problematic?
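
For readers wondering why the affine case is the easy one: an affine can be defined once in physical (world) space and then applied to each image on its own grid, so the FOV and voxel size no longer matter. A minimal sketch using SimpleITK, assuming `t1_image` and `label_image` are `sitk.Image` objects and the rotation parameters are made up:

```python
import SimpleITK as sitk

def apply_world_affine(image, transform, interpolator=sitk.sitkLinear):
    # The transform lives in physical space, so the same instance
    # works for any voxel size, orientation, or field of view.
    return sitk.Resample(image, image, transform, interpolator, 0.0)

# One rotation shared by both images; setting an explicit center matters,
# since by default TorchIO rotates around each image's own center.
transform = sitk.AffineTransform(3)
transform.SetCenter(t1_image.TransformContinuousIndexToPhysicalPoint(
    [(s - 1) / 2 for s in t1_image.GetSize()]))  # center of the big image
transform.Rotate(0, 1, 0.1)  # 0.1 rad about one axis pair, hypothetical

t1_aug = apply_world_affine(t1_image, transform)
label_aug = apply_world_affine(label_image, transform, sitk.sitkNearestNeighbor)
```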

@Spenhouet
Author

@fepegar I'm happy with whatever you have time for 👍

Should I split it off into a separate feature request?

@fepegar
Owner

fepegar commented Nov 11, 2021

> @fepegar I'm happy with whatever you have time for 👍

Ok, I'll open the PR and ping you.

> Should I split it off into a separate feature request?

I think we don't need to. However, I really won't be able to tackle the elastic one. Contributions are of course welcome!

@Spenhouet
Author

Spenhouet commented Nov 11, 2021

Thanks!

I might give the ElasticDeformation a quick look, but my time is currently also very limited.
Is it an issue if this just stays open, then?

@fepegar
Owner

fepegar commented Nov 11, 2021

> I might give the ElasticDeformation a quick look, but my time is currently also very limited.

Good luck! The docs should help you understand how the deformation field is generated and how it's applied.

> Is it an issue if this just stays open, then?

I think it's fine to keep it open.

fepegar added a commit that referenced this issue Nov 11, 2021
fepegar added a commit that referenced this issue Nov 11, 2021
@fepegar
Owner

fepegar commented Nov 11, 2021

Patch for RandomAffine available in v0.18.64.
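
Usage with the patched transform might look like the sketch below; I'm assuming the parameter names `check_shape` (skips the spatial-consistency check) and `center='origin'` (rotates about the world origin, which is shared by all images) from the release:

```python
import torchio as tio

transform = tio.RandomAffine(
    degrees=10,
    check_shape=False,  # assumed flag: skip the spatial-consistency check
    center='origin',    # assumed flag: rotate about the world origin,
                        # not each image's own center
)
augmented = transform(subject)  # subject with differently sized image + labels
```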

@romainVala
Contributor

Hi,
just out of curiosity, @Spenhouet: how do you manage to match the output of the model (I guess the same size as the T1 input) with the labels that have a different box size and resolution?
At some point you must add a reslice, right?

@Spenhouet
Author

Spenhouet commented Nov 12, 2021

@romainVala After the data augmentations (and some other operations) we perform a crop on the T1 (and, respectively, on the labels) based on the bounding box of the labels. We then also upscale the T1 and merge all labels (which still have different box sizes and affines). So in the end we have a small FoV for the image and labels with matching shapes, but the augmentations are performed globally. Performing the augmentations only on the FoV would give unrealistic results.
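
A sketch of that bounding-box crop step, assuming hypothetical `label_array`/`t1_array` NumPy volumes with 4×4 `label_affine`/`t1_affine` matrices (not the actual pipeline code):

```python
import numpy as np

def bbox_corners_world(mask, affine):
    """World-space corners of the mask's nonzero bounding box (sketch)."""
    nz = np.nonzero(mask)
    lo = [int(idx.min()) for idx in nz]
    hi = [int(idx.max()) for idx in nz]
    # All 8 corners of the index-space box, in homogeneous coordinates
    corners = np.array([[x, y, z, 1.0] for x in (lo[0], hi[0])
                                       for y in (lo[1], hi[1])
                                       for z in (lo[2], hi[2])])
    return (affine @ corners.T).T[:, :3]

# Map the label's bounding box into the T1's index space via both affines
world = bbox_corners_world(label_array, label_affine)
homog = np.c_[world, np.ones(len(world))]
ijk = (np.linalg.inv(t1_affine) @ homog.T).T[:, :3]
lo = np.floor(ijk.min(axis=0)).astype(int)
hi = np.ceil(ijk.max(axis=0)).astype(int)
t1_crop = t1_array[lo[0]:hi[0], lo[1]:hi[1], lo[2]:hi[2]]  # clip to bounds in practice
```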

Example output (augmented): [image]
Same image, heavier augmentations: [image]

@romainVala
Contributor

Interesting way to save space, with multiple label files each covering only a subpart of the total FOV...
I also struggle to get enough memory with a 3D CNN and 15 tissue classes to segment, but I stay at 1 mm resolution, so it fits with input and target at the same FOV.

With your approach, making the transforms work is difficult to handle.
For the affine transform it is fine, since the same transform is applied whatever the exact FOV or voxel size. But for elastic deformation it may be difficult...

It implies computing the deformation field at the low resolution, interpolating it at the high resolution, and then cropping it to the exact FOV of the specific label at hand...
No idea if this is even possible within sitk?
Even if it is possible, it would complicate RandomElasticTransform, since it would need to know which image is the main reference (the one with the bigger FOV).

Or do you see another workaround?

(Just out of curiosity about what changes would be needed.)
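
For what it's worth, SimpleITK's B-spline transform is defined over a physical-space domain, so in principle a single transform instance can deform images sampled on different grids; the main-reference question then reduces to choosing which image defines that domain. A hedged sketch (the mesh size and displacement range are made up):

```python
import numpy as np
import SimpleITK as sitk

# Define the B-spline deformation over the LARGE image's physical domain
mesh_size = [7, 7, 7]  # control-point mesh, hypothetical
bspline = sitk.BSplineTransformInitializer(t1_image, mesh_size)

# Random control-point displacements, in mm
params = np.random.uniform(-5, 5, len(bspline.GetParameters()))
bspline.SetParameters(params.tolist())

# The transform lives in physical space, so it can resample both the big
# image and each small-FoV label on their own grids (the labels lie
# inside the T1 FOV, hence inside the transform domain)
t1_aug = sitk.Resample(t1_image, t1_image, bspline, sitk.sitkLinear, 0.0)
label_aug = sitk.Resample(label_image, label_image, bspline,
                          sitk.sitkNearestNeighbor, 0.0)
```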

@Spenhouet
Author

It is probably necessary to perform the crop first and then the interpolation; otherwise you get into out-of-RAM territory.
For the other transforms we implemented to make everything work, we do not necessarily work on the images themselves: every image maps into a target space with some bounding box.
We calculate the bounding box which spans all images in the target space.
To make this generally compatible, it probably makes sense to generate the grid in a low-res space like 1 mm isotropic,
and then project the individual grid points into the target space (cropped at the borders of the individual images).
I'm uncertain when to apply the B-splines. There will probably be some interpolation error between the individual spaces.
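
A rough sketch of that projection step, assuming a hypothetical `world_bbox` spanning all images and a coarse control-point spacing (the B-spline evaluation itself is omitted):

```python
import numpy as np

def project_grid_points(points_world, image_affine):
    """Map world-space grid points (N, 3) into an image's index space (sketch)."""
    homog = np.c_[points_world, np.ones(len(points_world))]
    return (np.linalg.inv(image_affine) @ homog.T).T[:, :3]

# Control points on a coarse isotropic grid spanning the global bounding box;
# world_bbox = [(x_min, x_max), (y_min, y_max), (z_min, z_max)] is hypothetical
xs, ys, zs = (np.arange(lo, hi, 30.0) for (lo, hi) in world_bbox)  # 30 mm spacing
grid = np.stack(np.meshgrid(xs, ys, zs, indexing='ij'), axis=-1).reshape(-1, 3)

label_ijk = project_grid_points(grid, label_affine)  # per-image index coordinates
```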
