
Allow controlling intensity of RandomMotion transform #765

Open
dzenanz opened this issue Dec 1, 2021 · 16 comments
Labels
enhancement New feature or request

Comments

dzenanz commented Dec 1, 2021

🚀 Feature

Motivation

The intensity of the RandomMotion transform seems to depend mostly on the "time" of the motion. While the RandomGhosting transform exposes an intensity parameter, RandomMotion neither exposes the internal times parameter nor offers an intensity parameter.

Pitch

Either provide an intensity parameter, or allow setting the range of the times parameter used internally.

Alternatives

Save the image before the transform is applied, then "manually" blend the transformed one into the original with a custom weight, thus emulating intensity.

Additional context

I am trying to augment training by creating bad images for an image quality estimator, because most images in my training set are good. I would like to control the degree of corruption, e.g. to control whether I produce an image with a rating of 1/10 or 4/10.

dzenanz added the enhancement (New feature or request) label on Dec 1, 2021
romainVala (Contributor) commented

Well, controlling motion artefact severity is a difficult problem, because the exact same motion (say, a short translation of x mm) will induce very different artefacts depending on when the motion occurs. In the center of k-space (i.e. in the middle of your motion time course) the changes will be the worst, compared to the beginning.
Is that what you mean by controlling the time? I do not see how to control it: if you have more than two displacements, do you want to control the time of each displacement? Even then, the relation will not be obvious.

Regarding the intensity of the motion transform, the angle and the translation are directly related to it: a small motion (small translation and angle) will produce a less artefacted image, but only if you compare motions with the same timing, so again, not easy.

I am currently working on quantifying motion artefact severity, and having an estimate from the motion time course would be nice, but I have not found a simple way yet.

Maybe a better alternative is to compute a difference metric (L1, L2, NCC, ...) between the image before and after the motion artefact, and use that metric to approximate the artefact severity.
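
For illustration, a minimal sketch of that idea using a normalized L2 difference (artefact_severity is a hypothetical helper, not a TorchIO API; the FPG dataset is used only as example data):

import torch
import torchio as tio

def artefact_severity(original: torch.Tensor, corrupted: torch.Tensor) -> float:
    """Normalized L2 difference as a rough artefact-severity score."""
    original = original.float()
    corrupted = corrupted.float()
    return (torch.linalg.norm(corrupted - original) / torch.linalg.norm(original)).item()

subject = tio.datasets.FPG()
corrupted = tio.RandomMotion()(subject)  # transforms copy the subject by default
print(artefact_severity(subject.t1.data, corrupted.t1.data))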

dzenanz commented Dec 2, 2021

This answer itself is useful too. I guess my formula for converting corruption into a 0-10 scale is generally OK; it might need some fine-tuning.

romainVala (Contributor) commented

I do not agree: the strongest artefact appears at the k-space center, so in the middle. Maybe min(time, 1 - time)?
What I do not understand is how you account for the number of changes (len(time), or num_transforms in RandomMotion).

Actually, it is not the motion onset that is important, but the motion duration (i.e. the difference time[i+1] - time[i]).

dzenanz commented Dec 2, 2021

I thought I would only have one motion, instead of multiple:
motion = CustomMotion(p=0.2, degrees=5.0, translation=5.0, num_transforms=1)
in order to simplify my life. I guess I was misunderstanding how motion simulation works.

I think I am satisfied with how I handle artificial ghosting (ghosting = CustomGhosting(p=0.3, intensity=(0.2, 0.8))). What would be the most similar way to handle motion?

romainVala (Contributor) commented

OK, I see. With only one motion (num_transforms=1), I would still take min(time, 1 - time) * 2, so you get the maximum in the middle.
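
As a sketch, that heuristic for a single motion could be written as follows (severity_weight is an illustrative name, not part of TorchIO):

def severity_weight(t: float) -> float:
    """Heuristic severity for a single motion at time t in [0, 1]."""
    return min(t, 1 - t) * 2

severity_weight(0.1)  # 0.2: motion near the start, mild artefact
severity_weight(0.5)  # 1.0: motion at the k-space center, worst case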

Unfortunately, motion cannot be made similar to the other artefacts.

dzenanz commented Dec 2, 2021

What happens if there is only 1 motion? Does it implicitly end at time=1?

So motion around t=0.5 has the greatest effect? How would the effect be quantified? For example, is motion with time=[0.1, 0.2] half as noticeable, or five times less noticeable, than motion with time=[0.45, 0.55] (everything else being equal)? I cannot explore this well using the Slicer plugin like I can for Ghosting; hence I ask for times to be exposed in a similar way to degrees and translation.

romainVala (Contributor) commented

One motion means one change, so two positions are averaged: [0, t] and [t, 1]. (Two motions give three positions: [0, t1], [t1, t2], [t2, 1], and so on.)
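
To illustrate that scheme (a sketch of the bookkeeping only, not TorchIO internals):

def position_intervals(times):
    """n motion times split [0, 1] into n + 1 intervals, one position each."""
    bounds = [0.0, *sorted(times), 1.0]
    return list(zip(bounds[:-1], bounds[1:]))

position_intervals([0.4])       # [(0.0, 0.4), (0.4, 1.0)]: 1 motion, 2 positions
position_intervals([0.3, 0.7])  # [(0.0, 0.3), (0.3, 0.7), (0.7, 1.0)]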

For the Slicer plugin I don't know, but the Motion transform already has times as an argument.

fepegar (Owner) commented Dec 2, 2021

I coded this transform a long time ago, reading Richard Shaw's paper. My version is a bit simplified, but it works. I am away at a conference now, but I'll try to add some explanations to the docs when I'm back.

For now, maybe you can just use a convex combination of the original image and the transformed one:

import torch
import torchio as tio


class MyRandomMotion(tio.RandomMotion):
    def __init__(self, *, intensity, **kwargs):
        self.intensity = intensity
        super().__init__(**kwargs)

    def apply_transform(self, subject):
        # Snapshot the original tensors first: the parent transform modifies
        # the subject's images in place
        originals = {
            name: image.data.clone()
            for name, image in self.get_images_dict(subject).items()
        }
        transformed = super().apply_transform(subject)
        alpha = self.intensity
        for image_name, original_data in originals.items():
            new_data = transformed[image_name].data
            # Convex combination: alpha=0 keeps the original image,
            # alpha=1 keeps the fully motion-corrupted one
            composite_data = new_data * alpha + original_data * (1 - alpha)
            transformed[image_name].set_data(composite_data)
        return transformed


fpg = tio.datasets.FPG()
seed = 42

# intensity=0 should reproduce the original image
transform = MyRandomMotion(intensity=0)
torch.manual_seed(seed)
transform(fpg).t1.plot()

# intensity=1 applies the full motion artefact
transform = MyRandomMotion(intensity=1)
torch.manual_seed(seed)
transform(fpg).t1.plot()

[Figure 1: output with intensity=0; Figure 2: output with intensity=1]

fepegar (Owner) commented Dec 2, 2021

If you like this approach, we can add this behavior to RandomMotion.

romainVala (Contributor) commented

@fepegar, would it be possible to add the Motion transform to Slicer? (The same for the other transforms, not only the random versions.)

dzenanz commented Dec 3, 2021

Adding alpha-blending is a simple and effective way of controlling intensity. Its place is in the Motion transform, so the user only needs to pass the right parameter there, and the right range of parameters to RandomMotion.
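
Hypothetically, usage could then mirror RandomGhosting (this intensity kwarg did not exist in RandomMotion at the time of writing):

import torchio as tio

# 'intensity' here is the proposed kwarg, not an existing RandomMotion parameter
motion = tio.RandomMotion(p=0.2, degrees=5.0, translation=5.0, intensity=(0.2, 0.8))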

Adding the non-random transforms to the Slicer plugin would be useful for exploring the effects of parameters.

dzenanz commented Dec 6, 2021

Full results of my initial attempt to use ghosting and motion are now in OpenImaging/miqa#27 (comment). Quite significant! I hope I will be able to do even better using more augmentation transforms, more formula tuning, and more general experimentation with TorchIO.

fepegar (Owner) commented Dec 6, 2021

Awesome 💯

Happy to help if needed. I'll add the intensity kwarg soon.

romainVala (Contributor) commented

I am not a big fan of this intensity kwarg, because it is not realistic with regard to the MRI acquisition process. But OK, it is easy to add, and maybe it can still be useful.

@fepegar, would it be easy to add the Motion transform to the Slicer plugin? (This would answer the initial need for exploration.) More generally, it may be interesting for other transforms too (i.e. not only the random versions).

About motion, @dzenanz, be aware that this transformation can also induce some misalignment with the original volume, so depending on your application it may or may not be a problem. (What is your application?)

dzenanz commented Dec 7, 2021

My application is image quality assessment, or to rephrase: drawing attention to images which are potentially of low quality. The transform is used for training augmentation, so misalignment should not be a problem.

fepegar (Owner) commented Dec 7, 2021

@fepegar, would it be easy to add the Motion transform to the Slicer plugin? (This would answer the initial need for exploration.) More generally, it may be interesting for other transforms too (i.e. not only the random versions).

It would be easy, yes. But it would take a bit of time, which I don't really have now. Feel free to open a PR!

dzenanz added a commit to dzenanz/miqa that referenced this issue Dec 15, 2021
dzenanz added a commit to dzenanz/miqa that referenced this issue Dec 16, 2021
dzenanz added a commit to dzenanz/miqa that referenced this issue Dec 27, 2021
dzenanz added a commit to dzenanz/miqa that referenced this issue Dec 27, 2021
annehaley pushed a commit to OpenImaging/miqa that referenced this issue Jun 10, 2022
zachmullen pushed a commit to OpenImaging/miqa that referenced this issue Dec 19, 2022