DWI Ghosting IQM #1263

Open · araikes opened this issue Apr 11, 2024 · 4 comments

Comments
araikes (Contributor) commented Apr 11, 2024

Moved this from #1216

We collect ex vivo diffusion data on our mice, and we have some older datasets from when I first started working in rodent imaging. These are 3D diffusion acquisitions, and one of the challenges we didn't realize we were running into until later was cradle motion. So, in a full dataset (n = 140-216 directions), a variable number of directions end up looking like this:
[image: DWI volume with visible ghosting]

or even worse, this:
[image: DWI volume with more severe ghosting]

I don't really want to drop entire acquisitions, nor do I want the tedium of manually (and subjectively) identifying which volumes are the "bad" ones (e.g., how bad is too bad, visually? If the ghosting appears to be only outside the brain, is that OK, or is any ghosting bad? Those are the things I think about). Is there a plausible quantitative IQM (maybe the z-direction derivative referenced above) that might be useful for identifying volumes with this kind of artifact, so that volumes above/below some study-defined threshold could be removed?

Thoughts @oesteban, @arokem, or @mattcieslak?

Originally posted by @araikes in #1216 (comment)

araikes (Contributor, Author) commented Apr 11, 2024

OK, so per @oesteban, GSR may be a suitable metric. I looked at the GSR code from the functional IQMs, and it definitely looks like it would be applicable. I tested it by hand on one of my badly ghosted acquisitions (not the same one as above, because I don't remember which one that was) and got:

GSRx =  0.078
GSRy = -0.03
GSRz = -0.03

So I think it's feasible as a gross screening method. Is there a way to get a value for each volume in the DWI series rather than over the whole acquisition?

Also: is there any guidance on interpretation? Based on the calculation, I assume negative values are "good" (nominal ghosting) and positive values are "bad" (lots of ghosting). Value-wise, I think that makes sense, because this one had L/R ghosting.
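For reference, here is a minimal sketch of what such a per-volume GSR could look like, assuming a binary brain mask and the usual Nyquist-ghost construction (roll the brain mask by half the matrix size along the phase-encoding axis to define the ghost region, as the functional GSR code does). The function name, the file arguments, and the axis convention are illustrative assumptions, not MRIQC API:

import numpy as np
import nibabel as nib

def per_volume_gsr(dwi_file, mask_file, axis=0):
    """Ghost-to-signal ratio for each volume of a 4D DWI series.

    axis is the phase-encoding axis in voxel space (0 = x, 1 = y).
    """
    data = nib.load(dwi_file).get_fdata()                      # 4D: x, y, z, volumes
    mask = (nib.load(mask_file).get_fdata() > 0).astype(int)   # binary brain mask

    # Ghost region (label 1): brain mask shifted by half the FOV along the
    # phase-encoding axis, restricted to voxels outside the brain.
    n2_mask = np.roll(mask, mask.shape[axis] // 2, axis=axis) * (1 - mask)
    # Everything that is neither brain (0) nor ghost (1) is background (2).
    n2_mask = n2_mask + 2 * (1 - n2_mask - mask)

    gsr = np.empty(data.shape[-1])
    for i in range(data.shape[-1]):
        vol = data[..., i]
        ghost = vol[n2_mask == 1].mean() - vol[n2_mask == 2].mean()
        signal = np.median(vol[n2_mask == 0])
        gsr[i] = ghost / signal
    return gsr

Larger values would flag volumes where more signal has leaked into the ghost region along that axis.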

arokem (Collaborator) commented Apr 11, 2024

My hunch, not based on empirical evidence, is that it is very hard to tell whether the ghosting is "only outside the brain"; ghosting generally implies that signal is shifted from one part of the image to another. That said, it's hard to say how this would affect the subsequent results.

Just brainstorming here: I would consider some strategy for finding a threshold, or some other way of eliminating the ghosted volumes, that maximizes split-half reliability of the modeling results (e.g., FA across voxels).
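To make that brainstorm concrete, here is a rough sketch of a split-half FA reliability check using DIPY's tensor model (DIPY is my choice here, not something specified in the thread); bvals is assumed to be a 1D array, bvecs an (N, 3) array, and the b = 0 cutoff of 50 s/mm^2 is arbitrary:

import numpy as np
from dipy.core.gradients import gradient_table
from dipy.reconst.dti import TensorModel

def split_half_fa_reliability(data, bvals, bvecs, mask, seed=0):
    """Correlation of FA maps fit on two random halves of the directions."""
    bvals = np.asarray(bvals)
    bvecs = np.asarray(bvecs)
    rng = np.random.default_rng(seed)
    b0s = np.where(bvals < 50)[0]        # keep the b=0 volumes in both halves
    dwis = np.where(bvals >= 50)[0]
    rng.shuffle(dwis)

    fa_maps = []
    for half in np.array_split(dwis, 2):
        idx = np.sort(np.concatenate([b0s, half]))
        gtab = gradient_table(bvals[idx], bvecs=bvecs[idx])
        fit = TensorModel(gtab).fit(data[..., idx], mask=mask)
        fa_maps.append(np.nan_to_num(fit.fa))

    in_mask = mask > 0
    return np.corrcoef(fa_maps[0][in_mask], fa_maps[1][in_mask])[0, 1]

One could then recompute this after dropping volumes above a candidate GSR threshold and keep the threshold that maximizes the correlation.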

araikes (Contributor, Author) commented Apr 11, 2024

@arokem,

I totally agree that we can't detect what's "in the brain" vs. "not in the brain," and it's something I've really been struggling to deal with. If you have thoughts on an implementation, I'm all ears, because I'd really like to salvage as much of these datasets as possible. GSR at the per-volume level seems like a potential starting point for identifying volumes with really bad ghosting.

For the acquisition with the GSR values I posted above, I made a very quick attempt at getting the per-volume values in the x-direction (I'm sure there's a better way):

import numpy as np

# data: 4D DWI array (x, y, z, 144 volumes)
# n2_mask labels: 0 = brain (signal), 1 = ghost region, 2 = background
arr = np.empty(data.shape[-1])
for i in range(data.shape[-1]):
    vol = data[..., i]
    ghost = np.mean(vol[n2_mask == 1]) - np.mean(vol[n2_mask == 2])
    signal = np.median(vol[n2_mask == 0])
    arr[i] = float(ghost / signal)

which gave this:

array([0.0254461 , 0.03020592, 0.03000278, 0.01969056, 0.01907434,
       0.02176203, 0.02370235, 0.01676315, 0.11850427, 0.11312119,
       0.04393867, 0.03441763, 0.02951876, 0.02017713, 0.02938066,
       0.02988772, 0.09126616, 0.0295992 , 0.01870611, 0.04527032,
       0.03957932, 0.02753512, 0.02688671, 0.10180615, 0.18320737,
       0.04827525, 0.01880521, 0.01969013, 0.052327  , 0.07672231,
       0.02117063, 0.01684991, 0.07624979, 0.07034315, 0.05658643,
       0.17474081, 0.0548525 , 0.03316351, 0.01615111, 0.02948352,
       0.15099148, 0.04033957, 0.0306507 , 0.01865321, 0.02675823,
       0.11505509, 0.30417878, 0.22421478, 0.06688026, 0.03035348,
       0.01736125, 0.01908511, 0.1431348 , 0.03670145, 0.08066228,
       0.04330692, 0.04459517, 0.01413128, 0.10618342, 0.0730062 ,
       0.26070629, 0.25575932, 0.03214935, 0.03091979, **0.01702108**,
       0.03148308, 0.06716611, 0.0509797 , 0.14998302, 0.07697541,
       0.01405759, 0.02809134, 0.03336503, 0.02920538, 0.0338694 ,
       0.0306889 , 0.03184421, 0.03344237, 0.03116571, 0.03514677,
       0.0825516 , 0.05190704, 0.09209404, 0.0728952 , 0.03800996,
       0.03442305, 0.1391662 , 0.19264646, 0.24136403, 0.07826577,
       0.03456154, 0.04536549, 0.04157039, 0.05910262, 0.07773821,
       0.13723494, 0.12875498, 0.13355304, 0.04508204, 0.05597374,
       0.0516644 , 0.26739711, 0.05797101, 0.03449341, 0.06185913,
       0.12320926, 0.24786684, 0.13724162, 0.26722285, 0.03997966,
       0.03364658, 0.07983146, 0.18266194, 0.14100207, 0.07015941,
       0.03370744, 0.07588877, 0.10398191, 0.08624869, 0.13776741,
       0.26963964, 0.24134225, 0.04695312, 0.07523764, 0.05156175,
       0.2051502 , 0.30553676, 0.39078798, 0.04542006, 0.04096449,
       0.0828809 , 0.15030167, 0.36044086, 0.0762827 , 0.16008316,
       0.06093574, 0.04864808, 0.09593662, 0.10126112, **0.2423741** ,
       0.06570209, 0.06367002, 0.03763876, 0.12693915])

I threw asterisks around a couple for comparison.

The smaller value corresponds to an image that looks like this:
[image: volume with the smaller GSR value; minimal visible ghosting]

while the larger asterisked value corresponds to an image like this (note that this is a different b-value as well, hence the intensity change):
[image: volume with the larger GSR value; pronounced ghosting]
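(Not from the thread, just sketching the screening step:) given per-volume values like the array above, a robust cutoff such as median + k * MAD is one way to flag outlier volumes without hand-picking a number; the function name and the default k below are arbitrary, study-defined choices:

import numpy as np

def flag_ghosted_volumes(gsr_per_volume, k=3.0):
    """Indices of volumes whose per-volume GSR exceeds median + k * MAD."""
    gsr = np.asarray(gsr_per_volume)
    med = np.median(gsr)
    mad = np.median(np.abs(gsr - med))
    return np.where(gsr > med + k * mad)[0]

With values like those above, the clearly ghosted volumes (the ~0.2-0.4 range) would likely sit well above such a cutoff, while the ~0.02-0.03 baseline would not.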

arokem (Collaborator) commented Apr 11, 2024

One more thought is to use robust modeling approaches, such as RESTORE, and to identify "bad volumes" as those that are most often eliminated by the robust fit.
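One way this could be tried with DIPY's RESTORE implementation; DIPY does not directly report which measurements the robust fit rejected, so this sketch scores volumes by their residuals against the RESTORE fit instead (the residual-scoring step is my own assumption, not part of RESTORE):

import numpy as np
from dipy.core.gradients import gradient_table
from dipy.denoise.noise_estimate import estimate_sigma
from dipy.reconst.dti import TensorModel

def restore_residual_scores(data, bvals, bvecs, mask):
    """Median absolute residual per volume against a RESTORE tensor fit."""
    bvals = np.asarray(bvals)
    gtab = gradient_table(bvals, bvecs=bvecs)
    sigma = float(np.mean(estimate_sigma(data)))
    fit = TensorModel(gtab, fit_method="RESTORE", sigma=sigma).fit(data, mask=mask)

    # Predict the signal from the robust fit, scaled by the measured b=0 signal.
    s0 = data[..., bvals < 50].mean(axis=-1)
    pred = fit.predict(gtab, S0=s0)

    resid = np.abs(data - pred)
    return np.median(resid[mask > 0], axis=0)   # one score per volume

Volumes with unusually high scores would then be candidates for removal, analogous to directions a robust fit tends to down-weight.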
