
Decompose, and classify components from, FIT-T2* and FIT-S0 #1023

Open

tsalo opened this issue Feb 9, 2024 · 10 comments
Labels
decomposition (issues related to decomposition methods) · discussion (issues that still need to be discussed) · enhancement (issues describing possible enhancements to the project) · question (issues detailing questions about the project or its direction)

Comments

@tsalo
Member

tsalo commented Feb 9, 2024

Summary

I was just thinking that we could maybe get more purely T2*- or S0-based components if we decomposed the FIT-T2* and FIT-S0 images. I realize that FIT-T2* and FIT-S0 are not typically useful on their own, given how noisy they are with so few echoes, but perhaps we can use those components as initial estimates for a standard optcom-based decomposition?

Has anyone tried this? @handwerkerd @dowdlelt?

tsalo added the enhancement, question, discussion, and decomposition labels on Feb 9, 2024
@dowdlelt
Collaborator

dowdlelt commented Feb 9, 2024

I've actually thought about extracting components from the S0 time series, treating it like a noise pool in the GLMdenoise sense.

Well, actually, it was more like: S0 can be a noise pool, but T2* voxels with bad fits to a model (task fMRI) should also be a noise pool, and in that way you find noise sources like motion and inflow, respectively.

Never did anything with this, but I think it's a reasonable alternative.
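
A minimal sketch of that noise-pool idea, assuming the FIT-S0 and optimally combined data are already available as 2D arrays; the variable names and component count here are placeholders, not tedana's implementation:

```python
import numpy as np
from sklearn.decomposition import FastICA

# Placeholder inputs (names are hypothetical):
#   s0_ts     : (n_voxels, n_volumes) FIT-S0 time series
#   optcom_ts : (n_voxels, n_volumes) optimally combined time series
rng = np.random.default_rng(42)
s0_ts = rng.standard_normal((1000, 200))
optcom_ts = rng.standard_normal((1000, 200))

# Temporal ICA on the S0 data: each source time course is a candidate
# "noise pool" regressor (inflow- or motion-like fluctuations).
ica = FastICA(n_components=10, max_iter=5000, random_state=42)
noise_tcs = ica.fit_transform(s0_ts.T)  # (n_volumes, n_components)

# Regress the noise time courses (plus an intercept) out of the optcom data.
design = np.column_stack([np.ones(noise_tcs.shape[0]), noise_tcs])
betas, *_ = np.linalg.lstsq(design, optcom_ts.T, rcond=None)
optcom_denoised = (optcom_ts.T - design @ betas).T  # (n_voxels, n_volumes)
```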

@tsalo
Member Author

tsalo commented Feb 9, 2024

I'm glad it sounds reasonable! Definitely curious about what you mean by T2* voxels with bad fits. Do you mean accounting for the nonlinear model fit somehow? Would T2* fits be worse on task data for some reason?

EDIT: Never mind, I looked up what you meant by "noise pool" and I get what you mean now.

@tsalo
Member Author

tsalo commented Feb 10, 2024

One thing I consistently forget is that you can't provide an initial mixing matrix to FastICA (so annoying!), so my idea of using the T2*/S0 components as priors/starting points for ICA won't work. Still, maybe we could just use them for the decision tree? Or try out a different source separation method?

EDIT: Independent vector analysis supports multiple dependent datasets, so conceivably we could try it on the T2*/S0/optimally-combined set (i.e., as an array with the shape [n_voxels, 3, n_volumes]).
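
A rough sketch of how the three datasets could be stacked for IVA, assuming each is a 2D (n_voxels, n_volumes) array; the IVA call itself is left as a hypothetical placeholder since no specific implementation is settled here:

```python
import numpy as np

# Placeholder inputs, each (n_voxels, n_volumes): FIT-T2*, FIT-S0, and
# optimally combined time series.
n_voxels, n_volumes = 1000, 200
rng = np.random.default_rng(0)
t2s_ts = rng.standard_normal((n_voxels, n_volumes))
s0_ts = rng.standard_normal((n_voxels, n_volumes))
optcom_ts = rng.standard_normal((n_voxels, n_volumes))

# Stack into the (n_voxels, 3, n_volumes) array described above: one
# "dataset" per signal type, with components linked across datasets.
iva_input = np.stack([t2s_ts, s0_ts, optcom_ts], axis=1)
print(iva_input.shape)  # (1000, 3, 200)

# The IVA step itself is left as a placeholder -- e.g. an IVA-G
# implementation would return one linked mixing matrix per dataset:
# mixing_matrices, sources = iva_decompose(iva_input)  # hypothetical function
```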

@tsalo
Member Author

tsalo commented Feb 10, 2024

Just based on some initial testing, ICA components from the 4D T2* have surprisingly low Kappa values, and often have Rho values higher than the Kappa values. ICA components from the 4D S0 exhibit similar patterns. Also, I don't see components showing things like linear trends that often explain most of the variance, so I'm a little confused.
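
For context, Kappa and Rho are (roughly) weight-squared-weighted averages of the voxel-wise TE-dependence and TE-independence F-statistics, so a decomposition where Rho exceeds Kappa implies the components load mostly on S0-like voxels. A minimal sketch with placeholder arrays:

```python
import numpy as np

# Placeholders for one component's voxel-wise statistics:
#   f_r2    : F-statistics for the TE-dependent (R2*/T2*) model
#   f_s0    : F-statistics for the TE-independent (S0) model
#   weights : the component's weight map (e.g., parameter estimates)
rng = np.random.default_rng(1)
f_r2 = rng.chisquare(2, 1000)
f_s0 = rng.chisquare(2, 1000)
weights = rng.standard_normal(1000)

# Kappa and Rho as weight-squared-weighted means of the F-statistics:
# T2*-dominated components get high Kappa, S0-dominated ones high Rho.
w2 = weights ** 2
kappa = np.sum(w2 * f_r2) / np.sum(w2)
rho = np.sum(w2 * f_s0) / np.sum(w2)
print(kappa, rho)
```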

@tsalo
Member Author

tsalo commented Feb 24, 2024

I wonder if concatenating the S0 and T2* time series spatially could help distinguish TE-independent and TE-dependent components.
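
A small sketch of that spatial-concatenation idea, assuming the S0 and T2* time series share the same volume axis; after ICA, each component's weight map can be split back into its S0 and T2* halves to gauge which signal it loads on (array names and component counts are illustrative):

```python
import numpy as np
from sklearn.decomposition import FastICA

# Placeholder (n_voxels, n_volumes) inputs for the S0 and T2* time series.
n_voxels, n_volumes = 1000, 200
rng = np.random.default_rng(2)
s0_ts = rng.standard_normal((n_voxels, n_volumes))
t2s_ts = rng.standard_normal((n_voxels, n_volumes))

# Concatenate along the voxel axis: a TE-independent component should load
# mainly on the S0 half, a TE-dependent one mainly on the T2* half.
concat_ts = np.concatenate([s0_ts, t2s_ts], axis=0)  # (2 * n_voxels, n_volumes)

ica = FastICA(n_components=20, max_iter=5000, random_state=42)
sources = ica.fit_transform(concat_ts.T)  # (n_volumes, n_components)

# Split each component's weight map back into its S0 and T2* halves.
weight_maps = ica.mixing_  # (2 * n_voxels, n_components)
s0_weights, t2s_weights = weight_maps[:n_voxels], weight_maps[n_voxels:]
```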

@handwerkerd
Member

If I'm understanding some of these comments correctly, this is something I've been thinking about for a while, ever since I heard Tülay Adali give a talk on independent vector analysis. The basic approach would be: instead of optimizing for spatial independence (i.e., ICA), optimize components to be primarily T2*- or S0-weighted. I tried this in 2018 using simulated annealing (https://fim.nimh.nih.gov/sites/default/files/handwerker_multiechodenoising_ohbm2018_small.pdf). Conceptually the method worked perfectly, but the contrast-to-noise for my data didn't improve. One reason it didn't improve CNR was that I was using a block-design flashing-checkerboard task, and the only changes that mattered were the ones affecting the component centered on primary visual cortex. I have a new dataset with more diffuse activation, and I'm planning to work on this approach again.

As for using the fit T2* and S0 time series as inputs, I think one issue is that each of those is a fit, so there may be variance from the total signal that ends up in both or in neither. This could be solvable, but it might involve revising how we calculate those fit time series.

Another approach is the MEICA method from the original publication. Instead of doing ICA on the optimally combined time series, the three echoes were concatenated in space and ICA was run on that. I'm not sure why Prantik went away from that approach, but the calculation of metrics was likely simpler when fitting data to a single mixing matrix instead of identifying the differences within a mixing matrix spanning all echoes.

@tsalo
Member Author

tsalo commented Feb 29, 2024

I was looking into IVA as well! I think maybe trying IVA on the S0, T2*, and optimally combined data might produce some interesting components.

> The basic approach would be: instead of optimizing for spatial independence (i.e., ICA), optimize components to be primarily T2*- or S0-weighted.

Yes, this is my dearest wish for tedana.

> As for using the fit T2* and S0 time series as inputs, I think one issue is that each of those is a fit, so there may be variance from the total signal that ends up in both or in neither.

I can see that being a problem, though I hope that what's left out of the model is primarily thermal noise.

> Another approach is the MEICA method from the original publication. Instead of doing ICA on the optimally combined time series, the three echoes were concatenated in space and ICA was run on that.

I don't really understand this approach. There's no weighting in the ICA to make the components particularly TE-dependent or independent, so what would be the effect on the components that get estimated?

I do agree about the simplicity of the model fitting though.

@handwerkerd
Member

> I can see that being a problem, though I hope that what's left out of the model is primarily thermal noise.

I think the error for a curve-fit to 3 values will include estimation and measurement error that goes beyond just thermal noise.

> I don't really understand this approach. There's no weighting in the ICA to make the components particularly TE-dependent or independent, so what would be the effect on the components that get estimated?

From the original MEICA approach, the time series from the same voxel across the three echoes are nearly identical, so when time series from all three echoes are included in one ICA calculation, nearly identical weight maps appear for each echo, but the magnitude of the weights will either be constant (S0) or scale (T2*) across echoes. Instead of fitting the ICA components from the optimally combined weight maps to each echo to calculate kappa & rho, you start with a weight map that's already calculated for each echo as part of ICA. I suspect the problem is that the component maps across echoes are nearly identical rather than identical, but I'm not sure what pushed Prantik away from the approach used in his first publication. (Another issue is that ICA would run on >=3x the number of voxels, which would require more memory and processing time.)
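
To make the constant-vs.-scaling point concrete, a small sketch of the monoexponential model S(TE) = S0 * exp(-TE * R2*): an S0 fluctuation produces the same percent signal change at every echo, while an R2* (i.e., T2*) fluctuation produces a change that grows with TE (echo times and magnitudes below are illustrative):

```python
import numpy as np

tes = np.array([15.0, 30.0, 45.0])   # echo times in ms (illustrative)
s0, r2s = 1000.0, 1.0 / 30.0         # baseline S0 and R2* = 1/T2* (1/ms)


def signal(s0_val, r2s_val):
    """Monoexponential decay model: S(TE) = S0 * exp(-TE * R2*)."""
    return s0_val * np.exp(-tes * r2s_val)


base = signal(s0, r2s)

# A 1% S0 change: identical percent signal change at every TE (TE-independent).
ds0 = (signal(s0 * 1.01, r2s) - base) / base
# A small R2* change: percent signal change grows with TE (TE-dependent).
dr2s = (signal(s0, r2s - 0.0005) - base) / base

print(np.round(ds0 * 100, 2))   # ~[1.0, 1.0, 1.0]
print(np.round(dr2s * 100, 2))  # ~[0.75, 1.51, 2.28]
```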

@tsalo
Member Author

tsalo commented Mar 2, 2024

We did have the concatenated data method implemented via the --sourceTEs parameter, but removed it because it wouldn't work with MA-PCA and because we couldn't figure out why it was done. I dug up the relevant issues/PRs: #203 and #485. It might work better with robustica though?

@handwerkerd
Member

I think it's a viable method, and we did include parts of it in the original MEICA port, but I think it would also require a distinct decision tree, so it would involve both some coding work and some research and examination across datasets. I think the odds of an IVA-style method making a substantial improvement are higher. I'm supportive if someone else wants to focus on bringing this back in and seeing how it works in practice.
