Detrending & tedana (or detrended variance explained) #1054

Open
handwerkerd opened this issue Feb 29, 2024 · 9 comments
Labels
effort: medium (Theoretically <40h total work), enhancement (issues describing possible enhancements to the project), impact: medium (Improves code/documentation functionality for some users)

Comments

@handwerkerd
Member

Summary

I was talking with @afni-rickr and he suggested detrending and possibly regressing out motion parameters before tedana. This would reduce the amount of variance tedana would need to model and potentially make it perform better.

Additional Detail

In thinking this over, I think regressing out motion would be problematic: we'd need to regress it from all echoes, and if the motion artifacts didn't all follow the regressors, it might not actually save degrees of freedom or improve other results. I think there's a better case for detrending. I'm not too worried about high-variance trends making tedana perform worse, but they do make it harder to interpret total variance explained and accepted/rejected variance explained, since the magnitude of the linear drift dominates total variance explained. This makes it hard to see whether rejected components are 30% of the meaningful total variance in one population vs 50% in another.

We might want to test whether tedana results are substantively different if we detrend first. If not, one way to address the above issue would be to add a new metric: detrended variance explained. I'd need to think through the math, but we'd detrend each component's time series and scale each component's "variance explained" by how much detrending reduced its variance. If we want to get extra fancy, we could show the variance explained pie chart and component time series with or without detrending.
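A minimal numpy sketch of what that metric could look like, assuming a `(n_timepoints, n_components)` mixing matrix and per-component variance explained values are already available. The function name, shapes, and the final renormalization are illustrative assumptions, not tedana's actual API or the settled math.

```python
import numpy as np

def detrended_varex(mixing, varex, polyord=3):
    """Scale each component's variance explained by the fraction of its
    time-series variance that survives polynomial detrending, then
    renormalize so the values still sum to 100 (one possible convention)."""
    n_tp, n_comp = mixing.shape
    t = np.linspace(-1, 1, n_tp)
    design = np.vander(t, polyord + 1)  # columns t^polyord ... t, 1
    scaled = np.empty(n_comp)
    for i in range(n_comp):
        ts = mixing[:, i]
        beta, *_ = np.linalg.lstsq(design, ts, rcond=None)
        resid = ts - design @ beta
        # fraction of this component's variance surviving detrending
        scaled[i] = varex[i] * resid.var() / ts.var()
    return 100 * scaled / scaled.sum()
```

With this convention, a component dominated by a slow drift would see its share shrink toward zero, while an oscillatory component keeps most of its share.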

@handwerkerd handwerkerd added enhancement issues describing possible enhancements to the project effort: medium Theoretically <40h total work impact: medium Improves code/documentation functionality for some users labels Feb 29, 2024
@CesarCaballeroGaudes
Contributor

IMHO, detrending prior to tedana might be helpful because ICs explaining a high % of variance often exhibit low-frequency trends very clearly. In our datasets we usually find (two) related components with opposite trends. In contrast, I don't think regressing out realignment parameters prior to tedana would be advantageous. Instead, I'd advocate for using the realignment parameters in the decision tree, as @eurunuela has aimed to implement. A bigger movement-related tree is easier to see in the brain forest!!

@handwerkerd
Member Author

My top priorities are to get realignment parameters into the decision tree (#1021) and get a more stable component estimation (likely #1013). Unless someone else gets to it first, running tedana on detrended data should be fairly easy. One would need to run polynomial detrending on all echoes, but keep the mean. Without any code edits, tedana could be run with and without this detrending to see how it alters the eventual denoised time series.
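A rough sketch of "polynomial detrending on all echoes, but keep the mean", assuming data shaped `(n_voxels, n_echoes, n_timepoints)`. The function name and array layout are assumptions for illustration, not tedana code.

```python
import numpy as np

def detrend_keep_mean(data, polyord=2):
    """Remove a per-voxel, per-echo polynomial trend but add the
    voxel's temporal mean back, so signal scaling across echoes
    is preserved for later TE-dependence fits."""
    n_vox, n_echo, n_tp = data.shape
    t = np.linspace(-1, 1, n_tp)
    design = np.vander(t, polyord + 1)  # includes the constant column
    out = np.empty_like(data, dtype=float)
    for e in range(n_echo):
        y = data[:, e, :].T  # (n_tp, n_vox)
        beta, *_ = np.linalg.lstsq(design, y, rcond=None)
        fit = design @ beta
        # subtract the full polynomial fit, then restore the mean
        out[:, e, :] = (y - fit + y.mean(axis=0)).T
    return out
```

Running tedana once on the raw echoes and once on the output of something like this would allow the with/without comparison described above.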

@afni-rickr

Note that detrending or other regression should happen in the context of censoring, at least if time gets squeezed. If modularity is important, censoring with spike regressors might be preferable to time squeezing.
For detrending, I would think that would still distort the relationship between the echoes. So it would seem like detrending or other regression should happen to the OC input to ICA, rather than to the individual echoes. Though again, I am not sure what happens after that.
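For concreteness, the spike-regressor alternative to time squeezing can be as simple as one indicator column per censored volume; including these columns in any nuisance regression zeroes out those volumes' influence without altering the sampling grid. A minimal sketch (illustrative names, not tedana or AFNI code):

```python
import numpy as np

def spike_regressors(n_tp, censored):
    """Build an (n_tp, len(censored)) matrix with a one-hot column per
    censored timepoint, usable alongside detrending regressors."""
    reg = np.zeros((n_tp, len(censored)))
    for col, tp in enumerate(censored):
        reg[tp, col] = 1.0
    return reg
```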

@CesarCaballeroGaudes
Contributor

I agree that detrending should be done to the OC input

@handwerkerd
Member Author

To make the metric fits work, you'd need to detrend the separate echoes (but retain the mean). To calculate kappa & rho, the component time series are fit to each echo's data. If the OC is detrended but the echoes aren't, then those calculations would definitely break down. That said, we might be losing or corrupting some echo-dependent information if we separately detrend each echo.

One intermediate option would be to fit the polynomial detrending regressors to either the OC data or all echoes, and then fit a single detrending shape to all echoes. That is, if a voxel's trend in the OC data is modeled by y = 1.5x^3 + 0.8x^2 + 2.1x + 5, then for that voxel we'd fit each echo to y = A*(1.5x^3 + 0.8x^2 + 2.1x) + C rather than letting the relationships between the coefficients vary.

I'm not sure if this would actually matter or what the effects of any detrending approach would be, but we might get a better sense when someone tries it and compares the results.
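One way to sketch the shared-shape idea above: estimate the non-constant part of the polynomial trend from the OC time series, then fit each echo with only a scale factor A and intercept C. Function names, shapes, and the single-voxel framing are assumptions for illustration.

```python
import numpy as np

def shared_trend_detrend(oc_ts, echo_ts, polyord=3):
    """oc_ts: (n_tp,) OC time series for one voxel.
    echo_ts: (n_echoes, n_tp) per-echo time series for the same voxel.
    Removes A * trend from each echo, keeping the echo's intercept."""
    n_tp = oc_ts.size
    t = np.linspace(-1, 1, n_tp)
    design = np.vander(t, polyord + 1)        # t^polyord ... t, 1
    beta, *_ = np.linalg.lstsq(design, oc_ts, rcond=None)
    trend = design[:, :-1] @ beta[:-1]        # drop the constant term
    out = np.empty_like(echo_ts, dtype=float)
    X = np.column_stack([trend, np.ones(n_tp)])
    for e, y in enumerate(echo_ts):
        # fit y ~= A * trend + C, then remove only A * trend
        (A, C), *_ = np.linalg.lstsq(X, y, rcond=None)
        out[e] = y - A * trend
    return out
```

This keeps the trend shape common across echoes while letting its amplitude scale per echo, which is the distinction drawn in the comment above.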

@afni-rickr

Does the relationship between the echoes still matter after OC is computed? How BOLD-like a time series looks could be evaluated before OC (and therefore before detrending, etc). At that point, wouldn't it just be a question of variance explained? And if so, the relationship would no longer be needed and the echoes could be detrended.
As a separate question, does that mean you evaluate explained variance from the echoes separately? I figured that would come from OC. I guess I just don't know exactly what metrics are being computed, and on what. Thanks.

@handwerkerd
Member Author

ICA is the limiting step. The ICA component time series need to be mappable back onto the original echoes. If the OC data that go into the ICA step are detrended but the individual echoes are not, then this breaks down.

That said, this goes back to my opening comment. If the goal is to calculate variance explained excluding linear trends, that should be relatively easy. If the goal is to run ICA on detrended data, then there are more complex issues to think through.

@afni-rickr

My somewhat ignorant inclination would be to run ICA on the detrended data. Echo/BOLD relationships could be computed separately, even if it is necessary to have both original and detrended echoes. But it seems that the ICA could be made to be far more sensitive to properties of interest if it were not "distracted" by components such as trends or censored spikes.

@tsalo
Member

tsalo commented Apr 10, 2024

The --gscontrol gsr approach estimates global signal from the OC data and then removes it from each echo, so we do have code that could be repurposed for detrending.
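The general pattern being described — estimate a nuisance time series from the OC data, then regress it out of every echo — can be sketched as below. This is a hedged illustration of the pattern, not tedana's actual `gsr` implementation; array names and shapes are assumptions, and the global-signal column could be swapped for polynomial detrending regressors.

```python
import numpy as np

def remove_oc_nuisance(oc_data, echo_data):
    """oc_data: (n_voxels, n_tp); echo_data: (n_voxels, n_echoes, n_tp).
    Estimate global signal from OC, regress it from each echo,
    keeping each voxel's mean (via the intercept column)."""
    gs = oc_data.mean(axis=0)                 # global signal from OC
    gs = (gs - gs.mean()) / gs.std()
    X = np.column_stack([gs, np.ones(gs.size)])
    out = np.empty_like(echo_data, dtype=float)
    for e in range(echo_data.shape[1]):
        y = echo_data[:, e, :].T              # (n_tp, n_vox)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        # remove only the global-signal contribution, not the intercept
        out[:, e, :] = (y - X[:, :1] @ beta[:1]).T
    return out
```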
