Running Quoala-T on multi-site datasets #37

Open
leotozzi88 opened this issue Apr 26, 2021 · 4 comments

Comments

@leotozzi88

Dear all,
Thank you for releasing this very useful software. I have a question on how to use Quoala-T on multi-site datasets. Is the Quoala-T score assigned for each individual scan, i.e. independent of all other scans in the dataset? Or is some sort of normalization across all datasets done on the input table before running the classifier?
The reason I ask is that I processed with Freesurfer a large number of T1s from different scanners/sites. I then used your scripts to collate all results in one table and then ran Quoala-T (with the Braintime model). What I am seeing is that basically an entire site gets marked as "exclude". Probably this site's sequence is quite different from the others, but it seems unlikely that all scans would be of poor quality (they are hundreds). Should I consider rerunning Quoala-T on each site separately or would this not change things?
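To quantify how lopsided the exclusions are, one way is to tabulate the exclusion rate per site before deciding whether to rerun per site. A minimal sketch (the (site, recommendation) record layout and the "exclude" label are hypothetical; Qoala-T's actual output column names may differ):

```python
from collections import Counter

def exclusion_rate_by_site(records):
    """records: iterable of (site, recommendation) pairs, where
    recommendation is e.g. "include" or "exclude" (hypothetical labels).
    Returns {site: fraction of scans marked "exclude"}."""
    totals = Counter()
    excluded = Counter()
    for site, rec in records:
        totals[site] += 1
        if rec == "exclude":
            excluded[site] += 1
    return {site: excluded[site] / totals[site] for site in totals}
```

If one site sits near 1.0 while the others sit near typical QC failure rates, that points to a site effect rather than genuinely poor scans.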
Thank you.

@larawierenga
Collaborator

larawierenga commented Apr 28, 2021 via email

@leotozzi88
Author

Dear Lara,

Sorry for the late reply. I attempted a bias field correction on the T1s and reran FreeSurfer, but the problem persists. This took a while.
Following your excellent suggestion, I checked the holes and, as you suspected, the dataset with "bad" data has much higher surface-hole measures (rhSurfaceHoles and lhSurfaceHoles). Do you know what could be causing this and how I might attempt a correction? I know I should probably ask the FreeSurfer mailing list about this, but maybe you have some ideas.
Thank you very much!
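For anyone comparing the flagged site against the rest, the surface-hole measures can be summarized per site. A sketch (the (site, lh, rh) tuple layout is hypothetical; the hole counts correspond to the lhSurfaceHoles/rhSurfaceHoles measures in aseg.stats):

```python
from statistics import mean

def mean_holes_by_site(subjects):
    """subjects: iterable of (site, lh_holes, rh_holes) tuples.
    Returns {site: mean total surface holes per subject}."""
    by_site = {}
    for site, lh, rh in subjects:
        by_site.setdefault(site, []).append(lh + rh)
    return {site: mean(totals) for site, totals in by_site.items()}
```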

@leotozzi88
Author

leotozzi88 commented May 20, 2021

Sorry for double posting, but I have been looking more closely at the log files and found an interesting fact that maybe you could help me confirm. FreeSurfer automatically corrects topology defects as part of recon-all using mris_fix_topology, so the defect counts in aseg.stats are holes BEFORE fixing. Here is an example for one of my subjects that Quoala-T assigned a very low score:

# Measure lhSurfaceHoles, lhSurfaceHoles, Number of defect holes in lh surfaces prior to fixing, 81, unitless
# Measure rhSurfaceHoles, rhSurfaceHoles, Number of defect holes in rh surfaces prior to fixing, 90, unitless

And in the file that your R script extracts which goes into the classifier I see for this subject:
lhSurfaceHoles=81
rhSurfaceHoles=90
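For reference, those "# Measure" lines can be pulled directly out of aseg.stats with a small parser. A sketch, assuming only the comma-separated format shown above (measure name, then a free-text description, then the value):

```python
import re

def parse_surface_holes(stats_text):
    """Extract lh/rh surface-hole counts from aseg.stats '# Measure' lines."""
    holes = {}
    for line in stats_text.splitlines():
        m = re.match(
            r"#\s*Measure\s+(lhSurfaceHoles|rhSurfaceHoles)\s*,.*?,\s*(\d+)\s*,",
            line,
        )
        if m:
            holes[m.group(1)] = int(m.group(2))
    return holes
```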

But these are the holes BEFORE fixing. In your original paper, did you use the hole counts before fixing or after fixing (probably 0)? This might be what is messing up the classification.
Thank you!

@larawierenga
Collaborator

larawierenga commented May 20, 2021 via email
