
Poor non-linear registration (SyN) for intra-participant T1 to fMRI EPI #1649

sara110rm opened this issue Jan 11, 2024 · 1 comment
sara110rm commented Jan 11, 2024

Hi

I'm new to ANTs (I mostly use FSL) and have two different fMRI datasets for stroke patients. I need to register each individual's T1 to their EPI (example_func from FSL FEAT). Despite applying a field map in FEAT, there is still some signal dropout and stretching in the output example_func.nii.gz. Because of those distortions, I would like to add a nonlinear registration step to compensate. I have tried many permutations of antsRegistrationSyN.sh, including working through the usual troubleshooting list, e.g. skull-stripping both input images, running bias correction, padding (for Dataset 1), masks, and swapping the fixed and moving images, but none have made any notable difference and the output is still bad.

For Dataset 1, SyN does a particularly bad job at the boundary between the cerebrum and cerebellum. For Dataset 2, SyN shifts the ventricles and internal brain structures away from the midline even though the brain outline is reasonable, and even though both the rigid and affine stages look acceptable (albeit unable to account for the signal dropout etc. that I was trying to improve). I don't know whether the fundamental problem is the same for both datasets and I am missing something obvious, or whether these are different problems. For Dataset 1 the EPI FOV was slightly cut off, so I wondered if that was the problem, hence trying Dataset 2, which has whole-brain coverage but still gives a bad registration. I suspect this is not a brain-lesion problem.

All images look correctly aligned when viewed in ITK-SNAP.

I would really appreciate it if someone could test the attached data and see whether they can replicate the problem. Dataset 2 is the more obvious one to try first.

Thanks very much,

Sara

Command line:
antsRegistrationSyN.sh -d 3 -f Dataset2_example_func_brain.nii.gz -m Dataset2_T1_brain.nii.gz -o Dataset2_ANTS_highres2fmri -t s

antsRegistration --verbose 1 --dimensionality 3 --float 0 --collapse-output-transforms 1 \
  --output [ Dataset2_ANTS_highres2fmri,Dataset2_ANTS_highres2fmriWarped.nii.gz,Dataset2_ANTS_highres2fmriInverseWarped.nii.gz ] \
  --interpolation Linear --use-histogram-matching 0 \
  --winsorize-image-intensities [ 0.005,0.995 ] \
  --initial-moving-transform [ Dataset2_example_func_brain.nii.gz,Dataset2_T1_brain.nii.gz,1 ] \
  --transform Rigid[ 0.1 ] \
  --metric MI[ Dataset2_example_func_brain.nii.gz,Dataset2_T1_brain.nii.gz,1,32,Regular,0.25 ] \
  --convergence [ 1000x500x250x100,1e-6,10 ] --shrink-factors 8x4x2x1 --smoothing-sigmas 3x2x1x0vox \
  --transform Affine[ 0.1 ] \
  --metric MI[ Dataset2_example_func_brain.nii.gz,Dataset2_T1_brain.nii.gz,1,32,Regular,0.25 ] \
  --convergence [ 1000x500x250x100,1e-6,10 ] --shrink-factors 8x4x2x1 --smoothing-sigmas 3x2x1x0vox \
  --transform SyN[ 0.1,3,0 ] \
  --metric CC[ Dataset2_example_func_brain.nii.gz,Dataset2_T1_brain.nii.gz,1,4 ] \
  --convergence [ 100x70x50x20,1e-6,10 ] --shrink-factors 8x4x2x1 --smoothing-sigmas 3x2x1x0vox

  • The full output printed to the terminal is uploaded separately as a text file
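For reference, once the registration succeeds, the resulting transforms can be applied to other images with antsApplyTransforms. A hedged sketch, assuming the output prefix from the command above and a hypothetical T1-space image to move (with --collapse-output-transforms 1, the outputs are prefix0GenericAffine.mat and prefix1Warp.nii.gz):

```shell
# Sketch: push a (hypothetical) T1-space lesion mask into EPI space using the
# transforms produced above. T1_lesion_mask.nii.gz and the output name are
# placeholders; transforms are listed warp-first, since antsApplyTransforms
# applies the stack in reverse order.
antsApplyTransforms -d 3 \
  -i T1_lesion_mask.nii.gz \
  -r Dataset2_example_func_brain.nii.gz \
  -o lesion_mask_in_epi.nii.gz \
  -n NearestNeighbor \
  -t Dataset2_ANTS_highres2fmri1Warp.nii.gz \
  -t Dataset2_ANTS_highres2fmri0GenericAffine.mat
```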

System information

  • OS: Ubuntu
  • OS version: 20.04.5 LTS
  • Type of system: Desktop
  • CPU: Intel Xeon W-2265 @ 3.50 GHz × 24
    (also tested on a MacBook, macOS Catalina 10.15.7)

ANTs version information

  • ANTs code version: ANTs Version: 2.5.0.post18-g0ea8e53
  • ANTs installation type: Compiled from source

Data
Dataset1.zip
Dataset2.zip

cookpa commented Jan 17, 2024

I think skull-stripping (as long as it's accurate) and using the BOLD images as the reference are good things, but because of the difference in resolution, some parameters that depend on the fixed image need to change from their defaults.

Two quite different problems here but I have some ideas that might apply to both.

  1. Start from the padded BOLD image as the fixed image.

  2. Because the images are intra-subject, I would remove --initial-moving-transform [ Dataset2_example_func_brain.nii.gz,Dataset2_T1_brain.nii.gz,1 ]. This option aligns the images by their centers of mass, which works well for inter-subject images of the same modality and FOV, but in this case the physical space of the two images is probably a better starting point. Since you said the rigid stage looks OK, though, this probably isn't the cause of the major failures.

  3. Try to fit the transform model to what you know about the distortions. If the distortions are along one axis due to EPI effects, you can restrict the deformation to that axis with the -g option. Also, check whether the affine stage is really helping: if there's no affine component to the underlying distortion, optimizing an affine transform might make things worse, particularly if there are differences in FOV.

  4. Don't downsample so much. Your reference image is 3 mm in Dataset 1 and 2.4 mm in Dataset 2; shrinking by a factor of 8 may make the data too coarse-grained to be useful. I'd go no higher than 3, or 2 for the 3 mm data. You might also want to smooth less: for example, -f 2x1 -s 2x1mm smooths the T1w by 2 mm rather than by 2 × 3 mm voxels.

  5. You can apply masks per stage: use -x none for the global (rigid/affine) stages, then use your BOLD brain mask in the deformable stage only. It may help to dilate the mask by a few voxels so it captures the edges of the brain.

  6. If you're getting too much deformation, you can add regularization of the total field, e.g. -t SyN[ 0.2,3,1 ] (the third parameter is the total-field smoothing sigma). If you're not getting enough, you can boost the gradient step size independently, e.g. -t SyN[ 0.2,3,0 ].
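Putting several of these suggestions together, a deformable refinement might look like the sketch below. This is an untested outline, not a validated recipe: the output prefix, the dilated-mask filename, and the choice of y as the distortion (phase-encode) axis for -g are all assumptions to adapt to your data.

```shell
# Sketch only: rigid stage without masks or center-of-mass initialization
# (so header physical space is the starting point), then SyN restricted to
# the assumed phase-encode axis (here y -> 0x1x0), masked by a dilated BOLD
# brain mask, with gentler downsampling for the low-resolution fixed image.
# bold_brain_mask_dil.nii.gz and the output prefix are placeholder names.
antsRegistration --verbose 1 --dimensionality 3 --collapse-output-transforms 1 \
  --output [ Dataset2_ANTS_custom,Dataset2_ANTS_customWarped.nii.gz ] \
  --interpolation Linear \
  --winsorize-image-intensities [ 0.005,0.995 ] \
  --transform Rigid[ 0.1 ] \
  --metric MI[ Dataset2_example_func_brain.nii.gz,Dataset2_T1_brain.nii.gz,1,32,Regular,0.25 ] \
  --convergence [ 1000x500,1e-6,10 ] \
  --shrink-factors 2x1 \
  --smoothing-sigmas 1x0vox \
  -x none \
  --transform SyN[ 0.1,3,0 ] \
  --restrict-deformation 0x1x0 \
  --metric CC[ Dataset2_example_func_brain.nii.gz,Dataset2_T1_brain.nii.gz,1,4 ] \
  --convergence [ 70x50x20,1e-6,10 ] \
  --shrink-factors 3x2x1 \
  --smoothing-sigmas 2x1x0vox \
  -x [ bold_brain_mask_dil.nii.gz,NULL ]
```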

Note also that the registration itself won't compensate for the intensity changes introduced by the stretching or compression of signal, so you'd need to deal with that separately.
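On point 5 above: I believe ANTs' ImageMath can do the dilation directly (e.g. ImageMath 3 mask_dil.nii.gz MD mask.nii.gz 2), and scipy.ndimage.binary_dilation is another common route. As a dependency-light illustration of what that dilation does, here is a minimal NumPy sketch; the function name is mine, and it assumes the mask does not touch the volume border, since np.roll wraps around at the edges.

```python
import numpy as np

def dilate_mask(mask, iterations=1):
    """Grow a binary mask by `iterations` voxels using 6-connectivity.

    Illustrative only: np.roll wraps at the array edges, so the mask
    should not touch the volume border (usually true for a brain mask).
    """
    out = np.asarray(mask, dtype=bool)
    for _ in range(iterations):
        grown = out.copy()
        for axis in range(out.ndim):
            # OR in the mask shifted one voxel each way along this axis
            grown |= np.roll(out, 1, axis=axis)
            grown |= np.roll(out, -1, axis=axis)
        out = grown
    return out
```

Growing the BOLD brain mask by 2-3 voxels before passing it to -x gives the deformable stage some room to work at the brain edge.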
