[ENH] Adapt FirstLevelModel to work with surface data objects from new API #4126

Merged: 46 commits, Jun 4, 2024
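In practical terms, the enhancement lets a FirstLevelModel be fitted directly on a SurfaceImage from the experimental surface API, instead of projecting each hemisphere and calling run_glm by hand. The following condensed sketch is assembled from the example diff below; the nilearn.experimental.surface API was still in flux in this PR, and t_r = 2.4 and the events handling are taken from the full localizer example rather than shown on this page, so treat those details as assumptions.

import numpy as np
import pandas as pd

from nilearn import datasets, surface
from nilearn.experimental.surface import SurfaceImage, load_fsaverage
from nilearn.glm.first_level import FirstLevelModel

data = datasets.fetch_localizer_first_level()
t_r = 2.4  # localizer repetition time (assumed from the full example)
events = pd.read_table(data.events)

# Project the fMRI volume onto both hemispheres of the fsaverage5 mesh.
fsaverage5 = load_fsaverage()
texture_left = surface.vol_to_surf(
    data.epi_img, fsaverage5["pial"]["left_hemisphere"]
)
texture_right = surface.vol_to_surf(
    data.epi_img, fsaverage5["pial"]["right_hemisphere"]
)

# Bundle meshes and per-vertex time series into a single surface image.
image = SurfaceImage(
    mesh={
        "lh": fsaverage5["pial"]["left_hemisphere"],
        "rh": fsaverage5["pial"]["right_hemisphere"],
    },
    data={"lh": texture_left.T, "rh": texture_right.T},
)

# FirstLevelModel now accepts the surface image directly.
glm = FirstLevelModel(t_r, hrf_model="glover + derivative").fit(image, events)

# Contrast maps come back as SurfaceImage objects keyed by hemisphere.
design_matrix = glm.design_matrices_[0]
contrast_vec = np.eye(design_matrix.shape[1])[0]  # first regressor only
z_map = glm.compute_contrast(contrast_vec, stat_type="t")
left_values, right_values = z_map.data["lh"], z_map.data["rh"]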
Changes shown are from 34 of the 46 commits.

Commits
1b785fa  Add masker structure (Nov 30, 2023)
5459d6b  Define surfaceimage (Nov 30, 2023)
006e258  Update (Nov 30, 2023)
c4ddf21  Update docstring (Dec 6, 2023)
0da09ae  Skip niimg checks if surface (Dec 6, 2023)
b5a4de7  Fix code breaking tests (Dec 7, 2023)
5a82233  Merge branch 'main' into flm_surfaceimage (Dec 7, 2023)
4a1c25a  Add some tests (Dec 7, 2023)
415fa4f  Add more tests (Dec 7, 2023)
3928ece  Refactor tests (Dec 7, 2023)
a51abfc  Replace parameters for fitted surface masker (Dec 8, 2023)
66def1e  Add test (Dec 8, 2023)
93230df  Remove breakpoint (Dec 8, 2023)
8dbdd1d  Adapt compute contrast (Dec 8, 2023)
7b5b130  Handle one hemisphere (Dec 11, 2023)
429442d  Merge branch 'main' into flm_surfaceimage (Jan 8, 2024)
99bf608  Refactor (Jan 8, 2024)
50f780b  Resolve surface case for default mask (Jan 8, 2024)
3028d55  Fix doctsring (Jan 8, 2024)
a1f2112  Added GLM example (bthirion, Jan 5, 2024)
1698323  Remove default arg (Jan 9, 2024)
f01b667  Fix conflict (Jan 9, 2024)
8167603  Merge branch 'main' into flm_surfaceimage (ymzayek, Jan 9, 2024)
bdf2f23  Formatting (Jan 9, 2024)
5f80c63  Working report with no plots (Jan 9, 2024)
ddc8ef1  Allow black on example (Jan 10, 2024)
7d65d14  Revert "Working report with no plots" (Jan 11, 2024)
4d2e782  NotImplementedError for reports (Jan 11, 2024)
5c6f759  Move test (Jan 11, 2024)
bcac629  Add test (Jan 12, 2024)
3c7ae7f  Clarify doctsring (Jan 16, 2024)
04a14cc  Refactor mask application (Jan 16, 2024)
e08831d  Method name and docstring (Jan 16, 2024)
6126ed8  Merge branch 'main' into flm_surfaceimage (Remi-Gau, Jan 17, 2024)
9f4d8d0  extract example as experimental (Remi-Gau, Jan 26, 2024)
6ba25f8  move example (Remi-Gau, Jan 26, 2024)
455887c  update bids example (Remi-Gau, Jan 26, 2024)
85d6e8a  clean up (Remi-Gau, Jan 26, 2024)
a5de9b3  Merge remote-tracking branch 'upstream/main' into pr/ymzayek/4126 (Remi-Gau, Jan 26, 2024)
661843e  adapt fetcher (Remi-Gau, Jan 26, 2024)
a0138f0  use confounds (Remi-Gau, Jan 26, 2024)
f203e2c  Merge branch 'main' into flm_surfaceimage (Remi-Gau, May 31, 2024)
e259384  Merge branch 'main' into flm_surfaceimage (Remi-Gau, Jun 3, 2024)
a99bf90  fix test (Remi-Gau, Jun 3, 2024)
079bb38  fix examples (Remi-Gau, Jun 3, 2024)
2c0c193  refactor (Remi-Gau, Jun 3, 2024)
examples/04_glm_first_level/plot_localizer_surface_analysis.py (189 changes: 90 additions & 99 deletions)
@@ -10,8 +10,7 @@
 1. A sequence of :term:`fMRI` volumes is loaded.
 2. :term:`fMRI` data are projected onto a reference cortical surface
    (the FreeSurfer template, fsaverage).
-3. A design matrix describing all the effects related to the data is computed.
-4. A :term:`GLM` is applied to the dataset
+3. A :term:`GLM` is applied to the dataset
    (effect/covariance, then contrast estimation).

 The results of the analysis are statistical maps that are defined on the brain
@@ -43,9 +42,9 @@
 # %%
 # Prepare the data.
 # First, the volume-based :term:`fMRI` data.
-from nilearn.datasets import fetch_localizer_first_level
+from nilearn import datasets

-data = fetch_localizer_first_level()
+data = datasets.fetch_localizer_first_level()
 fmri_img = data.epi_img

 # %%
@@ -64,51 +63,46 @@
 # We could use an individual :term:`mesh`,
 # but we first resort to a standard :term:`mesh`,
 # the so-called fsaverage5 template from the FreeSurfer software.
-import nilearn
-
-fsaverage = nilearn.datasets.fetch_surf_fsaverage()
-
 # %%
 # The projection function simply takes the :term:`fMRI` data and the mesh.
 # Note that those correspond spatially, as they are both in :term:`MNI` space.
 from nilearn import surface
-
-texture = surface.vol_to_surf(fmri_img, fsaverage.pial_right)
+from nilearn.experimental.surface import SurfaceImage, load_fsaverage
+
+fsaverage5 = load_fsaverage()
+texture_left = surface.vol_to_surf(
+    fmri_img, fsaverage5["pial"]["left_hemisphere"]
+)
+texture_right = surface.vol_to_surf(
+    fmri_img, fsaverage5["pial"]["right_hemisphere"]
+)
+image = SurfaceImage(
+    mesh={
+        "lh": fsaverage5["pial"]["left_hemisphere"],
+        "rh": fsaverage5["pial"]["right_hemisphere"],
+    },
+    data={
+        "lh": texture_left.T,
+        "rh": texture_right.T,
+    },
+)

 # %%
 # Perform first level analysis
 # ----------------------------
 #
 # This involves computing the design matrix and fitting the model.
-# We start by specifying the timing of :term:`fMRI` frames.
-import numpy as np
-
-n_scans = texture.shape[1]
-frame_times = t_r * (np.arange(n_scans) + .5)
-
-# %%
-# Create the design matrix.
-#
-# We specify an :term:`HRF` model
-# containing the Glover model and its time derivative
-# The drift model is implicitly a cosine basis with a period cutoff at 128s.
-from nilearn.glm.first_level import make_first_level_design_matrix
-
-design_matrix = make_first_level_design_matrix(frame_times,
-                                               events=events,
-                                               hrf_model='glover + derivative'
-                                               )

 # %%
 # Setup and fit GLM.
-#
-# Note that the output consists in 2 variables: `labels` and `fit`.
-# `labels` tags voxels according to noise autocorrelation.
-# `estimates` contains the parameter estimates.
-# We keep them for later :term:`contrast` computation.
-from nilearn.glm.first_level import run_glm
-
-labels, estimates = run_glm(texture.T, design_matrix.values)
+from nilearn.glm.first_level import FirstLevelModel
+
+glm = FirstLevelModel(
+    t_r,
+    slice_time_ref=slice_time_ref,
+    hrf_model="glover + derivative",
+).fit(image, events)

 # %%
 # Estimate contrasts
Expand All @@ -117,36 +111,48 @@
#
# For practical purpose, we first generate an identity matrix whose size is
# the number of columns of the design matrix.
import numpy as np

design_matrix = glm.design_matrices_[0]
contrast_matrix = np.eye(design_matrix.shape[1])

# %%
# At first, we create basic contrasts.
basic_contrasts = dict([(column, contrast_matrix[i])
for i, column in enumerate(design_matrix.columns)])
basic_contrasts = dict(
[
(column, contrast_matrix[i])
for i, column in enumerate(design_matrix.columns)
]
)

# %%
# Next, we add some intermediate contrasts and
# one :term:`contrast` adding all conditions with some auditory parts.
basic_contrasts['audio'] = (
basic_contrasts['audio_left_hand_button_press']
+ basic_contrasts['audio_right_hand_button_press']
+ basic_contrasts['audio_computation']
+ basic_contrasts['sentence_listening'])
basic_contrasts["audio"] = (
basic_contrasts["audio_left_hand_button_press"]
+ basic_contrasts["audio_right_hand_button_press"]
+ basic_contrasts["audio_computation"]
+ basic_contrasts["sentence_listening"]
)

# one contrast adding all conditions involving instructions reading
basic_contrasts['visual'] = (
basic_contrasts['visual_left_hand_button_press']
+ basic_contrasts['visual_right_hand_button_press']
+ basic_contrasts['visual_computation']
+ basic_contrasts['sentence_reading'])
basic_contrasts["visual"] = (
basic_contrasts["visual_left_hand_button_press"]
+ basic_contrasts["visual_right_hand_button_press"]
+ basic_contrasts["visual_computation"]
+ basic_contrasts["sentence_reading"]
)

# one contrast adding all conditions involving computation
basic_contrasts['computation'] = (basic_contrasts['visual_computation']
+ basic_contrasts['audio_computation'])
basic_contrasts["computation"] = (
basic_contrasts["visual_computation"]
+ basic_contrasts["audio_computation"]
)

# one contrast adding all conditions involving sentences
basic_contrasts['sentences'] = (basic_contrasts['sentence_listening']
+ basic_contrasts['sentence_reading'])
basic_contrasts["sentences"] = (
basic_contrasts["sentence_listening"] + basic_contrasts["sentence_reading"]
)

# %%
# Finally, we create a dictionary of more relevant contrasts
@@ -163,68 +169,53 @@
 # but we keep only 3 for simplicity.

 contrasts = {
-    'left - right button press': (
-        basic_contrasts['audio_left_hand_button_press']
-        - basic_contrasts['audio_right_hand_button_press']
-        + basic_contrasts['visual_left_hand_button_press']
-        - basic_contrasts['visual_right_hand_button_press']
+    "left - right button press": (
+        basic_contrasts["audio_left_hand_button_press"]
+        - basic_contrasts["audio_right_hand_button_press"]
+        + basic_contrasts["visual_left_hand_button_press"]
+        - basic_contrasts["visual_right_hand_button_press"]
     ),
-    'audio - visual': basic_contrasts['audio'] - basic_contrasts['visual'],
-    'computation - sentences': (
-        basic_contrasts['computation']
-        - basic_contrasts['sentences']
-    )
+    "audio - visual": basic_contrasts["audio"] - basic_contrasts["visual"],
+    "computation - sentences": (
+        basic_contrasts["computation"] - basic_contrasts["sentences"]
+    ),
 }


 # %%
 # Let's estimate the contrasts by iterating over them.
 from nilearn import plotting
-from nilearn.glm.contrasts import compute_contrast
+
+fsaverage = datasets.fetch_surf_fsaverage()

 for index, (contrast_id, contrast_val) in enumerate(contrasts.items()):
-    print(f" Contrast {index + 1:1} out of {len(contrasts)}: "
-          f"{contrast_id}, right hemisphere")
+    print(
+        f" Contrast {index + 1:1} out of {len(contrasts)}: "
+        f"{contrast_id}, right hemisphere"
+    )
     # compute contrast-related statistics
-    contrast = compute_contrast(labels, estimates, contrast_val,
-                                stat_type='t')
     # we present the Z-transform of the t map
-    z_score = contrast.z_score()
+    z_score = glm.compute_contrast(contrast_val, stat_type="t")

     # we plot it on the surface, on the inflated fsaverage mesh,
     # together with a suitable background to give an impression
     # of the cortex folding.
     plotting.plot_surf_stat_map(
-        fsaverage.infl_right, z_score, hemi='right',
-        title=contrast_id, colorbar=True,
-        threshold=3., bg_map=fsaverage.sulc_right)
-
-# %%
-# Analysing the left hemisphere
-# -----------------------------
-#
-# Note that re-creating the above analysis for the left hemisphere requires
-# little additional code!
-
-# %%
-# We project the :term:`fMRI` data to the mesh.
-texture = surface.vol_to_surf(fmri_img, fsaverage.pial_left)
-
-# %%
-# Then we estimate the General Linear Model.
-labels, estimates = run_glm(texture.T, design_matrix.values)
-
-# %%
-# Finally, we create contrast-specific maps and plot them.
-for index, (contrast_id, contrast_val) in enumerate(contrasts.items()):
-    print(f" Contrast {index + 1:1} out of {len(contrasts)}: "
-          f"{contrast_id}, left hemisphere")
-    # compute contrasts
-    contrast = compute_contrast(labels, estimates, contrast_val,
-                                stat_type='t')
-    z_score = contrast.z_score()
-    # plot the result
+        fsaverage.infl_right,
+        z_score.data["rh"],
+        hemi="right",
+        title=contrast_id,
+        colorbar=True,
+        threshold=3.0,
+        bg_map=fsaverage.sulc_right,
+    )
     plotting.plot_surf_stat_map(
-        fsaverage.infl_left, z_score, hemi='left',
-        title=contrast_id, colorbar=True,
-        threshold=3., bg_map=fsaverage.sulc_left)
+        fsaverage.infl_left,
+        z_score.data["lh"],
+        hemi="left",
+        title=contrast_id,
+        colorbar=True,
+        threshold=3.0,
+        bg_map=fsaverage.sulc_right,
+    )

 plotting.show()
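Commit 7b5b130 above ("Handle one hemisphere") indicates that the new pathway also copes with data restricted to a single hemisphere, although the example never exercises it. A speculative sketch, reusing fmri_img, t_r and events from the example above and assuming that the mesh and data dicts of a SurfaceImage may contain a single hemisphere key (not shown anywhere in this diff):

import numpy as np

from nilearn import surface
from nilearn.experimental.surface import SurfaceImage, load_fsaverage
from nilearn.glm.first_level import FirstLevelModel

fsaverage5 = load_fsaverage()
texture_left = surface.vol_to_surf(
    fmri_img, fsaverage5["pial"]["left_hemisphere"]
)
# Assumed: a SurfaceImage built from the left hemisphere only.
image_left = SurfaceImage(
    mesh={"lh": fsaverage5["pial"]["left_hemisphere"]},
    data={"lh": texture_left.T},
)
glm_left = FirstLevelModel(t_r, hrf_model="glover + derivative").fit(
    image_left, events
)
contrast_vec = np.eye(glm_left.design_matrices_[0].shape[1])[0]
z_left = glm_left.compute_contrast(contrast_vec, stat_type="t")
assert set(z_left.data) == {"lh"}  # only the left hemisphere is present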
nilearn/conftest.py (2 changes: 2 additions & 0 deletions)
@@ -15,8 +15,10 @@
 # TODO This import needs to be removed once the experimental surface API and
 # its pytest fixtures are integrated into the stable API
 from nilearn.experimental.surface.tests.conftest import (  # noqa: F401
+    drop_img_part,
     make_mini_img,
     mini_img,
+    mini_mask,
     mini_mesh,
 )
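The two added names are pytest fixtures that the experimental surface test suite re-exports for the rest of the package; pytest injects them into any test that lists them as arguments. A minimal sketch of how such a test would consume them, assuming mini_img is a small two-hemisphere SurfaceImage as its name and the API above suggest (only the conftest import itself is confirmed by this diff):

# Hypothetical test, e.g. in nilearn/glm/tests/; the fixture contents
# (a small SurfaceImage with "lh"/"rh" data and a matching mask) are
# assumptions based on the fixture names.
def test_mini_img_covers_both_hemispheres(mini_img):
    # pytest resolves `mini_img` via the imports in nilearn/conftest.py.
    assert set(mini_img.data) == {"lh", "rh"}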

Expand Down