[ENH] Adapt FirstLevelModel to work with surface data objects from new API #4126
Merged
Commits (46 total; changes shown from 35 commits):

1b785fa  Add masker structure
5459d6b  Define surfaceimage
006e258  Update
c4ddf21  Update docstring
0da09ae  Skip niimg checks if surface
b5a4de7  Fix code breaking tests
5a82233  Merge branch 'main' into flm_surfaceimage
4a1c25a  Add some tests
415fa4f  Add more tests
3928ece  Refactor tests
a51abfc  Replace parameters for fitted surface masker
66def1e  Add test
93230df  Remove breakpoint
8dbdd1d  Adapt compute contrast
7b5b130  Handle one hemisphere
429442d  Merge branch 'main' into flm_surfaceimage
99bf608  Refactor
50f780b  Resolve surface case for default mask
3028d55  Fix doctsring
a1f2112  Added GLM example
1698323  Remove default arg (bthirion)
f01b667  Fix conflict
8167603  Merge branch 'main' into flm_surfaceimage
bdf2f23  Formatting (ymzayek)
5f80c63  Working report with no plots
ddc8ef1  Allow black on example
7d65d14  Revert "Working report with no plots"
4d2e782  NotImplementedError for reports
5c6f759  Move test
bcac629  Add test
3c7ae7f  Clarify doctsring
04a14cc  Refactor mask application
e08831d  Method name and docstring
6126ed8  Merge branch 'main' into flm_surfaceimage
9f4d8d0  extract example as experimental (Remi-Gau)
6ba25f8  move example (Remi-Gau)
455887c  update bids example (Remi-Gau)
85d6e8a  clean up (Remi-Gau)
a5de9b3  Merge remote-tracking branch 'upstream/main' into pr/ymzayek/4126 (Remi-Gau)
661843e  adapt fetcher (Remi-Gau)
a0138f0  use confounds (Remi-Gau)
f203e2c  Merge branch 'main' into flm_surfaceimage (Remi-Gau)
e259384  Merge branch 'main' into flm_surfaceimage (Remi-Gau)
a99bf90  fix test (Remi-Gau)
079bb38  fix examples (Remi-Gau)
2c0c193  refactor (Remi-Gau)
examples/08_experimental/plot_localizer_surface_analysis.py (239 additions, 0 deletions)
""" | ||
Example of surface-based first-level analysis | ||
============================================= | ||
|
||
.. warning:: | ||
|
||
This example is adapted from | ||
:ref:`sphx_glr_auto_examples_04_glm_first_level_plot_localizer_surface_analysis.py`. # noqa | ||
to show how to use the new tentative API for surface images in nilearn. | ||
|
||
This functionality is provided | ||
by the :mod:`nilearn.experimental.surface` module. | ||
|
||
It is still incomplete and subject to change without a deprecation cycle. | ||
|
||
Please participate in the discussion on GitHub! | ||
|
||
A full step-by-step example of fitting a :term:`GLM` to experimental | ||
data sampled on the cortical surface and visualizing the results. | ||
|
||
More specifically: | ||
|
||
1. A sequence of :term:`fMRI` volumes is loaded. | ||
2. :term:`fMRI` data are projected onto a reference cortical surface | ||
(the FreeSurfer template, fsaverage). | ||
3. A :term:`GLM` is applied to the dataset | ||
(effect/covariance, then contrast estimation). | ||
|
||
The result of the analysis are statistical maps that are defined on the brain | ||
mesh. We display them using Nilearn capabilities. | ||
|
||
The projection of :term:`fMRI` data onto a given brain :term:`mesh` requires | ||
that both are initially defined in the same space. | ||
|
||
* The functional data should be coregistered to the anatomy from which the mesh | ||
was obtained. | ||
|
||
* Another possibility, used here, is to project | ||
the normalized :term:`fMRI` data to an :term:`MNI`-coregistered mesh, | ||
such as fsaverage. | ||
|
||
The advantage of this second approach is that it makes it easy to run | ||
second-level analyses on the surface. On the other hand, it is obviously less | ||
accurate than using a subject-tailored mesh. | ||
|
||
""" | ||
|
||
# %% | ||
# Prepare data and analysis parameters | ||
# ------------------------------------ | ||
# Prepare the timing parameters. | ||
t_r = 2.4 | ||
slice_time_ref = 0.5 | ||
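As a side note on these two parameters: nilearn's first-level machinery uses them to place each scan on a time axis, with `slice_time_ref=0.5` referencing each volume to the middle of its acquisition window. A minimal numpy sketch of that convention (the scan count of 128 is an arbitrary assumption for illustration, not a property of this dataset):

```python
import numpy as np

# Hypothetical number of scans, chosen only for illustration.
n_scans = 128

t_r = 2.4
slice_time_ref = 0.5

# Each scan's reference time: the middle of its acquisition window.
frame_times = (np.arange(n_scans) + slice_time_ref) * t_r

print(frame_times[:3])  # acquisition times of the first three scans, in seconds
```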

# %%
# Prepare the data.
# First, the volume-based :term:`fMRI` data.
from nilearn import datasets

data = datasets.fetch_localizer_first_level()
fmri_img = data.epi_img

# %%
# Second, the experimental paradigm.
import pandas as pd

events_file = data.events
events = pd.read_table(events_file)
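For readers unfamiliar with such event files: they are tab-separated tables with at least `onset`, `duration`, and `trial_type` columns. A small hand-made stand-in with the same structure (the onsets and durations below are invented, and only some condition names from this paradigm are shown):

```python
import pandas as pd

# Hypothetical events table mimicking the structure returned by
# pd.read_table(events_file) for a localizer-style paradigm.
events_sketch = pd.DataFrame(
    {
        "onset": [0.0, 2.4, 8.7],      # seconds from scan start (made up)
        "duration": [1.0, 1.0, 1.0],   # seconds (made up)
        "trial_type": [
            "audio_computation",
            "sentence_reading",
            "visual_computation",
        ],
    }
)
print(events_sketch)
```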

# %%
# Project the :term:`fMRI` image to the surface
# ---------------------------------------------
#
# For this we need a :term:`mesh`
# representing the geometry of the surface.
# We could use an individual :term:`mesh`,
# but here we resort to a standard :term:`mesh`,
# the so-called fsaverage5 template from the FreeSurfer software.

# %%
# We use the new :class:`nilearn.experimental.surface.SurfaceImage`
# to create a surface object instance
# that contains both the mesh
# (here we use the one from the fsaverage5 templates)
# and the BOLD data that we project on the surface.
from nilearn import surface
from nilearn.experimental.surface import SurfaceImage, load_fsaverage

fsaverage5 = load_fsaverage()
texture_left = surface.vol_to_surf(
    fmri_img, fsaverage5["pial"]["left_hemisphere"]
)
texture_right = surface.vol_to_surf(
    fmri_img, fsaverage5["pial"]["right_hemisphere"]
)
image = SurfaceImage(
    mesh={
        "lh": fsaverage5["pial"]["left_hemisphere"],
        "rh": fsaverage5["pial"]["right_hemisphere"],
    },
    data={
        "lh": texture_left.T,
        "rh": texture_right.T,
    },
)
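The `.T` above deserves a note: `vol_to_surf` returns an array with one row per surface vertex (shape `(n_vertices, n_time_points)` for 4D input), and the transpose suggests the experimental `SurfaceImage` is fed time points along the first axis instead. A numpy-only sketch of that shape bookkeeping (the vertex count matches fsaverage5's 10242 vertices per hemisphere; the time-point count is assumed):

```python
import numpy as np

n_vertices = 10242      # vertices per fsaverage5 hemisphere
n_time_points = 128     # assumed scan count, for illustration only

# Stand-in for a vol_to_surf output: one row per vertex.
texture = np.zeros((n_vertices, n_time_points))

# Stand-in for what the example passes to SurfaceImage:
# the transpose, i.e. one row per time point.
data_for_surface_image = texture.T

print(data_for_surface_image.shape)  # (128, 10242)
```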

# %%
# Perform first level analysis
# ----------------------------
#
# We can now simply run a GLM by directly passing
# our :class:`nilearn.experimental.surface.SurfaceImage` instance
# as input to FirstLevelModel.fit.
#
# Here we use an :term:`HRF` model
# containing the Glover model and its time derivative.
# The drift model is implicitly a cosine basis with a period cutoff at 128 s.
from nilearn.glm.first_level import FirstLevelModel

glm = FirstLevelModel(
    t_r,
    slice_time_ref=slice_time_ref,
    hrf_model="glover + derivative",
).fit(image, events)
(Inline review comment on the line above: "this change compare to the original example")
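The `hrf_model="glover + derivative"` setting asks for the canonical double-gamma hemodynamic response plus its temporal derivative. As a rough, numpy-only illustration of what such a response looks like (the constants below are a common textbook parameterization with a peak near 5 s and an undershoot near 15 s, not nilearn's internals):

```python
import numpy as np
from math import gamma

t = np.linspace(0, 30, 301)  # time axis in seconds, 0.1 s resolution

# Double-gamma HRF sketch: positive peak minus a smaller, later undershoot.
peak = t**5 * np.exp(-t) / gamma(6)
undershoot = t**15 * np.exp(-t) / gamma(16)
hrf = peak - undershoot / 6.0

# Finite-difference temporal derivative, the analogue of the
# "+ derivative" term in the model specification.
hrf_derivative = np.gradient(hrf, t)

print(t[hrf.argmax()])  # peak time, close to 5 s
```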

# %%
# Estimate contrasts
# ------------------
# Specify the contrasts.
#
# For practical purposes, we first generate an identity matrix whose size is
# the number of columns of the design matrix.
import numpy as np

design_matrix = glm.design_matrices_[0]
contrast_matrix = np.eye(design_matrix.shape[1])

# %%
# At first, we create basic contrasts.
basic_contrasts = {
    column: contrast_matrix[i]
    for i, column in enumerate(design_matrix.columns)
}
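The identity-matrix trick gives each design-matrix column a one-hot contrast vector, so applying a basic contrast to the fitted coefficients simply reads off that column's estimate, and summing such vectors weights several conditions at once. A toy numpy check (the three-column design and the coefficient values are invented):

```python
import numpy as np

# Invented miniature design: two conditions plus one drift regressor.
columns = ["audio_computation", "visual_computation", "drift_1"]
contrast_matrix = np.eye(len(columns))
basic = {name: contrast_matrix[i] for i, name in enumerate(columns)}

betas = np.array([2.0, -1.0, 0.5])  # made-up GLM coefficients

# A one-hot contrast selects a single coefficient ...
print(basic["visual_computation"] @ betas)  # -1.0

# ... and a summed contrast weights several conditions together,
# as done for the "audio"/"visual" contrasts in the example.
combined = basic["audio_computation"] + basic["visual_computation"]
print(combined @ betas)  # 1.0
```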

# %%
# Next, we add some intermediate contrasts and
# one :term:`contrast` adding all conditions with an auditory component.
basic_contrasts["audio"] = (
    basic_contrasts["audio_left_hand_button_press"]
    + basic_contrasts["audio_right_hand_button_press"]
    + basic_contrasts["audio_computation"]
    + basic_contrasts["sentence_listening"]
)

# one contrast adding all conditions involving instruction reading
basic_contrasts["visual"] = (
    basic_contrasts["visual_left_hand_button_press"]
    + basic_contrasts["visual_right_hand_button_press"]
    + basic_contrasts["visual_computation"]
    + basic_contrasts["sentence_reading"]
)

# one contrast adding all conditions involving computation
basic_contrasts["computation"] = (
    basic_contrasts["visual_computation"]
    + basic_contrasts["audio_computation"]
)

# one contrast adding all conditions involving sentences
basic_contrasts["sentences"] = (
    basic_contrasts["sentence_listening"] + basic_contrasts["sentence_reading"]
)

# %%
# Finally, we create a dictionary of more relevant contrasts:
#
# * 'left - right button press': probes motor activity
#   in left versus right button presses.
# * 'audio - visual': probes the difference in activity between listening
#   to some content and reading the same type of content
#   (instructions, stories).
# * 'computation - sentences': looks at the activity
#   when performing a mental computation task versus simply reading sentences.
#
# Of course, we could define other contrasts,
# but we keep only these 3 for simplicity.

contrasts = {
    "left - right button press": (
        basic_contrasts["audio_left_hand_button_press"]
        - basic_contrasts["audio_right_hand_button_press"]
        + basic_contrasts["visual_left_hand_button_press"]
        - basic_contrasts["visual_right_hand_button_press"]
    ),
    "audio - visual": basic_contrasts["audio"] - basic_contrasts["visual"],
    "computation - sentences": (
        basic_contrasts["computation"] - basic_contrasts["sentences"]
    ),
}

# %%
# Let's estimate the contrasts by iterating over them.
from nilearn import plotting

fsaverage = datasets.fetch_surf_fsaverage()

for index, (contrast_id, contrast_val) in enumerate(contrasts.items()):
    print(f"  Contrast {index + 1} out of {len(contrasts)}: {contrast_id}")
    # compute contrast-related statistics
    z_score = glm.compute_contrast(contrast_val, stat_type="t")

    # We plot each hemisphere on the inflated fsaverage mesh,
    # together with a suitable background to give an impression
    # of the cortex folding.
    plotting.plot_surf_stat_map(
        fsaverage.infl_right,
        z_score.data["rh"],
        hemi="right",
        title=contrast_id,
        colorbar=True,
        threshold=3.0,
        bg_map=fsaverage.sulc_right,
    )
    plotting.plot_surf_stat_map(
        fsaverage.infl_left,
        z_score.data["lh"],
        hemi="left",
        title=contrast_id,
        colorbar=True,
        threshold=3.0,
        bg_map=fsaverage.sulc_left,  # left-hemisphere background map
    )

plotting.show()
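Note that `threshold=3.0` only affects the display: vertices whose |z| falls below 3 are left unpainted, showing the background map instead. A numpy sketch of that masking on fake per-vertex scores (the vertex count matches fsaverage5; the data are random, with an artificial "activation" added to the first 50 vertices):

```python
import numpy as np

rng = np.random.default_rng(0)
z_values = rng.normal(size=10242)  # fake per-vertex z-scores
z_values[:50] += 5.0               # an artificial "activation" cluster

threshold = 3.0
# Boolean mask of vertices that would actually be painted.
suprathreshold = np.abs(z_values) >= threshold

print(int(suprathreshold.sum()), "of", z_values.size, "vertices shown")
```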