
[DOC] Add comparison of meaning differences of GLM between different libraries #4287

Open · wants to merge 8 commits into main

Conversation

tpremrud

Changes proposed in this pull request:

  • Added meaning_difference.rst as a documentation file comparing GLM analysis levels across different libraries (i.e., FSL and SPM)
  • Modified the index page of the GLM documentation module to include meaning_difference.rst in the documentation structure (i.e., the drop-down list).

Note: I am still uncertain whether some of the wording and content are redundant (e.g., runs vs. sessions, which I am fairly certain are the same, but which I included for clarity); please feel free to make any suggestions! :)

Contributor

👋 @tpremrud Thanks for creating a PR!

Until this PR is ready for review, you can include the [WIP] tag in its title, or leave it as a GitHub draft.

Please make sure it is compliant with our contributing guidelines. In particular, be sure it checks the boxes listed below.

  • PR has an interpretable title.
  • PR links to a GitHub issue with the mention Closes #XXXX (see our documentation on PR structure)
  • Code is PEP8-compliant (see our documentation on coding style)
  • Changelog or what's new entry in doc/changes/latest.rst (see our documentation on PR structure)

For new features:

  • There is at least one unit test per new function / class (see our documentation on testing)
  • The new feature is demoed in at least one relevant example.

For bug fixes:

  • There is at least one test that would fail under the original bug conditions.

We will review it as quickly as possible; feel free to ping us with questions if needed.

tpremrud and others added 2 commits February 26, 2024 20:19
Co-authored-by: Taylor Salo <tsalo90@gmail.com>
Co-authored-by: Taylor Salo <tsalo90@gmail.com>
@Remi-Gau
Collaborator

Thanks for starting this @tpremrud: I can already tell this will be useful!

A couple of general comments:

I would suggest mentioning that we use the BIDS definition of "run" in this document (and, in general, we are trying to make this usage consistent throughout nilearn).

Also as far as I can tell all the mentions of the word "session" in your text should be replaced by "run".

https://bids-specification.readthedocs.io/en/latest/common-principles.html#definitions

Run - An uninterrupted repetition of data acquisition that has the same acquisition parameters and task (however events can change from run to run due to different subject response or randomized nature of the stimuli). Run is a synonym of a data acquisition. Note that "uninterrupted" may look different by modality due to the nature of the recording. For example, in MRI or MEG, if a subject leaves the scanner, the acquisition must be restarted. For some types of PET acquisitions, a subject may leave and re-enter the scanner without interrupting the scan.

I do not think we need the whole definition but at least the main bit and then a link to the BIDS glossary:

SPM uses the same notation as nilearn for analysis levels, with a note that a session still refers to an imaging session or a run, and within a run there can be multiple conditions (e.g., congruent and incongruent).
In this case, `SPM`_ provides `tutorials`_ and documentation, including `lectures`_, from which one can learn to analyze their own fMRI data, with the analysis levels meaning the following:

* `First-level analysis in SPM`_: Analyze across sessions for a subject (i.e., more than one session of one subject)
Collaborator

I suspect that we should mention somewhere that it is typical in the SPM workflow to put all runs into a single design matrix, whereas nilearn typically gives you one design matrix per run.
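To make the contrast concrete, here is a schematic sketch of the two conventions using plain NumPy (not the nilearn or SPM APIs); the dimensions and the single boxcar regressor are toy assumptions for illustration only:

```python
import numpy as np

# Toy assumption: 10 scans per run, one task regressor per run.
n_scans = 10
boxcar = np.zeros(n_scans)
boxcar[2:5] = 1.0  # stimulus "on" for a few scans (no HRF, for simplicity)

# nilearn-style: one design matrix per run (task regressor + intercept).
run_dm = np.column_stack([boxcar, np.ones(n_scans)])
per_run_matrices = [run_dm, run_dm.copy()]  # e.g., two identical runs

# SPM-style: all runs stacked into a single matrix, with run-specific
# columns laid out block-diagonally so that one run's regressors do not
# model another run's data.
concatenated_dm = np.zeros((2 * n_scans, 4))
concatenated_dm[:n_scans, :2] = per_run_matrices[0]
concatenated_dm[n_scans:, 2:] = per_run_matrices[1]

print([m.shape for m in per_run_matrices])  # [(10, 2), (10, 2)]
print(concatenated_dm.shape)               # (20, 4)
```

Either layout encodes the same runs; the difference is only whether the GLM is specified per run or once for the whole concatenated dataset.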

Author

Gotcha @Remi-Gau, that's a really great point! However, I'm wondering whether I understand the concept correctly: if there are different conditions, do they still count as a single run as long as the imaging acquisition is continuous, uninterrupted, and uses the same parameters?

Looking at a nilearn example, it seems there are different conditions within a design matrix; but you said that nilearn gives one design matrix per run, so I'm not sure whether I understand the concept of a run correctly.

But SPM really does put all the runs into a single design matrix.

Thank you!!
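For what it's worth, the two points are compatible: one continuous acquisition is one run, and each condition within that run becomes its own column of that run's design matrix. A toy sketch in plain NumPy (invented onsets, event impulses instead of HRF-convolved regressors):

```python
import numpy as np

# Hypothetical single run with two conditions ("congruent", "incongruent"):
# one uninterrupted acquisition, so it is one run, but the design matrix
# for that run gets one regressor (column) per condition.
n_scans = 12
onsets = {"congruent": [1, 7], "incongruent": [4, 10]}  # toy scan indices

columns = {}
for condition, idx in onsets.items():
    reg = np.zeros(n_scans)
    reg[idx] = 1.0  # impulse at each event onset (no HRF, for simplicity)
    columns[condition] = reg

# One run -> one design matrix: a column per condition plus a constant.
design_matrix = np.column_stack(list(columns.values()) + [np.ones(n_scans)])
print(design_matrix.shape)  # (12, 3)
```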


codecov bot commented Feb 27, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 92.12%. Comparing base (abb80ff) to head (729dd92).
Report is 21 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main    #4287      +/-   ##
==========================================
+ Coverage   91.85%   92.12%   +0.27%     
==========================================
  Files         144      143       -1     
  Lines       16419    16438      +19     
  Branches     3434     3444      +10     
==========================================
+ Hits        15082    15144      +62     
+ Misses        792      749      -43     
  Partials      545      545              
Flag Coverage Δ
macos-latest_3.10_test_plotting 91.92% <ø> (?)
macos-latest_3.11_test_plotting 91.92% <ø> (+0.06%) ⬆️
macos-latest_3.9_test_plotting 91.88% <ø> (?)
ubuntu-latest_3.10_test_plotting 91.92% <ø> (+0.06%) ⬆️
ubuntu-latest_3.11_test_plotting 91.92% <ø> (?)
ubuntu-latest_3.12_test_plotting 91.92% <ø> (?)
ubuntu-latest_3.12_test_pre 91.92% <ø> (?)
ubuntu-latest_3.8_test_min 68.87% <ø> (?)
ubuntu-latest_3.8_test_plot_min 91.63% <ø> (?)
ubuntu-latest_3.8_test_plotting 91.88% <ø> (?)
ubuntu-latest_3.9_test_plotting 91.88% <ø> (?)
windows-latest_3.10_test_plotting 91.89% <ø> (?)
windows-latest_3.11_test_plotting 91.89% <ø> (?)
windows-latest_3.12_test_plotting 91.89% <ø> (?)
windows-latest_3.8_test_plotting 91.85% <ø> (?)
windows-latest_3.9_test_plotting 91.86% <ø> (?)

Flags with carried forward coverage won't be shown.



Successfully merging this pull request may close these issues.

Add explanations for 1st and 2nd level for glm