
Releases: ME-ICA/tedana

24.0.1

10 May 03:19
12aea7f

Release Notes

Summary

  • We use bokeh to generate interactive figures in our html report. bokeh v3.4.0 included some under-documented changes that caused tedana to crash and broke some of the report's interactivity. This is fixed in #1068
  • We create an adaptive mask that lists the number of "good echoes" in each voxel. This is used so that a voxel with a single good echo is retained, but steps that involve fitting values across echoes are limited to voxels with more good echoes. We noticed a few places where our description of how the adaptive mask was created didn't match what was actually happening. These are bugs that needed to be fixed, but our underlying thresholds are arbitrary. If these fixes adversely affect which voxels are retained, please speak up and we can look into tweaking thresholds or adding other options.
    • The adaptive mask uses percentiles across all voxel values as part of a threshold calculation. Only voxels within a general or user-supplied mask were supposed to be used, but we were applying that mask after calculating the percentiles. This is now fixed (#1060). In practice this will slightly raise thresholds and reduce the number of voxels that survive the adaptive mask. An example of how this could alter the mask is here.
    • The adaptive mask was intended to store the index of the last good echo. That is, a voxel where echoes 1 and 3 are above threshold should be 3 in the adaptive mask even if echo 2 is below threshold. The code was just counting the number of good echoes and storing 2. This is fixed in #1061
  • As part of the adaptive masking bug fixes, we also added another option for calculating an adaptive mask (#1057) that removes voxels where later echoes have larger values than earlier echoes. This might be useful for typical multi-echo data, but could cause problems if echoes are temporally close and there is a downward trend in which not every value decreases. As part of this addition, there is now a --masktype option that accepts dropout (our current and default method), decay (this new method), or both; see the example command after this list.
  • We have also added several long-requested visualizations to our html report. Descriptions of the new visualizations are in our documentation. The additions include:
    • A visualization of the adaptive mask to show which voxels are retained in the optimally combined image and which are used for the T2* and S0 fits as part of the ICA denoising process. #1073
    • Mean T2* and S0 fits are calculated and used as weights for the optimal combination of echoes. We were not calculating or saving the fit quality. The root mean square error (RMSE) for these fits is now saved and presented in our report. #1044
    • As part of previously added visualizations and these new ones, we belatedly realized we were using a parameter that was only added to nilearn in v0.10.3, so we've raised the minimum accepted nilearn version number. #1094
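As a quick illustration of the new --masktype option mentioned above, here is a hedged example command. The file names and echo times are placeholders, and passing both methods as space-separated values is an assumption based on the "or both" wording; check tedana --help for the exact syntax.

```bash
# Build the adaptive mask using both criteria: "dropout" (the current default)
# and the new "decay" check that removes voxels where later echoes have
# larger values than earlier echoes.
tedana -d echo-1.nii.gz echo-2.nii.gz echo-3.nii.gz \
       -e 14.5 38.5 62.5 \
       --masktype dropout decay
```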

🐛 Bug Fixes

  • Limit current adaptive mask method to brain mask by @tsalo in #1060
  • Identify the last good echo in adaptive mask instead of sum of good echoes by @tsalo in #1061
  • FIX load user-defined mask as expected by plot_adaptive_mask by @mvdoc in #1079
  • Fix dynamic reports with bokeh 3.4.0 problems by @handwerkerd in #1068
  • minimum nilearn to 0.10.3 by @handwerkerd in #1094

Changes

  • Add --masktype option to control adaptive mask method by @tsalo in #1057
  • Add adaptive mask plot to report by @tsalo in #1073
  • Output RMSE map and time series for decay model fit by @tsalo in #1044
  • [DOC] desc-optcomDenoised -> desc-denoised by @mvdoc in #1080
  • docs: add mvdoc as a contributor for code, bug, and doc by @allcontributors in #1082

Full Changelog: 24.0.0...24.0.1

24.0.0

22 Mar 14:57
f084dc4

Release Notes

Summary

We have continued to make under-the-hood changes and improvements to documentation.
Several key changes may be noticeable to users.

  • By default, tedana has been saving 4D volumes of the high-kappa components Accepted_bold.nii.gz
    and the low-kappa components Rejected_bold.nii.gz even though very few people use them and they
    take up a lot of space. These will now only be saved if the program is run with --verbose.
    Additionally, our final denoised time series was called desc-optcomDenoised_bold.nii.gz and this
    created confusion. It is now called desc-denoised_bold.nii.gz. This will break pipelines that
    looked for a file with the previous name (see the compatibility sketch after this list).
    #1033
  • We noticed a small difference between the decision tree implemented in
    MEICA v2.5 and the tree we were calling kundu.
    We have renamed our existing tree tedana_orig and there is now a meica tree that should match the
    MEICA method. In practice, the results will be identical or meica will accept additional components.
    The additionally accepted components can have substantial variance and, upon visual inspection, usually
    looked like they should have been rejected. Therefore, we've kept the same default, but give both options
    to users. #952
  • Different metrics, like kappa and rho, are calculated for each ICA component. While the code allowed
    for a range of different metrics, the list that was calculated when tedana was run was impossible to
    change without editing the code. The metrics that were already specified in the decision tree json files
    will now be the ones calculated. The actual metric calculations still need to be defined within the code,
    but this change makes it practical to add a range of additional metrics that can vary by decision tree.
    #969
  • The tedana_report.html file now includes the mean T2* and S0 maps used in calculations
    #1040, consistent orientations for all images of
    brain slices #1045, version numbers for key python
    packages used during execution #1014, and the reference
    list is now properly rendered #1001.
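For pipelines that consumed the old denoised file name, a minimal compatibility sketch like the one below may ease the transition. Only the old and new suffixes come from the note above; the output-directory layout, glob pattern, and function name are assumptions for illustration.

```python
from pathlib import Path


def find_denoised(out_dir):
    """Return the denoised time series from a tedana output directory,
    accepting both the 24.0.0 naming and the older optcomDenoised naming."""
    out_dir = Path(out_dir)
    for suffix in (
        "desc-denoised_bold.nii.gz",        # 24.0.0 and later
        "desc-optcomDenoised_bold.nii.gz",  # earlier releases
    ):
        matches = sorted(out_dir.glob(f"*{suffix}"))
        if matches:
            return matches[0]
    raise FileNotFoundError(f"No denoised output found in {out_dir}")
```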

Full Changelog: 23.0.2...24.0.0

23.0.2

21 Nov 14:08
1c3f93e

Summary

These changes include a lot of documentation updates, logging of python and software versions in tedana_report (#747), a fix for a bug where one could not specify PCA variance explained from the command line interface (#950), stricter code style rules along with pre-commit hooks, code cleanup in several places, including several spots where we were unnecessarily using old versions of python modules (#998), and updates that allow tedana to run with python version 3.12 (#999).
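As an illustration of the #950 fix, the desired PCA variance explained can now be requested from the command line. This is a hedged sketch that assumes --tedpca accepts a fraction between 0 and 1; the file names and echo times are placeholders.

```bash
# Keep enough PCA components to explain 90% of the variance
tedana -d echo-1.nii.gz echo-2.nii.gz echo-3.nii.gz -e 14.5 38.5 62.5 --tedpca 0.9
```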

Full Changelog: 23.0.1...23.0.2

23.0.1

11 May 18:22
e94b03a

Release Notes

Most of these changes were made for v23.0.0, but the package did not build for pip, so the descriptive release notes are stored with this version.

This release changes many internal aspects of the code, will make future improvements easier, and will hopefully make it easier for more people to understand their results and contribute. The denoising results should be identical. Right before releasing this new version, we released version 0.0.13, which is the last version of the older code.

User-facing changes

  • Breaking change: tedana can no longer be used to manually change component classifications. A separate program, ica_reclassify, can be used for this. This makes it easier for programs like Rica to output a list of component numbers to change and to then change them with ica_reclassify. Internally, a massive portion of the tedana workflow code was a mess of conditional statements that existed just so that this functionality could be retained within tedana. By separating out ica_reclassify, the tedana code is more comprehensible and adaptable.
  • Breaking change: No components are classified as ignored anymore. The ignored classification has long confused users. It was intended to identify components with such low variance that it was not worth deciding whether to lose a statistical degree of freedom by rejecting them. In practice, they were treated identically to accepted components. Now they are classified as accepted and tagged as Low variance or Borderline Accept. This classification_tag now appears in the html report of the results and in the component table file.
  • Breaking change: In the component table file classification_tag has replaced rationale. Since the tags use words and one can assign more than one tag to each component, these are both more informative and more flexible than the older rationale numerical codes.
  • It is now possible to select different decision trees for component selection using the --tree option. The default tree is kundu, which should replicate the current outputs. We also include minimal, a simpler tree that is intended to provide more consistent results across a study, but it needs more testing and validation and may still change (see the example invocations after this list). Flow charts for these two options are here.
  • Anyone can create their own decision tree. If one is using metrics that are already calculated, like kappa and rho, and doing greater/less than comparisons, one can make a decision tree with a user-provided json file and the --tree option. More complex calculations might require editing the tedana python code. This change also means any metric that has one value per component can be used in a selection process. This makes it possible to combine the multi-echo metrics used in tedana with other selection metrics, such as correlations to head motion. The documentation includes instructions on building and understanding this component selection process.
  • Additional files are saved which store key internal calculations and record which steps changed the accept vs reject classification of each component. The documentation includes descriptions of the newly outputted files and their contents. These include:
    • A registry of all files outputted by tedana. This allows for multiple file naming methods and means internal and external programs that want to interact with the tedana outputs just need to load this file.
    • A file of all the metrics calculated across components, such as the kappa and rho elbow thresholds
    • A decision tree file which records the exact decision tree that was run on the data and includes metrics calculated and component classifications changed in each step of the process
    • A component status table that summarizes each component's classification at each step of the decision tree
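As an example of the invocations described above, here is a hedged sketch of selecting a tree from the command line. The tree names kundu and minimal and the user-provided json form come from the notes above; the file names and echo times are placeholders, and passing the json path directly to --tree is an assumption.

```bash
# Use the built-in "minimal" tree instead of the default "kundu" tree
tedana -d echo-1.nii.gz echo-2.nii.gz echo-3.nii.gz -e 14.5 38.5 62.5 --tree minimal

# Or supply a custom decision tree defined in a user-provided json file
tedana -d echo-1.nii.gz echo-2.nii.gz echo-3.nii.gz -e 14.5 38.5 62.5 --tree my_tree.json
```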

Under-the-hood changes

  • The component classification process that designates components as “accepted” or “rejected” was completely rewritten so that every step in the process is modular and the inputs and outputs of every step are logged.
  • Moved towards using the terminology of “Component Selection” rather than “Decision Tree” to refer to the code that’s part of the selection process. “Decision Tree” is now used more specifically to refer to the steps used to classify components.
    • A ComponentSelector object was created to include common elements of the selection process, including the component_table and information about what happens at every step of the decision tree. Additional information that will be stored in ComponentSelector and saved to files (as described above) includes component_table, cross_component_metrics, component_status_table, and tree
    • The new class is defined in ./selection/component_selector.py, the functions that define each node of a decision tree are in ./selection/selection_nodes.py, and some key common functions used by selection_nodes are in ./selection/selection_utils.py
    • By convention, functions in selection_nodes.py that can change component classifications begin with dec_ (for decision) and functions that calculate cross_component_metrics begin with calc_
    • A key function in selection_nodes.py is dec_left_op_right, which can be used to change classifications based on the intersection of 1-3 boolean statements. This means most of the decision tree consists of modular functions that calculate cross_component_metrics followed by tests of boolean conditional statements.
  • When defining a decision tree a list of necessary_metrics are required and, when a tree is executed, the used_metrics are saved. This information is both a good internal check and can potentially be used to calculate metrics as defined in a tree rather than separately specifying the metrics to calculate and the tree to use.
  • io.py is now used to output a registry (default is desc-tedana_registry.json) which can be used by other programs to read in files generated by tedana (e.g. load the optimally combined time series and ICA mixing matrix from the output of tedana rather than needing to input the name of each file separately); see the sketch after this list.
  • Some terminology changes, such as using component_table instead of comptable in code
  • integration tests now store testing data in .testing_data_cache and only download data if the data on OSF was updated more recently than the local data.
  • Nearly 100% of the new code and 98% of all tedana code is covered by integration testing.
  • Tedana python package management now uses pyproject.toml
  • Possible breaking change: Minimum python version is now 3.8 and minimum pandas version is now 2.0 (this might cause problems if the same python environment is used for packages that require older versions of pandas)
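As a sketch of how another program might use the registry mentioned above, the snippet below assumes the registry is a flat JSON mapping from output descriptions to file names; the "ICA mixing tsv" key is hypothetical, so list the registry's keys to see what your tedana version actually records.

```python
import json
from pathlib import Path

out_dir = Path("tedana_output")

# The registry maps human-readable output names to the files tedana wrote.
with open(out_dir / "desc-tedana_registry.json") as f:
    registry = json.load(f)

print(sorted(registry))  # inspect the available keys

# "ICA mixing tsv" is a hypothetical key used for illustration only.
mixing_path = out_dir / registry["ICA mixing tsv"]
```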

Full Changelog: 0.0.13...23.0.1

23.0.1rc0

11 May 18:20
e94b03a
Pre-release

Release Notes

Version 23.0.0 was released, but did not build correctly for pip. This release fixed that issue.

Full Changelog: 23.0.0...23.0.1rc0

23.0.0

11 May 18:12
86a8139

Release Notes

This release changes many internal aspects of the code, but there was a small bug that prevented it from building for pip. Since this version cannot be installed through pip, the descriptive release notes are included with version 23.0.1.

Full Changelog: 0.0.13...23.0.0

0.0.13

11 May 18:03
8285c15

Release Notes

This is the last release before refactoring of large portions of the code.

Breaking Changes

  • Corrected a bug where the component classification process should have calculated a threshold on a sorted list of component variances, but was calculating it on an unsorted list (#938); see the sketch after this list.
  • In v0.0.12 we changed the default method for selecting the number of components from MDL to AIC, but later realized this was only implemented when run through the python API, not from the command line. Now AIC is the default for both. (#877)
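To illustrate why the ordering matters, here is a generic sketch (not tedana's actual criterion) of a cumulative variance-explained cutoff. With unsorted variances the threshold is reached later than intended; sorting in descending order gives the intended component count.

```python
import numpy as np

# Percent variance explained per component, in the order they were produced.
varex = np.array([5.0, 30.0, 2.0, 40.0, 8.0, 15.0])


def n_components_for(varex, target=90.0):
    """Number of components needed to reach `target` percent cumulative variance."""
    return int(np.searchsorted(np.cumsum(varex), target) + 1)


print(n_components_for(varex))                 # unsorted: 6 components
print(n_components_for(np.sort(varex)[::-1]))  # sorted descending: 4 components
```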

Additional changes to highlight

  • Optimization curves and additional info for the PCA dimensionality reduction step are saved to help users and developers identify problems with the step that identifies the number of components to use (#839)
  • Added Python 3.10 compatibility (#818)
  • Using BibTeX instead of duecredit for listing references so warnings from users not having duecredit installed will finally be gone (#875)
  • Tedana python package management now uses setup.cfg (#874)

All changes since last stable release

  • Generalize installation instructions to work with Windows in CONTRIBUTING by @aryangupta701 in #846
  • [MAINT] Switch to setup.cfg-based configuration by @tsalo in #874
  • [FIX] Add function to prep data for JSON serialization by @jbteves in #859
  • [ENH, FIX] PCA variance enhancements and consistency improvements by @handwerkerd in #877
  • Print optimal number of maPCA components and plot optimization curves by @eurunuela in #839
  • [DOC] Add information about using tedana with fMRIPrep v21.0.0 by @tsalo in #847
  • [REF] Replace duecredit with BibTeX by @tsalo in #875
  • Update CONTRIBUTING.md by @jbteves in #885
  • [REF] Suppresses divide by 0 warning by @jbteves in #786
  • Add links to several multi-echo datasets by @tsalo in #895
  • Add F-T2 and F-S0 maps to verbose outputs by @tsalo in #893
  • [MAINT] Add 3.10 unit test, compatibilities in setup.cfg by @jbteves in #818
  • [FIX] Use capital names in desc-ICAOrth_mixing.tsv columns by @pablosmig in #906
  • [DOC] Add documentation page on denoising approaches by @tsalo in #823
  • docs: add giadaan as a contributor for doc by @allcontributors in #916
  • Add Nashiro dataset to documentation by @Kasambx in #912
  • Fix example code in denoising documentation by @tsalo in #917
  • Sorting varex for decision tree criterion I011 by @handwerkerd in #924
  • [DOC] Remove Josh as maintainer by @jbteves in #928
  • add pandas version check >= 1.5.2 and mod behavior by @pmolfese in #938

Full Changelog: 0.0.12...0.0.13

0.0.12

14 Apr 13:35
863304b

Summary

This would ordinarily not have been released, but an issue with one of our dependencies means that people cannot install tedana right now. The most notable change (which will potentially change your results!) is that PCA now defaults to the "aic" criterion rather than the "mdl" criterion.
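If you would rather keep the previous behavior, the criterion can be set explicitly. This is a hedged example, assuming the relevant flag is still --tedpca; the file names and echo times are placeholders.

```bash
# Explicitly select the previous default dimensionality-reduction criterion
tedana -d echo-1.nii.gz echo-2.nii.gz echo-3.nii.gz -e 14.5 38.5 62.5 --tedpca mdl
```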

What's Changed

  • [DOC] Add JOSS badges by @tsalo in #815
  • [FIX] Fixes broken component figures in report when there are more than 99 components by @manfredg in #824
  • [DOC] Add manfredg as a contributor for code by @allcontributors in #825
  • DOC: Use RST link for ME-ICA by @effigies in #832
  • [DOC] Fixing a bunch of warnings & rendering issues in the documentation by @handwerkerd in #840
  • [DOC] Replace mentions of Gitter with Mattermost by @tsalo in #842
  • [FIX] The rationale column of comptable gets updated when no manacc is given by @eurunuela in #855
  • Made AIC the default maPCA option by @eurunuela in #849
  • [DOC] Improve logging of component table-based manual classification by @tsalo in #852
  • [FIX] Add jinja2 version pin as workaround by @jbteves in #870

Full Changelog: 0.0.11...0.0.12

0.0.11

30 Sep 14:03
4d5c645

Release Notes

Tedana's 0.0.11 release includes a number of bug fixes and enhancements, and it's associated with publication of our Journal of Open Source Software (JOSS) paper! Beyond the JOSS paper, two major changes in this release are (1) outputs from the tedana and t2smap workflows are now BIDS compatible, and (2) we have overhauled how masking is performed in the tedana workflow, so that improved brain coverage is retained in the denoised data, while the necessary requirements for component classification are met.

🔧 Breaking changes

  • The tedana and t2smap workflows now generate BIDS-compatible outputs, both in terms of file formats and file names.
  • Within the tedana workflow, T2* estimation, optimal combination, and denoising are performed on a more liberal brain mask, while TE-dependence and component classification are performed on a reduced version of the mask, in order to retain the increased coverage made possible with multi-echo EPI.
  • When running tedana on a user-provided mixing matrix, the order and signs of the components are no longer modified. This will not affect classification or the interactive reports, but the mixing matrix will be different.

✨ Enhancements

  • tedana interactive reports now include carpet plots.
  • The organization of the documentation site has been overhauled to be easier to navigate.
  • We have added documentation about how to use tedana with fMRIPrep, along with a gist that should work on current versions of fMRIPrep.
  • Metric calculation is now more modular, which will make it easier to debug and apply in other classification decision trees.

🐛 Bug fixes

  • The component with index 0 was not rendering in interactive reports, but this is fixed now.
  • Inputs are now validated to ensure that multi-file inputs are not interpreted as single z-concatenated files.

Changes since last stable release

  • [JOSS] Add accepted JOSS manuscript to main (#813) @tsalo
  • [FIX] Check data type in io.load_data (#802) @tsalo
  • [DOC] Fix link to developer guidelines in README (#797) @tsalo
  • [FIX] Figures of components with index 0 get rendered now (#793) @eurunuela
  • [DOC] Adds NIMH CMN video (#792) @jbteves
  • [STY] Use black and isort to manage library code style (#758) @tsalo
  • [DOC] Generalize preprocessing recommendations (#769) @tsalo
  • [DOC] Add fMRIPrep collection information to FAQ (#773) @tsalo
  • [DOC] Add link to EuskalIBUR dataset in documentation (#780) @tsalo
  • [FIX] Add resources folder to package data (#772) @tsalo
  • [ENH] Use different masking thresholds for denoising and classification (#736) @tsalo
  • [DOC, MAINT] Updated dependency version numbers (#763) @handwerkerd
  • [REF] Move logger management to new functions (#750) @tsalo
  • [FIX] Ignore non-significant kappa elbow when no non-significant kappa values exist (#760) @tsalo
  • [ENH] Coerce images to 32-bit (#759) @jbteves
  • [ENH] Add carpet plot to outputs (#696) @tsalo
  • [FIX] Correct manacc documentation and check for associated inputs (#754) @tsalo
  • [DOC] Reorganize documentation (#740) @tsalo
  • [REF] Do not modify mixing matrix with sign-flipping (#749) @tsalo
  • [REF] Eliminate component sorting from metric calculation (#741) @tsalo
  • [FIX] Update apt in CircleCI (#746) @notZaki
  • [DOC] Update resource page with dataset and link to Dash app visualizations (#745) @jsheunis
  • [DOC] Clarify communication pathways (#742) @tsalo
  • [FIX] Disable report logging during ICA restart loop (#743) @tsalo
  • [REF] Replace metric dependency dictionaries with json file (#739) @tsalo
  • [FIX] Add references back into the HTML report (#737) @tsalo
  • [ENH] Allows iterative clustering (#732) @jbteves
  • [REF] Modularize metric calculation (#591) @tsalo
  • Rename sphinx functions to fix building error for docs (#727) @eurunuela
  • [ENH] Generate BIDS Derivatives-compatible outputs (#691) @tsalo

0.0.11rc1

20 Aug 15:09
e8a43eb

Release Notes

We have made this release candidate to test recent enhancements. Please open issues if you experience any problems.

Changes

  • [DOC] Add link to EuskalIBUR dataset in documentation (#780) @tsalo
  • [FIX] Add resources folder to package data (#772) @tsalo
  • [ENH] Use different masking thresholds for denoising and classification (#736) @tsalo
  • [DOC, MAINT] Updated dependency version numbers (#763) @handwerkerd
  • [REF] Move logger management to new functions (#750) @tsalo
  • [FIX] Ignore non-significant kappa elbow when no non-significant kappa values exist (#760) @tsalo
  • [ENH] Coerce images to 32-bit (#759) @jbteves
  • [ENH] Add carpet plot to outputs (#696) @tsalo
  • [FIX] Correct manacc documentation and check for associated inputs (#754) @tsalo
  • [DOC] Reorganize documentation (#740) @tsalo
  • [REF] Do not modify mixing matrix with sign-flipping (#749) @tsalo
  • [REF] Eliminate component sorting from metric calculation (#741) @tsalo
  • [FIX] Update apt in CircleCI (#746) @notZaki
  • [DOC] Update resource page with dataset and link to Dash app visualizations (#745) @jsheunis
  • [DOC] Clarify communication pathways (#742) @tsalo
  • [FIX] Disable report logging during ICA restart loop (#743) @tsalo
  • [REF] Replace metric dependency dictionaries with json file (#739) @tsalo
  • [FIX] Add references back into the HTML report (#737) @tsalo
  • [ENH] Allows iterative clustering (#732) @jbteves
  • [REF] Modularize metric calculation (#591) @tsalo
  • Rename sphinx functions to fix building error for docs (#727) @eurunuela
  • [ENH] Generate BIDS Derivatives-compatible outputs (#691) @tsalo