This repository is the official implementation for the paper:
Training Like a Medical Resident: Universal Medical Image Segmentation via Context Prior Learning
Yunhe Gao1, Zhuowei Li1, Di Liu1, Mu Zhou1, Shaoting Zhang2, Dimitris N. Metaxas1

1 Rutgers University, 2 Shanghai Artificial Intelligence Laboratory
A major enduring focus of clinical workflows is disease analytics and diagnosis, leading to medical imaging datasets where the modalities and annotations are strongly tied to specific clinical objectives. To date, building task-specific segmentation models has been an intuitive yet restrictive approach, lacking insights gained from widespread imaging cohorts. Inspired by the training of medical residents, we explore universal medical image segmentation, whose goal is to learn from diverse medical imaging sources covering a range of clinical targets, body regions, and image modalities. Following this paradigm, we propose Hermes, a context prior learning approach that addresses the challenges of heterogeneity in data, modality, and annotations in the proposed universal paradigm. On a collection of seven diverse datasets, we demonstrate the appealing merits of the universal paradigm over the traditional task-specific training paradigm. By leveraging the synergy among various tasks, Hermes achieves superior performance and model scalability. Our in-depth investigation on two additional datasets reveals Hermes' strong capabilities for transfer learning, incremental learning, and generalization to downstream tasks.
- 06/04/2023: Hermes paper uploaded to arXiv
- TODO: Release the data preparation script.
- TODO: Release the training code.
- TODO: Release the model weights.
If you use Hermes in your research, please cite our paper:
@article{gao2023training,
  title={Training Like a Medical Resident: Universal Medical Image Segmentation via Context Prior Learning},
  author={Gao, Yunhe and Li, Zhuowei and Liu, Di and Zhou, Mu and Zhang, Shaoting and Metaxas, Dimitris N},
  journal={arXiv preprint arXiv:2306.02416},
  year={2023}
}
For questions and suggestions, please open a GitHub issue or contact us directly via email (yunhe.gao@rutgers.edu).