
TransPath

Transformer-based Unsupervised Contrastive Learning for Histopathological Image Classification (Medical Image Analysis)

A new, stronger pre-trained transformer model (CTransPath) has been released.

Journal Link

Update: CTransPath v2, pre-trained on over 100,000 WSIs, is coming soon (under review).

It is expected to perform at least 5% better than CTransPath.

Hardware

  • 128 GB of RAM
  • 32 × NVIDIA V100 32 GB GPUs

Preparations

1. Download all TCGA WSIs.

2. Download all PAIP WSIs.

New: In total, this yields about 15,000,000 unlabeled patch images.

Old: We crop these WSIs into patch images, randomly selecting 100 patches from each WSI, which yields about 2,700,521 unlabeled histopathological images.
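
For reference, a minimal patching sketch is shown below. It is not the authors' exact script: it assumes openslide-python and Pillow are installed, and the input directory, patch size, output format, and pyramid level are illustrative placeholders.

# Minimal sketch of the WSI patching step described above.
# Assumes openslide-python and Pillow; paths, patch size, and pyramid level
# are placeholders, not the exact settings used in the paper.
import random
from pathlib import Path

import openslide

def sample_patches(wsi_path, out_dir, n_patches=100, patch_size=512, level=0):
    """Randomly crop n_patches square patches from one whole-slide image."""
    slide = openslide.OpenSlide(str(wsi_path))
    width, height = slide.level_dimensions[level]
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    for i in range(n_patches):
        x = random.randint(0, width - patch_size)
        y = random.randint(0, height - patch_size)
        patch = slide.read_region((x, y), level, (patch_size, patch_size)).convert("RGB")
        patch.save(out_dir / f"{Path(wsi_path).stem}_{i:03d}.jpg")
    slide.close()

if __name__ == "__main__":
    for wsi in sorted(Path("wsi_dir").glob("*.svs")):  # placeholder input directory
        sample_patches(wsi, out_dir="patches")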

Usage: Pre-Training Vision Transformers for histopathology images

We recommend using CTransPath as the preferred feature extractor for histopathology images.

1. CTransPath

Usage: Preparation

Install the modified timm library

pip install timm-0.5.4.tar
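
To confirm that the modified library was installed, you can print its version (it should report 0.5.4):

python -c "import timm; print(timm.__version__)"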

The pre-trained models can be downloaded.

Usage: Get frozen features
python get_features_CTransPath.py

It is recommended to first extract features at 1.0 mpp and then try other magnifications.
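
As a minimal sketch of what feature extraction for a single patch might look like, the snippet below assumes the repository's ctran.py exposes a ctranspath() constructor and that the downloaded checkpoint is ./ctranspath.pth with weights stored under the 'model' key; get_features_CTransPath.py remains the authoritative version.

# Minimal sketch of frozen-feature extraction for a single patch.
# Assumptions: ctran.py provides ctranspath(), and the downloaded checkpoint
# is ./ctranspath.pth with weights under the 'model' key.
import torch
import torch.nn as nn
from PIL import Image
from torchvision import transforms

from ctran import ctranspath  # model definition shipped with this repository

transform = transforms.Compose([
    transforms.Resize(224),
    transforms.CenterCrop(224),  # ensure a 224 x 224 input
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
])

model = ctranspath()
model.head = nn.Identity()                     # keep backbone features only
state = torch.load("./ctranspath.pth", map_location="cpu")
model.load_state_dict(state["model"], strict=True)
model.eval()

with torch.no_grad():
    img = transform(Image.open("patch.jpg").convert("RGB")).unsqueeze(0)
    features = model(img)                      # one embedding vector per patch
print(features.shape)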

Usage: Linear Classification

For linear classification on frozen features/weights

python ctrans_lincls.py
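
ctrans_lincls.py is the reference implementation; as a rough illustration of the idea, a linear probe on precomputed frozen features could look like the sketch below (the .npy file names and scikit-learn classifier are placeholders, not the script's exact setup).

# Minimal linear-probe sketch on precomputed frozen features.
# File names are placeholders; see ctrans_lincls.py for the real script.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

train_x = np.load("train_features.npy")   # (N, D) frozen patch features
train_y = np.load("train_labels.npy")
test_x = np.load("test_features.npy")
test_y = np.load("test_labels.npy")

clf = LogisticRegression(max_iter=1000)    # linear classifier on frozen features
clf.fit(train_x, train_y)
print("test accuracy:", accuracy_score(test_y, clf.predict(test_x)))
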
Usage: End-to-End Fine-tuning

This is similar to fine-tuning Swin or ViT; please see their instructions or the DeiT repository.

2. MoCo v3

We also trained MoCo v3 on these histopathological images. The pre-trained models can be downloaded as follows:

vit_small

Updated: the latest weights have been uploaded (1/10/2022).

Usage: Self-supervised Pre-Training

Please see the MoCo v3 instructions.

Usage: Get frozen features
python get_features_mocov3.py \
        -a vit_small
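
For reference, the sketch below loads the released vit_small weights as a frozen feature extractor. It assumes the checkpoint follows the standard MoCo v3 layout (a 'state_dict' whose encoder keys carry a 'module.base_encoder.' prefix) and that vits.py from the MoCo v3 code is importable; the checkpoint file name is a placeholder, and get_features_mocov3.py remains the authoritative script.

# Minimal sketch: load MoCo v3 vit_small weights for frozen features.
# Assumes the standard MoCo v3 checkpoint layout and the vits.py module
# from the MoCo v3 code; the file name is a placeholder.
import torch
import torch.nn as nn

import vits  # ViT definitions from the MoCo v3 code

model = vits.vit_small()
model.head = nn.Identity()                     # backbone only, no classifier

checkpoint = torch.load("vit_small.pth.tar", map_location="cpu")
state_dict = checkpoint["state_dict"]

# Keep only the base-encoder weights and strip their DDP/MoCo prefix.
prefix = "module.base_encoder."
encoder_state = {k[len(prefix):]: v for k, v in state_dict.items() if k.startswith(prefix)}

# strict=False because the checkpoint's projection-MLP head is intentionally dropped.
model.load_state_dict(encoder_state, strict=False)
model.eval()

with torch.no_grad():
    features = model(torch.randn(1, 3, 224, 224))   # (1, 384) for ViT-Small
print(features.shape)
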
Usage: End-to-End Fine-tuning ViT

To perform end-to-end fine-tuning for ViT, use our script to convert the pre-trained ViT checkpoint to DEiT format:

python convert_to_deit.py \
  --input [your checkpoint path]/[your checkpoint file].pth.tar \
  --output [target checkpoint file].pth

Then run the training (in the DeiT repo) with the converted checkpoint:

python $DEIT_DIR/main.py \
  --resume [target checkpoint file].pth \
  --epochs 150

3. TransPath

The pre-trained models can be downloaded.

This code is partly based on BYOL and MoCo v2.

Usage: Self-supervised Pre-Training
python main_byol_transpath.py \
--lr 0.0001 \
--batch-size 256 \
--dist-url 'tcp://localhost:10001' --multiprocessing-distributed \
--world-size 1 --rank 0 --mlp --moco-t 0.2 --aug-plus --cos
Usage: Get frozen features
python get_feature_transpath.py
Usage: End-to-End Fine-tuning

Use our script to convert the pre-trained checkpoint to the Transformers format:

python convert_to_transpath.py 

Reference

Please open a new issue thread or address all questions to xiyue.wang.scu@gmail.com.

License

TransPath is released under the GPLv3 License and is available for non-commercial academic purposes.

Citation

Please use the BibTeX entries below to cite this work if you find it useful in your research.

@article{wang2022,
  title={Transformer-based Unsupervised Contrastive Learning for Histopathological Image Classification},
  author={Wang, Xiyue and Yang, Sen and Zhang, Jun and Wang, Minghui and Zhang, Jing and Yang, Wei and Huang, Junzhou and Han, Xiao},
  journal={Medical Image Analysis},
  year={2022},
  publisher={Elsevier}
}
@inproceedings{wang2021transpath,
  title={TransPath: Transformer-Based Self-supervised Learning for Histopathological Image Classification},
  author={Wang, Xiyue and Yang, Sen and Zhang, Jun and Wang, Minghui and Zhang, Jing and Huang, Junzhou and Yang, Wei and Han, Xiao},
  booktitle={International Conference on Medical Image Computing and Computer-Assisted Intervention},
  pages={186--195},
  year={2021},
  organization={Springer}
}
