Katalip/ca-stinv-cnn

🔬 Improving Stain Invariance of CNNs

Full Title: Improving Stain Invariance of CNNs for Segmentation by Fusing Channel Attention and Domain-Adversarial Training

Kudaibergen Abutalip, Numan Saeed, Mustaqeem Khan, Abdulmotaleb El Saddik
Accepted at Medical Imaging with Deep Learning 2023, Nashville, USA
Paper Link

Thank you for expressing interest in our work

Environment Setup
  1. Create an environment:
conda create -n <env_name> python=3.9
  2. Install the necessary dependencies:
pip install -r requirements.txt
Repository Structure
A description of each file is given below:
|   .gitignore
|   environment.yml
|   README.md
|   requirements.txt
|   LICENSE
|   pyproject.toml
|   .flake8
|   .pre-commit-config.yaml
|
\---src
    |   train.py -> Training script
    |   val.py -> Validation and testing script
    |   run_components.py -> Functions for training/evaluating/testing
    |
    +---cfg
    |       resnet.yml -> Config file for ResNet
    |       convnext.yml -> Config file for ConvNeXt
    |
    +---modules
    |   |   augs.py -> Augmentations used in the study
    |   |   dataset.py -> Data classes. Standard dataloaders are used
    |   |   macenko_torch.py -> Edited version of Macenko normalization in PyTorch.
    |   |                       We added a small function for getting optimal stain vectors
    |   |   metrics.py -> Main metrics: dice, precision, recall
    |   |   utils.py -> Some helper functions
    |   |
    |   +---models
    |   |   |   convnext.py -> Implementation of ConvNeXt from their official repository
    |   |   |   convnext_smp_unet_he.py -> Unet with ConvNeXt as backbone, stain-invariant training branch, and channel attention
    |   |   |   resnet_smp_unet_he.py -> Unet with ResNet as backbone, stain-invariant training branch, and channel attention
    |   |   |   stinv_training.py -> Domain-predictor and gradient reversal
    |   |   |   isw.py -> Reimplementation of instance-selective whitening
    |   |   |   cov_attention.py -> Proposed channel attention mechanism
    |   |
    |   +---stainspec -> This folder contains the official implementation of one of the compared methods
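Among the metrics listed above, the Dice coefficient is the main segmentation score. A minimal pure-Python sketch of Dice on binary masks (illustrative only; the repository's `metrics.py` may compute it differently, e.g. on tensors):

```python
def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks given as flat 0/1 sequences.

    dice = 2 * |pred ∩ target| / (|pred| + |target|), with eps for stability.
    """
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return (2.0 * inter + eps) / (total + eps)


# Example: two masks overlapping in one of three foreground pixels
print(dice_score([1, 1, 0, 0], [1, 0, 0, 0]))  # ≈ 0.6667
```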
Config File Explanation
Train:
  experiment_name: Name for the experiment (training run) 
  device: GPU or CPU, default: 'cuda'
  epochs: Number of training epochs
  val_epoch: Frequency of validation during training
  checkpoint_epoch: When to save model state
  start_epoch: Set when resuming from a previous run
Data:
  train_imgs: Path to the training imgs of HUBMAP_HPA_22
  masks: Path to the training masks of HUBMAP_HPA_22
  labels: Path to the csv file with metadata of HUBMAP_HPA_22
  nfolds: Number of folds
  fold: Which fold to use
  seed: Random seed. Default: 309  
Loader:
  batch_size: Batch size for dataloaders
  num_workers: Number of workers for dataloaders
Architecture:
  encoder: Backbone name. Either 'resnet50' or 'convnext_tiny'  
  weights: Initialized from 'segmentation models pytorch' pretrained weights for ResNet; loaded from a .ckpt file for ConvNeXt
Logging:
  wandb_project: Wandb project name
Eval:
  checkpoint_epoch: Load model state from this epoch
  neptune:
    root: Directory that contains NEPTUNE img subfolders (each folder contains imgs prepared with different stain)
    he: Folder name for imgs stained with HE
    pas: Folder name for imgs stained with PAS
    sil: Folder name for imgs stained with SIL
    tri: Folder name for imgs stained with TRI
  aidpath:
    imgs: Path to the training imgs of AIDPATH
    masks: Path to the training masks of AIDPATH
  hubmap21_kidney:
    imgs: Path to the training imgs of HUBMAP 21 Kidney
    masks: Path to the training masks of HUBMAP 21 Kidney
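Putting the fields above together, a hypothetical config might look like the fragment below. All paths and values are placeholders; see `src/cfg/resnet.yml` for the actual defaults.

```yaml
# Illustrative values only -- not the repository's shipped config
Train:
  experiment_name: resnet50_fold0
  device: cuda
  epochs: 100
  val_epoch: 5
  checkpoint_epoch: 10
  start_epoch: 0
Data:
  train_imgs: /data/hubmap_hpa_22/train_images
  masks: /data/hubmap_hpa_22/train_masks
  labels: /data/hubmap_hpa_22/train.csv
  nfolds: 5
  fold: 0
  seed: 309
Loader:
  batch_size: 8
  num_workers: 4
Architecture:
  encoder: resnet50
  weights: imagenet
Logging:
  wandb_project: my_project
```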
Datasets
We provide the links below and give a short description of their origin.

HPA + HuBMAP 2022. The Human Protein Atlas (HPA) is a Swedish-based program, and the Human BioMolecular Atlas Program (HuBMAP) lists its data contributors (US) here. Description of the dataset. Download link. Note that the test set was not available during this study; the download page was created recently.

The Nephrotic Syndrome Study Network (NEPTUNE) is a North American multi-center consortium. We use a subset of this dataset that contains only glomeruli with annotations of Bowman’s space to match the training data. Samples were collected across 29 enrollment centers (US and Canada). Description. The download link is available at the bottom as online supplemental material (we use files named with 'glom_capsule').

Academia and Industry Collaboration for Digital Pathology (AIDPATH) is a European project. The data was collected in Spain and is hosted by Mendeley. Description. Download

WSIs in HuBMAP21 Kidney come from the data contributors listed at the link provided above. Description. Download (Data section)

We do not perform any dataset-specific preprocessing. Training images are resized to 768x768, while test samples are resized to match the statistics (pixel size, magnification) of the training data: NEPTUNE images to 480x480, AIDPATH samples to 256x256, and HuBMAP21 Kidney WSIs to 224x224.
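The per-dataset target sizes above can be collected in a small helper. This is a hypothetical sketch (`target_size` is not a function from the repository); actual resizing would then be done with any image library, e.g. PIL's `Image.resize`:

```python
# Hypothetical mapping of each dataset to the resize target used in the study
TARGET_SIZES = {
    "train": (768, 768),            # HPA + HuBMAP 2022 training images
    "neptune": (480, 480),          # NEPTUNE glomeruli images
    "aidpath": (256, 256),          # AIDPATH samples
    "hubmap21_kidney": (224, 224),  # HuBMAP21 Kidney WSIs
}


def target_size(dataset: str) -> tuple:
    """Return the (height, width) that images from `dataset` are resized to."""
    return TARGET_SIZES[dataset.lower()]


print(target_size("NEPTUNE"))  # → (480, 480)
```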

Train

To train the model:

cd src
python train.py <encoder>.yml

E.g.

cd src
python train.py resnet.yml

Checkpoints and logs are stored in the Experiments folder in the parent directory and are also logged with wandb.

Evaluate

To evaluate the model:

cd src
python val.py <encoder>.yml

E.g.

cd src
python val.py resnet.yml

Logs are stored in the Experiments folder in the parent directory.

Citation

@article{Abutalip2023ImprovingSI,
  title={Improving Stain Invariance of CNNs for Segmentation by Fusing Channel Attention and Domain-Adversarial Training},
  author={Kudaibergen Abutalip and Numan Saeed and Mustaqeem Khan and Abdulmotaleb El Saddik},
  journal={ArXiv},
  year={2023},
  volume={abs/2304.11445},
  url={https://api.semanticscholar.org/CorpusID:258298481}
}
