# Co-VeGAN

This is the official implementation of *Co-VeGAN: Complex-Valued Generative Adversarial Network for Compressive Sensing MR Image Reconstruction* by Bhavya Vasudeva\*, Puneesh Deora\*, Saumik Bhattacharya, and Pyari Mohan Pradhan (\*equal contribution).

## Pre-requisites

The code was written in Python 3.6.8 with the following dependencies:

- CUDA release 9.0, V9.0.176
- tensorflow 1.12.0
- keras 2.2.4
- numpy 1.16.4
- scikit-image 0.15.0
- matplotlib 3.1.0
- nibabel 2.4.1
- cuDNN 7.4.1

This code has been tested on Ubuntu 16.04.6 LTS with four NVIDIA GeForce GTX 1080 Ti GPUs (11 GB memory each).
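
For reference, a minimal environment sketch, assuming the dependencies are installed via pip under the package names above; using the GPU build of TensorFlow (`tensorflow-gpu`) is an assumption based on the GPU setup described, and CUDA 9.0 / cuDNN 7.4.1 are installed separately at the system level:

```
pip install tensorflow-gpu==1.12.0 keras==2.2.4 numpy==1.16.4 \
    scikit-image==0.15.0 matplotlib==3.1.0 nibabel==2.4.1
```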

## How to Use

### Preparing data

1. Downloading the dataset:

   **MICCAI 2013 dataset:**

   - The MICCAI 2013 grand challenge dataset can be downloaded from this webpage. You are required to fill out a Google form and register before you can download the data.
   - Download and save the `training-training` and `training-testing` folders, which contain the training and testing data, respectively, into the repository folder.

   **MRNet dataset:**

   - The MRNet dataset can be downloaded from this webpage. You are required to register, by filling out the form at the end of the page, before you can download the data.
   - Download and save the `train` and `valid` folders, which contain the training and testing data, respectively, into the repository folder.

   **fastMRI dataset:**

   - The fastMRI dataset is available on this webpage. Fill out the form at the end of the page to receive the download links via email.
   - Download and save the `knee_singlecoil_train` folder, from which both the training and testing data are created. Extract the files into a folder named `singlecoil_train` within the repository folder.
2. Run the following command to create the GT dataset:

   ```
   python dataset_load.py
   ```

3. Run the following command to create the undersampled dataset:

   ```
   python usamp_data.py
   ```

4. These files create the training data using the MICCAI 2013 dataset. The variables `dataset` and `mode` can be changed in both files to use the MRNet or fastMRI datasets, or to create testing data.
5. The `masks` folder contains the undersampling masks used in this work. The path to the mask can be modified in `usamp_data.py` as required; a sketch of how such a mask is typically applied is given after this list.
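
For intuition about what `usamp_data.py` produces, here is a minimal sketch of retrospective k-space undersampling with a binary mask. The mask built below is a hypothetical stand-in for the files in the `masks` folder, and the actual preprocessing in the script may differ:

```python
import numpy as np

# Hypothetical 30% 1D mask: randomly keep ~30% of the 256 phase-encode columns.
# The real masks (1D-G, radial, spiral) are provided in the masks folder.
np.random.seed(0)
cols = np.random.choice(256, size=77, replace=False)
mask = np.zeros((256, 256), dtype=np.float32)
mask[:, cols] = 1.0

def undersample(image, mask):
    """Zero-filled reconstruction: mask the image's centred k-space, then invert."""
    kspace = np.fft.fftshift(np.fft.fft2(image))
    return np.fft.ifft2(np.fft.ifftshift(kspace * mask))  # complex-valued result

image = np.random.rand(256, 256)        # stand-in for one ground-truth slice
zero_filled = undersample(image, mask)  # the aliased input the network learns to restore
```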

### Training

1. Move the files in the `complexnn` folder to the repository folder. (A sketch of the complex-convolution idea behind these files is given after this list.)
2. Run the following command to train the model, after checking the names of the paths:

   For real-valued datasets (MICCAI 2013 and MRNet):

   ```
   python train_model.py
   ```

   For the complex-valued dataset (fastMRI):

   ```
   python train_model_complex.py
   ```
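
For readers new to complex-valued networks, the sketch below shows how a complex convolution can be assembled from real-valued convolutions, which is the core idea behind layers such as those in `complexnn`. The naive `conv2d` helper is for illustration only and is not the implementation used in this repository:

```python
import numpy as np

def conv2d(x, k):
    """Naive 'valid' 2D cross-correlation of a real array with a real kernel."""
    kh, kw = k.shape
    out = np.zeros((x.shape[0] - kh + 1, x.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def complex_conv2d(x_re, x_im, k_re, k_im):
    """(x_re + i*x_im) convolved with (k_re + i*k_im): four real convs, recombined."""
    out_re = conv2d(x_re, k_re) - conv2d(x_im, k_im)
    out_im = conv2d(x_re, k_im) + conv2d(x_im, k_re)
    return out_re, out_im

# Toy usage: an 8x8 complex feature map and a 3x3 complex kernel.
x = np.random.rand(8, 8) + 1j * np.random.rand(8, 8)
k = np.random.rand(3, 3) + 1j * np.random.rand(3, 3)
re, im = complex_conv2d(x.real, x.imag, k.real, k.imag)
```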

### Testing

**Testing the trained model:**

1. Run the following command to test the model, after checking the names of the paths:

   For real-valued datasets (MICCAI 2013 and MRNet):

   ```
   python test_model.py
   ```

   For the complex-valued dataset (fastMRI):

   ```
   python test_model_complex.py
   ```
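
Reconstruction quality in this line of work is typically reported as PSNR and SSIM. Below is a minimal evaluation sketch using the scikit-image 0.15 API from the dependency list (these functions moved to `skimage.metrics` in later releases); the random arrays are stand-ins for the ground-truth and reconstructed slices:

```python
import numpy as np
from skimage.measure import compare_psnr, compare_ssim  # skimage.metrics.* in >= 0.16

# Stand-ins for ground-truth slices and the magnitude of the reconstructions.
gt = np.random.rand(10, 256, 256)
recon = np.clip(gt + 0.01 * np.random.randn(*gt.shape), 0.0, 1.0)

psnr = [compare_psnr(g, r, data_range=1.0) for g, r in zip(gt, recon)]
ssim = [compare_ssim(g, r, data_range=1.0) for g, r in zip(gt, recon)]
print("PSNR: %.2f dB, SSIM: %.4f" % (np.mean(psnr), np.mean(ssim)))
```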

**Testing the pre-trained model:**

1. The pre-trained generator weights for various undersampling patterns are available at:

   MICCAI 2013:

   30% 1D-G • 30% Radial • 30% Spiral • 20% 1D-G • 10% 1D-G

   fastMRI:

   30% 1D-G • 30% Radial • 30% Spiral • 20% 1D-G • 10% 1D-G

2. Download the required weights into the repository folder.
3. Run the following command, after changing the names of the paths:

   For the MICCAI 2013 dataset:

   ```
   python test_model.py
   ```

   For the fastMRI dataset:

   ```
   python test_model_complex.py
   ```
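
As an illustration of the load-and-predict flow that the test scripts perform with these weights, here is a minimal Keras 2.2.4 sketch. The toy architecture and file name are hypothetical stand-ins; the real generator is defined in the repository's scripts, and downloaded weights load only into the matching architecture:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Conv2D

def build_toy_generator():
    """Hypothetical stand-in for the Co-VeGAN generator definition."""
    model = Sequential()
    model.add(Conv2D(16, 3, padding="same", activation="relu",
                     input_shape=(256, 256, 1)))
    model.add(Conv2D(1, 3, padding="same"))
    return model

gen = build_toy_generator()
gen.save_weights("toy_generator.h5")  # analogous to a downloaded weights file

gen2 = build_toy_generator()          # architecture must match the saved weights
gen2.load_weights("toy_generator.h5")
recon = gen2.predict(np.random.rand(1, 256, 256, 1))  # zero-filled in, reconstruction out
```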

## Citation

If you find our research useful, please cite our work:

```bibtex
@InProceedings{Vasudeva_2022_WACV,
    author    = {Vasudeva, Bhavya and Deora, Puneesh and Bhattacharya, Saumik and Pradhan, Pyari Mohan},
    title     = {Compressed Sensing MRI Reconstruction With Co-VeGAN: Complex-Valued Generative Adversarial Network},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2022},
    pages     = {672-681}
}
```

## License

   Copyright 2020 Authors

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.