
NeuralCT with Spatially-aware Segmentation

Author: Zhennong Chen, PhD

This repo is for the paper: Motion Correction Image Reconstruction using NeuralCT Improves with Spatially Aware Object Segmentation
Authors: Zhennong Chen, Kunal Gupta, Francisco Contijoch
Please see our Poster here and our Paper here

Citation: Zhennong Chen, Kunal Gupta, Francisco Contijoch, "Motion Correction Image Reconstruction using NeuralCT Improves with Spatially Aware Object Segmentation", The 7th International Meeting on Image Formation in X-Ray Computed Tomography, June 2022.

Description

This work is based on an earlier arXiv paper (GitHub repo). In that paper, we proposed an implicit neural representation-based framework to correct motion artifacts in CT images. This framework, called "NeuralCT", takes CT sinograms as input, uses a technique called "differentiable rendering" to optimize the estimate of object motion from the projections, and returns a time-resolved image free of motion artifacts. See the papers for more details.

The main goal of this work is to extend NeuralCT to a more complicated scene containing multiple moving objects with distinct attenuations. This scene is closer to what we see in clinical CT images (e.g., a moving, contrast-enhanced LV next to an RV without contrast).

Empirically, we have found that the performance of NeuralCT depends strongly on the model initialization, which is driven by the segmentation of FBP images. In the arXiv paper, we used a Gaussian mixture model to segment the different objects from the FBP reconstruction; for the more complicated scene, we use a spatially-aware segmentation that leverages both the spatial and the intensity information of the different objects. Concretely, this segmentation is done by defining ROIs and then thresholding; future work may incorporate data-driven methods (e.g., deep learning).
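The ROI-then-threshold idea can be sketched in a few lines of NumPy. This is an illustrative stand-in, not the notebook's actual code (`jupyter_notebook/spatial_segmenter_zc.ipynb`); the ROI boxes and thresholds here are hypothetical:

```python
import numpy as np

def segment_with_rois(fbp_image, rois, thresholds):
    """Spatially-aware segmentation sketch: label each object by
    thresholding intensities only inside its own ROI.

    fbp_image  : 2D array, the FBP reconstruction
    rois       : list of (row_slice, col_slice) boxes, one per object
    thresholds : list of intensity thresholds, one per object
    Returns an integer label map (0 = background, k = object k).
    """
    labels = np.zeros(fbp_image.shape, dtype=int)
    for k, ((rows, cols), thr) in enumerate(zip(rois, thresholds), start=1):
        mask = np.zeros(fbp_image.shape, dtype=bool)
        mask[rows, cols] = fbp_image[rows, cols] > thr
        labels[mask] = k
    return labels
```

Because each threshold applies only within its ROI, two objects with overlapping intensity ranges can still be separated as long as their locations are roughly known, which a global Gaussian mixture model cannot guarantee.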

User Guideline

Environment Setup

The entire codebase is containerized, which makes setting up the environment swift and easy. Make sure you have Docker CE and nvidia-docker installed on your machine before going further.

  • You can set up the Docker container with start_docker_neural_ct.sh.

Main Experiment

We simulated a scene with two moving dots of distinct attenuations and tested NeuralCT on it. The user can define the moving speed as well as the gantry offset.

  • main script: study_two_intensities.py
  • spatially-aware segmentation: jupyter_notebook/spatial_segmenter_zc.ipynb
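For intuition, a frame of such a scene might be generated as below. This is a hypothetical sketch of the setup, not the actual code in `study_two_intensities.py`; the function name, dot trajectories, and default values are all assumptions:

```python
import numpy as np

def two_dot_frame(t, size=64, speed=0.1, gantry_offset=0.0,
                  atten=(1.0, 0.5), radius=4):
    """Render one frame (time t in [0, 1]) of two dots with distinct
    attenuations drifting horizontally in opposite directions.
    speed is the fraction of the field of view traversed per unit time;
    gantry_offset shifts the motion phase relative to the gantry.
    """
    yy, xx = np.mgrid[:size, :size]
    frame = np.zeros((size, size))
    phase = t + gantry_offset
    cx1 = size * (0.3 + speed * phase)   # dot 1 moves right
    cx2 = size * (0.7 - speed * phase)   # dot 2 moves left
    cy = size / 2
    frame[(xx - cx1) ** 2 + (yy - cy) ** 2 <= radius ** 2] = atten[0]
    frame[(xx - cx2) ** 2 + (yy - cy) ** 2 <= radius ** 2] = atten[1]
    return frame
```

The distinct attenuation values (here 1.0 and 0.5) are what make the single global threshold of the earlier work insufficient and motivate the spatially-aware segmentation.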

Additional Experiment

Building on the single-dot experiment in the arXiv paper, we added quanta-counting noise to the sinogram to evaluate the impact of different contrast-to-noise ratios (CNR) on the performance of NeuralCT.

  • main script: study_noise.py
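A common way to model quanta-counting noise is to convert line integrals to expected photon counts via Beer-Lambert, draw Poisson samples, and convert back. The sketch below illustrates that idea under assumed parameters; the actual noise model used in the experiment lives in `study_noise.py`:

```python
import numpy as np

def add_quanta_noise(sinogram, photons_per_ray=1e4, rng=None):
    """Add photon-counting (Poisson) noise to a line-integral sinogram.
    Lower photons_per_ray means fewer detected quanta and a lower CNR.
    """
    rng = np.random.default_rng(rng)
    counts = photons_per_ray * np.exp(-sinogram)      # expected quanta
    noisy_counts = rng.poisson(counts).clip(min=1)    # avoid log(0)
    return -np.log(noisy_counts / photons_per_ray)    # back to line integrals
```

Sweeping `photons_per_ray` then sweeps the CNR of the reconstructed objects while leaving the underlying motion unchanged.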

Additional guidelines

See the comments in each script.

Please contact the author (chenzhennong@gmail.com or zhc043@eng.ucsd.edu) for any further questions.
For environment setup difficulties, please contact Kunal Gupta (k5gupta@eng.ucsd.edu) or Francisco Contijoch (fcontijoch@eng.ucsd.edu) for help.

