
Implement GrappaNet in fastmri_examples #136

Open
mmuckley opened this issue Apr 15, 2021 · 5 comments
Labels
enhancement New feature or request

Comments

@mmuckley
Contributor

This is a tracking issue for implementing the paper GrappaNet: Combining Parallel Imaging With Deep Learning for Multi-Coil MRI Reconstruction by A. Sriram, et al. The code for this paper wasn't open-sourced, but we would welcome an implementation from the community that is able to reproduce results at key operating points.

@mmuckley mmuckley changed the title Implement GRAPPANet in fastmri_examples Implement GrappaNet in fastmri_examples Apr 15, 2021
@mmuckley mmuckley added the enhancement New feature or request label Oct 11, 2021
@Aditya-Tejaswi

Hey,

I'm trying to implement GrappaNet. The paper introduces interesting concepts that are worth trying out. I've read it, but I may have misinterpreted some of the explanations, so I have a few questions I need help with:

  1. What do the data consistency operations do? The paper states that it "simply copies all of the observed k-space samples to the correct locations in k-space". When 4x-undersampled GRAPPA data is given as input, the output of the first convolutional network block (the first two U-Nets) is 2x-undersampled k-space data. Does this mean that, in the output of the first convolutional network block, if the network predicts points in k-space that are supposed to be zero-filled, those points are shifted to lines that are actually sampled under a 2x GRAPPA sampling pattern?

  2. The GRAPPA layer estimates a GRAPPA kernel for each scan and then convolves it with the output of the first convolutional network block. This would fill in the missing k-space points of the 2x-undersampled data. If all the points are filled in after this step, what do the data consistency operations in the second convolutional network block do? Where are the observed k-space lines copied to?

I would really appreciate it if someone could clear up my doubts or correct my interpretations. Thanks in advance for your help 👍
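For what it's worth, my reading of the data-consistency step in question 1 is a simple mask-based overwrite. Here is a minimal toy sketch of that idea (my own illustration, not code from the paper or from fastMRI; the array sizes and values are made up):

```python
import numpy as np

def data_consistency(pred_kspace, measured_kspace, mask):
    """Copy the observed k-space samples back into the network prediction.

    mask is True at sampled k-space locations. At those locations the
    prediction is overwritten by the measurement; everywhere else the
    network's estimate is kept.
    """
    return np.where(mask, measured_kspace, pred_kspace)

# Toy 1-D "k-space" with every other sample observed (2x undersampling).
measured = np.array([1 + 1j, 0, 3 - 2j, 0, 5 + 0j, 0])
mask = np.array([True, False, True, False, True, False])
pred = np.array([9 + 9j, 2 + 1j, 9 + 9j, 4 - 1j, 9 + 9j, 6 + 2j])  # network output

dc = data_consistency(pred, measured, mask)
# Sampled locations are restored to the measured values;
# unsampled locations keep the network's prediction.
```

Under this reading, nothing is "shifted" anywhere: the observed lines are simply written back on top of whatever the network produced at those positions, so the network can never corrupt the measured data.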

@mmuckley
Contributor Author

@anuroopsriram would you be able to help with this?

@salammemphis

Hi all,

I was thrilled by the GrappaNet work and wanted to apply it to our dataset. I was not able to find the codebase, so I tried to implement it myself. My implementation may not be exactly the same as the authors describe in the paper, due to my limited understanding of various parameters and internal details. I would be glad if someone could take a little time to verify the following implementation:

https://github.com/salammemphis/GrappaNet

I would appreciate your help in correcting the implementation.

-Shahinur

@anuroopsriram
Contributor

Hi Shahinur:
I'm the first author of the GrappaNet paper. I just took a quick look at your code, and it generally looks to be in line with the original implementation. One point of difference: I used the Adam optimizer to fit the GRAPPA kernel, instead of the traditional methods.
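To illustrate the idea Anuroop mentions, fitting the GRAPPA calibration weights with Adam amounts to minimizing the calibration least-squares loss iteratively instead of solving it with a pseudo-inverse. Below is a toy sketch with a hand-rolled Adam update on a small linear problem (all names, shapes, and hyperparameters here are my own assumptions, not values from the paper):

```python
import numpy as np

def fit_kernel_adam(A, b, steps=4000, lr=0.02, beta1=0.9, beta2=0.999, eps=1e-8):
    """Fit weights w minimizing ||A @ w - b||^2 with manual Adam updates.

    A: (num_calibration_points, kernel_size) source neighborhoods from ACS data
    b: (num_calibration_points,) target samples
    This stands in for the traditional pseudo-inverse solution of the
    GRAPPA calibration problem.
    """
    w = np.zeros(A.shape[1])
    m = np.zeros_like(w)  # first-moment estimate
    v = np.zeros_like(w)  # second-moment estimate
    for t in range(1, steps + 1):
        grad = 2.0 * A.T @ (A @ w - b) / len(b)  # gradient of mean squared error
        m = beta1 * m + (1 - beta1) * grad
        v = beta2 * v + (1 - beta2) * grad ** 2
        m_hat = m / (1 - beta1 ** t)  # bias correction
        v_hat = v / (1 - beta2 ** t)
        w -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return w

# Synthetic calibration data: recover known weights from noiseless samples.
rng = np.random.default_rng(0)
A = rng.normal(size=(200, 5))
w_true = np.array([0.5, -1.0, 2.0, 0.0, 0.3])
b = A @ w_true
w_fit = fit_kernel_adam(A, b)
```

In practice one would express the same loop with a framework optimizer (e.g. `torch.optim.Adam` or `tf.keras.optimizers.Adam`) so that the kernel fit can sit inside the training graph.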

@salammemphis

@anuroopsriram Thank you so much for taking the time to review the implementation. I will replace RMSprop with the Adam optimizer. I am trying to apply your method to our dataset; I need to convert the GRAPPA kernel estimation from plain Python/eager mode to a TensorFlow graph.

Best Regards,
-Shahinur
