
Inference #9

Open
chenyzzz opened this issue Apr 12, 2023 · 16 comments

Comments

@chenyzzz

Great job! Thanks! Will you upload the trained model in the future, so we can run inference directly without training? I don't know if it's okay to ask; thank you again!!!

@tiangexiang
Collaborator

Thank you for your interest in our work! Unfortunately, we don't plan to release the trained model: we have refactored the code quite a lot, and the trained model cannot be loaded directly in the current repo due to inconsistent names/structures. Note that our method is dataset-specific, so a model trained on one dataset cannot be used to denoise other datasets.
However, we do provide the denoised data for all four datasets presented in the paper: https://figshare.com/s/6275f40c32f67e3b6083
Hope this helps :)

@chenyzzz
Author

Thanks for your answer, it was very helpful! I have a few more questions. The paper notes that DDM2 can currently only be used with certain datasets (those four brain datasets?). Could I apply it to my own cardiac image dataset? And if my coding skills are weak, would adjusting the code be too difficult? I'm sorry I have so many questions. Thanks again for your reply! Thank you!!

@tiangexiang
Collaborator

Hi, yes, it is absolutely fine to use DDM2 on different datasets. However, you have to make sure that the dataset you are using is still a 4D volume [H x W x D x T], where T indicates the number of different observations of the same 3D volume. Then I believe you can train DDM2 on a new dataset seamlessly.
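
As a quick way to check that requirement on your own data, here is a minimal sketch, assuming the volume is stored as a NIfTI file and read with nibabel (the file name is hypothetical; this is not the DDM2 dataloader itself):

```python
import nibabel as nib

# Hypothetical file name -- point this at your own 4D acquisition.
img = nib.load("my_cardiac_scan.nii.gz")
data = img.get_fdata()

assert data.ndim == 4, f"expected a 4D volume [H, W, D, T], got shape {data.shape}"
H, W, D, T = data.shape
print(f"spatial size: {H} x {W} x {D}, observations of the volume (T): {T}")
assert T > 1, "DDM2 needs more than one observation (T > 1) of the same 3D volume"
```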

@mariusarvinte

Could you please comment on how one can reproduce the experiments with n=1 from the Appendix of the paper (T=1 in your reply) with this codebase? What should the X and condition signals returned by the dataloader be in this case?

@tiangexiang
Collaborator

Hi, are you referring to Figure 11, the results on synthetic noise with n=1? If so, that experiment reports the results of using only 1 prior slice as input (while in the main paper we usually used 3 prior slices). It does not necessarily require T to be 1 as well. In fact, I don't think any unsupervised algorithm right now can handle T = 1.

@mariusarvinte

mariusarvinte commented Apr 15, 2023

Thanks for the quick reply, sorry for being a bit vague at first.

Yes, I was talking about the result in Figure 11, and it seems I was mistaking n=1 for T=1. But if I understand correctly, you just applied it to 2D data instead of 3D data, and still require multiple noisy observations of the same 2D clean sample?

My general understanding is that, for example, Noise2Self is designed to work with T=1 (a single noisy observation of each datapoint). Citing from the Introduction in Noise2Self (https://arxiv.org/pdf/1901.11365.pdf, Page 1):

In this paper, we propose a framework for blind denoising based on self-supervision. [...] 
This allows us to learn denoising functions from single noisy measurements of each object, with performance close to that of supervised methods.

I was wondering if your method/code would allow one to do the same.

@tiangexiang
Collaborator

Oh, now I get what you mean! Yes, we do require multiple 2D observations of the same underlying 2D slice for unsupervised learning. The difference between Noise2Self and DDM2 is the definition and scope of a data point: in Noise2Self, a data point usually refers to a single pixel, while in DDM2 a data point is a whole 2D slice. In this way, Noise2Self can achieve denoising on the 2D noisy image itself (since it contains many pixels), and of course masking is required to make this strategy effective. DDM2, on the other hand, requires multiple 2D slices as inputs, and no masking is needed. Hope this clarifies :)
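
To make the difference in scope concrete, here is a toy numpy sketch of the kind of training pair this implies: two distinct noisy observations of the same 2D slice, with no pixel masking. This is purely illustrative and does not reproduce the actual DDM2 dataloader.

```python
import numpy as np

rng = np.random.default_rng(0)
volume = rng.standard_normal((128, 128, 60, 30))  # toy 4D data: [H, W, D, T]

def sample_slice_pair(vol, rng):
    """Draw one 2D slice and two distinct observations (t1 != t2) of it."""
    H, W, D, T = vol.shape
    d = rng.integers(D)                              # which slice
    t1, t2 = rng.choice(T, size=2, replace=False)    # two different observations
    return vol[:, :, d, t1], vol[:, :, d, t2]        # the whole slice is the "data point"

noisy_input, noisy_target = sample_slice_pair(volume, rng)
print(noisy_input.shape, noisy_target.shape)         # (128, 128) (128, 128)
```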

@chenyzzz
Author

@tiangexiang Hello! I trained on the Stanford HARDI dataset following the steps. The images generated after denoising in Stage 3 looked good during training, but the result I got after running denoising.py was very strange, and I don't know why. Did I do something wrong?
Actually, I don't quite understand the meaning of this passage (it is the fourth point of the training configuration requirements): "After Stage II finished, the state file (recorded in the previous step) needs to be specified at 'initial_stage_file' for both 'train' and 'val' in the 'datasets' section." Could you explain it again?
I am so sorry for my many questions. Thank you again! Best wishes!

@tiangexiang
Collaborator

Hi! Sorry for the confusion: after Stage II is finished, the generated '.txt' file should be specified as the 'stage2_file' variable in the config file (the last variable in the file). It should NOT be specified at 'initial_stage_file' for both 'train' and 'val' in the 'datasets' section. Sorry, that is an outdated statement and we will update it accordingly.

Note that 'stage2_file' is needed for both Stage III training and denoising. And please make sure the trained model is loaded properly when denoising!
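
For illustration only (the exact config schema may differ between repo versions, and both file names below are hypothetical), the Stage II output could be wired in like this:

```python
import json

config_path = "config/hardi_150.json"      # hypothetical config file name
stage2_txt = "stage2_matched_state.txt"    # hypothetical: the .txt file Stage II wrote

with open(config_path) as f:
    cfg = json.load(f)

# Per the note above: the Stage II output goes into the top-level 'stage2_file'
# variable, not into 'initial_stage_file' under the 'datasets' train/val entries.
cfg["stage2_file"] = stage2_txt

with open(config_path, "w") as f:
    json.dump(cfg, f, indent=2)
```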

@gzliyu

gzliyu commented Jun 18, 2023

A kind note to update the statement "After Stage II finished, the state file (recorded in the previous step) needs to be specified at 'initial_stage_file'" in the instructions.

@gzliyu

gzliyu commented Jun 23, 2023

Hi! Did you solve this problem? My inference on hardi150 looks weird, like this:
[screenshot attachment: 截屏2023-06-23 11 16 28]
@tiangexiang @chenyzzz

@VGANGV

VGANGV commented Jul 6, 2023

@gzliyu I ran into the same problem; my denoising results are also very strange. Have you managed to solve it?

@tiangexiang
Collaborator

tiangexiang commented Jul 6, 2023

@gzliyu @VGANGV Sorry, I just saw these messages! I think one potential cause is model loading (either in Stage 3 training or inference). Did you specify the correct Stage 3 model checkpoint before running inference? Could you please provide some validation results from the training process (for both Stage 1 and Stage 3)?

@VGANGV

VGANGV commented Jul 6, 2023

@tiangexiang Thank you Tiange! I realized that I forgot to update the config file before denoising. After I changed the "resume_state" of "noise_model" in the config file to the Stage 3 model, I got the normal denoising result.
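
For anyone hitting the same issue, a small sanity check along these lines can help before running denoising.py; the config file name is hypothetical and the key names follow this thread:

```python
import json

with open("config/hardi_150.json") as f:   # hypothetical config file name
    cfg = json.load(f)

# Per the fix above: before denoising, 'resume_state' under 'noise_model' must
# point at the trained Stage 3 checkpoint, otherwise inference uses the wrong weights.
print("noise_model will resume from:", cfg["noise_model"]["resume_state"])
```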

@BAOSONG1997

you just applied it to 2D data instead of 3D data, and still require multiple noisy observations of the same 2D clean sample?

So do you mean T is the number of acquisitions of the same volume/phantom, i.e. we can acquire the image/slice more than once, separately?
Do you think it is possible for T to be the number of array coils for the same volume? Thanks.

@tiangexiang
Collaborator

@BAOSONG1997 Yes! T here indicates the number of acquisitions of the same underlying 3D volume. I think it is possible for T to be the number of array coils for the same volume. As long as the noise in each observation of the 3D volume is i.i.d., I think DDM2 is able to handle it :)
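
If the repeated dimension comes from coil channels rather than repeated acquisitions, the remapping is just a transpose; a toy numpy sketch, assuming coil noise is approximately i.i.d. (the key assumption to verify, since coil noise is often correlated in practice):

```python
import numpy as np

rng = np.random.default_rng(0)
coil_stack = rng.standard_normal((8, 128, 128, 40))   # toy data: [n_coils, H, W, D]

# Map the coil axis onto the T axis DDM2 expects: [H, W, D, T] with T = n_coils.
volume_4d = np.transpose(coil_stack, (1, 2, 3, 0))
print(volume_4d.shape)                                 # (128, 128, 40, 8)
```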
