
Evaluation #21

Open
DayrisRM opened this issue Jan 27, 2024 · 5 comments

@DayrisRM

Hi @tiangexiang

I have a few questions about evaluation and metrics.
I ran denoise.py with the save option, and in the code I see:

        if args.save:
            dataset_opt['val_volume_idx'] = 32  # save only the 32nd volume
            dataset_opt['val_slice_idx'] = 'all'  # save all slices

After this step I am going to run the evaluation metrics, so my question is:
Is that value (val_volume_idx = 32) correct?

My second question is related to quantitative_metrics.ipynb:
Can I use quantitative_metrics.ipynb to evaluate the results obtained for PPMI or Sherbrooke datasets?

Thanks!

@tiangexiang
Collaborator

Hi, val_volume_idx = 32 here is just a random choice from all possible volumes in the dataset. You should choose the volume accordingly if you need to denoise a specific one. From a non-medical perspective, I think all volumes are pretty much the same if they share the same b-value, so choosing a random volume to denoise is fine.
In quantitative_metrics.ipynb, we have to identify the regions for the corpus callosum and update the region parameter accordingly. This may rely on medical expertise for the PPMI and Sherbrooke datasets.
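
(For illustration only, not the notebook's actual interface: a minimal nibabel sketch of restricting summary statistics to a manually chosen corpus-callosum bounding box. The file names and index ranges are placeholders and need anatomical verification for PPMI or Sherbrooke.)

    import nibabel as nib

    # Placeholder file names -- substitute the actual noisy input and denoise.py output.
    noisy    = nib.load('/Hardi/noisy.nii.gz').get_fdata()
    denoised = nib.load('results/hardi150_denoised.nii.gz').get_fdata()

    # Placeholder corpus-callosum bounding box (x, y, z index ranges);
    # these must be identified anatomically for each dataset.
    cc_region = (slice(40, 60), slice(55, 85), slice(30, 38))

    vol_idx = 32  # the volume saved by denoise.py with the save option (args.save)
    noisy_cc    = noisy[..., vol_idx][cc_region]
    denoised_cc = denoised[..., vol_idx][cc_region] if denoised.ndim == 4 else denoised[cc_region]

    # Region-restricted statistics for a simple before/after comparison.
    print('CC mean/std (noisy):   ', noisy_cc.mean(), noisy_cc.std())
    print('CC mean/std (denoised):', denoised_cc.mean(), denoised_cc.std())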

@DayrisRM
Author

Hi @tiangexiang, thanks for your help!

I have executed all steps (stages 1, 2, 3 and denoise.py) for HARDI150, but the results are not good.
I wonder if I made any mistakes in the process, so I have two questions regarding denoise.py:

1- Are you using the same set of data that you used in the previous stages (1, 2, 3)?
I have used the same dataset (HARDI150) for the whole process.

2- Before running denoise.py, I set "noise_model" -> "resume_state" to the stage 3 model I had previously trained.
Is this correct?

When running denoise.py for HARDI, my configuration looks like this:

 {
    "name": "hardi150",
    "phase": "train", // always set to train in the config
    "gpu_ids": [
        0
    ],
    "path": { //set the path
        "log": "logs",
        "tb_logger": "tb_logger",
        "results": "results",
        "checkpoint": "checkpoint",
        "resume_state": null // UPDATE THIS FOR RESUMING TRAINING
    },
    "datasets": {
        "train": {
            "name": "hardi",
            "dataroot": "/Hardi/noisy.nii.gz", 
            "valid_mask": [10,160],
            "phase": "train",
            "padding": 3,
            "val_volume_idx": 40, // the volume to visualize for validation
            "val_slice_idx": 40, // the slice to visualize for validation
            "batch_size": 32,
            "in_channel": 1,
            "num_workers": 0,
            "use_shuffle": true
        },
        "val": {
            "name": "hardi",
            "dataroot": "/Hardi/noisy.nii.gz",  
            "valid_mask": [10,160],
            "phase": "val",
            "padding": 3,
            "val_volume_idx": 40, // the volume to visualize for validation
            "val_slice_idx": 40, // the slice to visualize for validation
            "batch_size": 1,
            "in_channel": 1,
            "num_workers": 0
        }
    },
    "model": {
        "which_model_G": "mri",
        "finetune_norm": false,
        "drop_rate": 0.0,
        "unet": {
            "in_channel": 1,
            "out_channel": 1,
            "inner_channel": 32,
            "norm_groups": 32,
            "channel_multiplier": [
                1,
                2,
                4,
                8,
                8
            ],
            "attn_res": [
                16
            ],
            "res_blocks": 2,
            "dropout": 0.0,
            "version": "v1"
        },
        "beta_schedule": { // use munual beta_schedule for acceleration
            "train": {
                "schedule": "rev_warmup70",
                "n_timestep": 1000,
                "linear_start": 5e-5,
                "linear_end": 1e-2
            },
            "val": {
                "schedule": "rev_warmup70",
                "n_timestep": 1000,
                "linear_start": 5e-5,
                "linear_end": 1e-2
            }
        },
        "diffusion": {
            "image_size": 128,
            "channels": 3, //sample channel
            "conditional": true // not used for DDM2
        }
    },
    "train": {
        "n_iter": 100000, //150000,
        "val_freq": 1e3,
        "save_checkpoint_freq": 1e4,
        "print_freq": 1e2,
        "optimizer": {
            "type": "adam",
            "lr": 1e-4
        },
        "ema_scheduler": { // not used now
            "step_start_ema": 5000,
            "update_ema_every": 1,
            "ema_decay": 0.9999
        }
    },
    // for Phase1
    "noise_model": {
        "resume_state": "/experiments/hardi150_240127_125623/checkpoint/latest", 
        "drop_rate": 0.0,
        "unet": {
            "in_channel": 2,
            "out_channel": 1,
            "inner_channel": 32,
            "norm_groups": 32,
            "channel_multiplier": [
                1,
                2,
                4,
                8,
                8
            ],
            "attn_res": [
                16
            ],
            "res_blocks": 2,
            "dropout": 0.0,
            "version": "v1"
        },
        "beta_schedule": { // use munual beta_schedule for accelerationß
            "linear_start": 5e-5,
            "linear_end": 1e-2
        },
        "n_iter": 10000,
        "val_freq": 2e3,
        "save_checkpoint_freq": 1e4,
        "print_freq": 1e3,
        "optimizer": {
            "type": "adam",
            "lr": 1e-4
        }
    },
    "stage2_file": "/initial_stage_file.txt" 
}

@tiangexiang
Collaborator

  1. Yes, we have to use the same data for both training and inference.
  2. Sorry that my memory is a bit blurry, but I think you should specify the path of the phase 3 trained model in "path" -> "resume_state" as well.
    Please give it another try!
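
For reference, a sketch of how the top-level "path" block in the config above might look once the Stage 3 checkpoint is added (the experiment folder name is a placeholder):

    "path": { //set the path
        "log": "logs",
        "tb_logger": "tb_logger",
        "results": "results",
        "checkpoint": "checkpoint",
        "resume_state": "/experiments/<your_stage3_run>/checkpoint/latest" // Stage 3 checkpoint, per the reply above
    },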

@DayrisRM
Author

DayrisRM commented Feb 6, 2024

Hi @tiangexiang!
I gave it another go, and I am afraid I got the same results. Below are screenshots of:
A) the difference map between noisy_data and my_denoised_data, and B) the difference map between your denoised data and my_denoised_data (both, in theory, produced by the same procedure).

A) [screenshot A-noise-diff-exp2: difference map between noisy_data and my_denoised_data]

B) [screenshot B-exp2-dif-paper: difference map between the provided denoised data and my_denoised_data]

In A) you can already see some structure. But B) is much more telling, as you can clearly see the diffusion and WM anatomy.
Why could this be happening?
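
(For reference, a minimal sketch of one way to compute such a difference map, assuming both files are co-registered 4D volumes of the same shape; the file names are placeholders and matplotlib is only used for display.)

    import nibabel as nib
    import matplotlib.pyplot as plt

    # Placeholder file names -- substitute the actual volumes being compared.
    noisy    = nib.load('/Hardi/noisy.nii.gz').get_fdata()
    denoised = nib.load('results/my_denoised.nii.gz').get_fdata()

    vol_idx, slice_idx = 32, 40  # volume and slice to inspect
    diff = noisy[..., vol_idx] - denoised[..., vol_idx]

    plt.imshow(diff[:, :, slice_idx].T, cmap='gray', origin='lower')
    plt.title('noisy - denoised difference map')
    plt.colorbar()
    plt.show()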

@DayrisRM
Author

DayrisRM commented Feb 6, 2024

Another question: in the denoised datasets you provided, I could see some slice artifacts, especially in the coronal view (see screenshot below).
I have seen those in my results as well. I guess this is produced by the slice-wise denoising the network learns?
I tried to run the denoising by volumes to investigate this further but could not figure it out.
Could you kindly point me to what I need to modify?

[screenshot 2-paper_denoised: coronal view of the provided denoised data showing slice artifacts]
