
Performance metrics questions #2189

Closed · Peaceandmaths opened this issue May 15, 2024 · 5 comments

Peaceandmaths commented May 15, 2024

Dear nnunet team,

I am using nnUNetv2's nnUNetv2_evaluate_folder function to evaluate predictions. I have some questions about the metrics reported in the validation summary.json and in the summary.json of the postprocessed predictions:

  1. I would like to have more metrics calculated and reported in the validation and final test summary.json. How can I add Hausdorff Distance 95, False Positive Rate, Precision, Recall, Accuracy, False Negative Rate, and True Negative Rate?

  2. Is there a difference between foreground mean and mean? I usually get the same number for both.

  3. Is there a way to report results in the summary.json that are not only voxel-wise but also target-wise (i.e., with respect to connected components)?

  4. I see that the counts below are reported, but what is the total number of voxels? Or rather, is there a way to report TP, TN, FP, FN rates instead of raw voxel counts? (See the sketch right after this list.)
    n_pred = fp + tp
    n_ref = fn + tp
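For question 4, a minimal sketch of what such rates could look like, assuming you read the per-case tp/fp/fn/tn voxel counts out of summary.json; rates_from_counts is a hypothetical helper, not nnUNetv2 API:

```python
# Hypothetical helper: turn raw voxel counts from summary.json into rates.
# Assumes tp/fp/fn/tn are the per-case counts nnUNetv2 reports; the function
# itself is an illustration, not part of nnUNetv2.
def rates_from_counts(tp, fp, fn, tn):
    n_pred = fp + tp              # voxels predicted as foreground
    n_ref = fn + tp               # foreground voxels in the reference
    n_total = tp + fp + fn + tn   # total number of voxels in the image
    nan = float("nan")
    return {
        "Precision": tp / n_pred if n_pred else nan,
        "Recall (TPR)": tp / n_ref if n_ref else nan,
        "FNR": fn / n_ref if n_ref else nan,
        "FPR": fp / (fp + tn) if (fp + tn) else nan,
        "TNR": tn / (fp + tn) if (fp + tn) else nan,
        "Accuracy": (tp + tn) / n_total if n_total else nan,
    }
```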

Thank you for your support,
Katya


Merom99 commented May 15, 2024

Hi @Peaceandmaths, under the evaluation folder you will find evaluator.py. Open it and you will find a class called Evaluator with the default metrics and the default advanced metrics; remove the hash (#) in front of the ones you want and it should work!
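For orientation, the lists described here have roughly this shape (an illustrative fragment, not copied verbatim; open nnunet/evaluation/evaluator.py in your install for the actual lists):

```python
# Illustrative fragment of nnUNetv1's Evaluator (not verbatim):
class Evaluator:
    default_metrics = [
        "Dice",
        "Jaccard",
        "Precision",
        "Recall",
        # ... more metric names ...
    ]
    default_advanced_metrics = [
        # "Hausdorff Distance",
        "Hausdorff Distance 95",   # uncommented metrics get computed
        # "Avg. Surface Distance",
        # "Avg. Symmetric Surface Distance",
    ]
```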

Peaceandmaths (Author) commented

@Merom99 Thanks, I also thought so, but there is no such class in nnunetv2. If I go to the evaluation folder in nnunetv2, the advanced metrics mentioned above are not defined there.
[screenshot of the nnunetv2/evaluation folder contents]

Since I used nnunetv2 the whole time for training and predicting, can I use the evaluator from nnunet (version 1) to evaluate the predictions with the old, more extensive code that has all the metrics I need?
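In case that works, a minimal sketch, assuming nnUNet v1 is installed alongside v2 and that its evaluate_folder helper in nnunet/evaluation/evaluator.py still takes (ground-truth folder, prediction folder, labels):

```python
# Sketch: running the nnUNetv1 evaluator on nnUNetv2 predictions.
# Assumes nnUNet v1 is installed and evaluate_folder has this signature;
# check nnunet/evaluation/evaluator.py in your install before relying on it.
from nnunet.evaluation.evaluator import evaluate_folder

evaluate_folder(
    folder_with_gts="test_labels_path",               # ground-truth .nii.gz files
    folder_with_predictions="predicted_labels_path",  # nnUNetv2 outputs
    labels=(1,),                                      # foreground label(s)
)
# This should write a summary.json with the full v1 metric set
# (HD95, precision, recall, ...) into the prediction folder.
```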

mehnaz1985 commented

@Peaceandmaths, could you please post the command you use for evaluation here?

Is this right?
nnUNetv2_evaluate_folder -ref Gt_folder_path -pred Prediction_folder_path -l 1

Peaceandmaths (Author) commented

@mehnaz1985 My command:
nnUNetv2_evaluate_folder -djfile dataset.json_path -pfile nnUNet_results/Dataset{dataset_id}/nnUNetTrainer_nnUNetPlans__3d_fullres/plans.json --chill test_labels_path predicted_labels_path
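For readability, the same call with the arguments broken out (paths are the placeholders from above; the two positional arguments are the ground-truth labels folder and the predictions folder):

```
nnUNetv2_evaluate_folder test_labels_path predicted_labels_path \
    -djfile dataset.json_path \
    -pfile nnUNet_results/Dataset{dataset_id}/nnUNetTrainer_nnUNetPlans__3d_fullres/plans.json \
    --chill
```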

ykirchhoff (Contributor) commented

Hi @Peaceandmaths,

nnUNetv2 only contains code for evaluation of the metrics included in the summary.json files, so Dice and IoU. You can add your own metrics by adding them here; you can find code for a number of other metrics in nnUNetv1.

Best,
Yannick
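Until such metrics are built in, here is a self-contained sketch of computing them outside nnUNetv2, assuming predictions and references are label maps readable with SimpleITK; hd95 comes from MedPy (which nnUNetv1 also uses), and the lesion-wise recall is a hypothetical example of a target-wise (connected-component) metric:

```python
# Sketch: computing extra per-case metrics outside nnUNetv2.
# Assumes MedPy, SciPy and SimpleITK are installed; extra_metrics and
# "lesion_recall" are illustrative names, not nnUNet API.
import numpy as np
import SimpleITK as sitk
from medpy.metric.binary import hd95
from scipy.ndimage import label


def extra_metrics(pred_file, ref_file, label_id=1):
    pred_img = sitk.ReadImage(pred_file)
    ref_img = sitk.ReadImage(ref_file)
    pred = sitk.GetArrayFromImage(pred_img) == label_id
    ref = sitk.GetArrayFromImage(ref_img) == label_id
    spacing = ref_img.GetSpacing()[::-1]  # sitk is (x, y, z); arrays are (z, y, x)

    results = {}
    if pred.any() and ref.any():
        # HD95 is undefined if either mask is empty, hence the guard above.
        results["HD95"] = hd95(pred, ref, voxelspacing=spacing)

    # Target-wise (lesion-wise) recall: fraction of connected components in
    # the reference that are hit by at least one predicted voxel.
    ref_cc, n_ref_cc = label(ref)
    hit = sum(1 for c in range(1, n_ref_cc + 1) if pred[ref_cc == c].any())
    results["lesion_recall"] = hit / n_ref_cc if n_ref_cc else np.nan
    return results
```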
