
How to change normalization strategy and weight map in fine tuning? #40

Open · Wenliangwang opened this issue May 28, 2019 · 10 comments
Labels: active (This ticket has pending action), enhancement (New feature or request)

@Wenliangwang

Hi,
Because the features of my images are diverse, I am trying to subtract the mean and divide by the standard deviation of each image. How can I do this during fine-tuning?
For my images, the foreground/background ratio is about 0.003. To decrease the weight of the background, what is the best value for v_bal, and how can I change it?
Thank you very much!

@ThorstenFalk (Collaborator) commented Jun 3, 2019

The newest version of the plugin allows you to select the normalization mode in "U-Net->Utilities->Create New Model" (there you can also change the weights). If you want to use custom pre-normalization, select "No normalization"; "Zero mean, unit standard deviation" is also available as a normalization strategy.
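For the custom pre-normalization route, a minimal Python sketch of per-image zero-mean, unit-standard-deviation scaling (function name and data are illustrative, not part of the plugin):

```python
import numpy as np

def normalize_per_image(img, eps=1e-8):
    """Subtract the per-image mean and divide by the per-image standard
    deviation, so each image is normalized independently of the others."""
    img = img.astype(np.float32)
    return (img - img.mean()) / (img.std() + eps)

# Normalize every image in a stack before feeding it to the plugin
# with "No normalization" selected.
stack = np.random.rand(10, 512, 512).astype(np.float32)  # placeholder data
normalized = np.stack([normalize_per_image(im) for im in stack])
```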

Since v_bal comes into play in two places, once for tile selection and a second time in pixel weighting, I'd suggest not lowering v_bal below sqrt(fg/bg ratio); in your case that would be sqrt(0.003) (around 1/18). There is a good reason to choose it closer to 1: the lower the balancing term, the more biased the network will be towards foreground. If you had an arbitrary amount of training time, I would suggest avoiding re-balancing entirely, so that the network learns the correct foreground/background bias.
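A minimal numeric sketch of that rule of thumb (the foreground mask and the resulting weight map are placeholders; the plugin builds its actual weight map internally):

```python
import numpy as np

fg_bg_ratio = 0.003           # foreground/background pixel ratio
v_bal = np.sqrt(fg_bg_ratio)  # suggested floor, ~0.055, i.e. about 1/18

# Background pixels receive weight v_bal, foreground pixels weight 1;
# values between sqrt(fg_bg_ratio) and 1 trade balancing against bias.
fg_mask = np.zeros((256, 256), dtype=bool)  # placeholder foreground mask
weights = np.where(fg_mask, 1.0, v_bal)
print(f"v_bal = {v_bal:.3f} (~1/{1 / v_bal:.0f})")
```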

@Wenliangwang (Author)

Thanks a lot.
I selected "(2) --[mean stddev] -> [0, 1] per channel" as the normalization mode in "U-Net->Utilities->Create New Model" and saved it as a modeldef.h5 file. Then I selected this modeldef in Finetuning (Job Manager) together with the pre-trained weight file (2d_cell_net_v0.modeldef.h5), but it shows "No compatible pre-trained weights found".

With a different normalization strategy, how can I use the pre-trained weights?
Thank you very much.

@ThorstenFalk (Collaborator)

The modeldef.h5 file only contains the architecture and the hyperparameters for pre-processing and augmentation. The actual model weights are stored in a corresponding file ending in .caffemodel.h5. You have two options:

  1. You can try to use the 2d_cell_net_v0.caffemodel.h5 file as the weights file, but since the model was trained with Min/Max normalization, results will be poor before finetuning.
  2. You can train from scratch; for this, just leave the weights file blank.
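To check which kind of file you have, a quick inspection with h5py works (file names follow the ones mentioned above; the exact group layout depends on the export):

```python
import h5py

# modeldef.h5: architecture plus pre-processing/augmentation hyperparameters
with h5py.File("2d_cell_net_v0.modeldef.h5", "r") as f:
    f.visit(print)  # prints groups such as unet_param/...

# caffemodel.h5: the actual trained layer weights
with h5py.File("2d_cell_net_v0.caffemodel.h5", "r") as f:
    f.visit(print)  # prints the layer weight blobs
```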

ThorstenFalk reopened this Jun 3, 2019
@Wenliangwang (Author)

  1. Sorry, I made a mistake. I did select the 2d_cell_net_v0.caffemodel.h5 file as the weights file, and the message "No compatible pre-trained weights found...." pops up. How can I fix it then?
  2. I will also try to train from scratch.

Thank you.

@ThorstenFalk
Copy link
Collaborator

Maybe the model is indeed not 100% compatible then, but training from scratch makes sense in your case anyway.

@Wenliangwang (Author)

I found that my IoU (intersection over union) was increasing between iterations 5000 and 10000, but F1 (Segmentation) began to decrease. What does F1 measure? Should I stop training before F1 decreases? I want to do segmentation.

[Attached plots: IoU, F1 (Segmentation), and validation loss curves]

Thank you.

@ThorstenFalk (Collaborator)

There are two different modes of segmentation: semantic segmentation, which simply classifies each pixel as belonging to (any) object or to background, and instance segmentation, in which the goal is to additionally tell different instances of foreground objects apart. IoU measures semantic segmentation quality, so an increase in IoU means the segmentations become finer and more accurate. F1 measures the ability to separate instances. It is the harmonic mean of precision (how many detections are true positives) and recall (how many objects are detected at all).
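A minimal sketch of the two scores from raw counts (how detections are matched to ground-truth objects before counting is not shown here):

```python
def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall, from instance counts."""
    precision = tp / (tp + fp)  # fraction of detections that are real objects
    recall = tp / (tp + fn)     # fraction of real objects that were detected
    return 2 * precision * recall / (precision + recall)

def iou(intersection_pixels, union_pixels):
    """Semantic segmentation quality: shared pixels over combined pixels."""
    return intersection_pixels / union_pixels

print(f1_score(tp=90, fp=10, fn=20))  # example counts -> ~0.857
```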

Since your IoU still increases and your validation loss still decreases, I would continue training, although both scores already indicate a rather good model.

@Wenliangwang (Author)

Thank you!

@Wenliangwang (Author)

I found foregroundBackgroundRatio in the .modeldef.h5 file (['unet_param']['pixelwise_loss_weights']['foregroundBackgroundRatio']). v_bal = foregroundBackgroundRatio, am I right? If I change the value of foregroundBackgroundRatio, will v_bal be changed?
Thank you.

@ThorstenFalk (Collaborator)

Correct.
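For reference, a hedged h5py sketch of editing that value in place, assuming it is stored as a dataset at the path quoted above rather than as an attribute (the filename is hypothetical; back up the file first):

```python
import h5py

path = "my_model.modeldef.h5"  # hypothetical filename
with h5py.File(path, "r+") as f:
    ds = f["unet_param/pixelwise_loss_weights/foregroundBackgroundRatio"]
    print("old v_bal:", ds[()])
    ds[...] = 0.055  # e.g. sqrt(0.003), as suggested earlier in this thread
```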

ThorstenFalk self-assigned this Jun 18, 2019
ThorstenFalk added the active (This ticket has pending action) and enhancement (New feature or request) labels Jun 18, 2019