Freezing layers - EfficientDet #87

Open
aritzLizoain opened this issue Dec 22, 2020 · 3 comments

@aritzLizoain

Hi!

Is the load_pretrained_model_from feature performing transfer learning with frozen layers, or would this need to be implemented somewhere else?

I would like to freeze all layers but the last one and train using a pretrained model. How should I do this? Maybe something like the following, but I don't really know where to write it.

efficientdet = self.system_dict["local"]["model"]
for p in efficientdet.parameters():
    p.requires_grad = False
for p in efficientdet[-1].parameters():
    p.requires_grad = True
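
For reference, the generic PyTorch pattern would look roughly like the sketch below. The head attribute name is hypothetical: since the EfficientDet model is an nn.Module rather than an nn.Sequential, indexing it with [-1] will likely raise a TypeError, so the last block has to be picked out by name (e.g. via model.named_children()).

    import torch.nn as nn

    def freeze_all_but_head(model: nn.Module, head_name: str = "head"):
        # Freeze every parameter of the model first.
        for p in model.parameters():
            p.requires_grad = False
        # Then unfreeze only the module selected by name; inspect
        # model.named_children() to find the actual head name in this repo.
        for p in getattr(model, head_name).parameters():
            p.requires_grad = True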

Thank you in advance!

@aritzLizoain (Author)

I wrote the following lines in train_detector.py in order to perform transfer learning:

  • in Set_Hyperparams: modified the optimizer so that it only receives trainable parameters

    self.system_dict["local"]["optimizer"] = torch.optim.Adam(
        filter(lambda x: x.requires_grad, self.system_dict["local"]["model"].parameters()),
        self.system_dict["params"]["lr"])

  • in Train: freeze layers, where freeze is the list of layers to be frozen, e.g. ['conv3', 'conv4', 'backbone_net']

    for name, child in self.system_dict["local"]["model"].module.named_children():
        if name in freeze:
            print(name + ': FROZEN')
            for param in child.parameters():
                param.requires_grad = False
        else:
            print(name + ': UNFROZEN')
            for param in child.parameters():
                param.requires_grad = True

However, I'm still not sure whether this is the correct way to do it.
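
One way to sanity-check a setup like this is to count trainable versus total parameters after the freezing code has run (a generic PyTorch sketch, not specific to this repo; the count_trainable helper name is made up):

    def count_trainable(model):
        # Report how many parameters will actually be updated by the optimizer.
        trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
        total = sum(p.numel() for p in model.parameters())
        print(f"trainable: {trainable:,} / total: {total:,}")

    # e.g. count_trainable(self.system_dict["local"]["model"].module)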

@pratt3000 commented Jul 25, 2021

if opt.head_only:
    # Freeze any submodule whose class name contains 'EfficientNet' or 'BiFPN',
    # i.e. the backbone and the feature pyramid; the detection heads stay trainable.
    def freeze_backbone(m):
        classname = m.__class__.__name__
        for ntl in ['EfficientNet', 'BiFPN']:
            if ntl in classname:
                for param in m.parameters():
                    param.requires_grad = False

    model.apply(freeze_backbone)
    print('[Info] freezed backbone')

This is how I am doing it.
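
For what it's worth, model.apply() visits every submodule recursively, so the class-name check catches the EfficientNet backbone and the BiFPN wherever they sit in the module tree, while the remaining modules (presumably the box-regression and classification heads) keep the default requires_grad = True.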

@pratt3000

Btw, I am trying to freeze half of the network but can't pinpoint the initial layers to freeze due to the architecture. Anyone got any idea about how that could be done?
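
One way to decide where to cut (a generic PyTorch sketch, assuming direct access to the model object; nothing here is specific to this repo) is to list the top-level children with their parameter counts, then freeze the first few by position or by name:

    # Inspect the top-level structure and the size of each block.
    for name, child in model.named_children():
        n_params = sum(p.numel() for p in child.parameters())
        print(f"{name}: {n_params:,} parameters")

    # Freeze, for example, the first k top-level children
    # (k chosen after looking at the printout above).
    k = 2
    for i, (name, child) in enumerate(model.named_children()):
        if i < k:
            for param in child.parameters():
                param.requires_grad = False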
