
Why do deeper CNNs have better shift consistency? #52

Open
vadik6666 opened this issue Mar 9, 2023 · 2 comments

Comments

@vadik6666

With baseline CNNs (no anti-aliasing), we see better shift consistency as the CNN's depth increases, e.g. VGG11 -> VGG19 or ResNet18 -> ResNet152. Why is that?
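For reference, shift consistency is commonly estimated as the fraction of random shift pairs on which a classifier's top-1 prediction agrees. The sketch below is illustrative, not the exact evaluation protocol from the paper; `predict`, `max_shift`, and `n_pairs` are assumed names, and circular shifts via `np.roll` stand in for the cropping-based shifts typically used on real images:

```python
import numpy as np

def shift_consistency(predict, images, max_shift=8, n_pairs=2, seed=0):
    """Estimate shift consistency: the fraction of random shift pairs
    for which the classifier's top-1 prediction agrees.

    predict: callable mapping an image array to a class index.
    images:  iterable of HxW (or HxWxC) arrays.
    """
    rng = np.random.default_rng(seed)
    agree, total = 0, 0
    for img in images:
        for _ in range(n_pairs):
            # Two random circular shifts of the same image.
            dy1, dx1, dy2, dx2 = rng.integers(-max_shift, max_shift + 1, size=4)
            a = np.roll(img, (dy1, dx1), axis=(0, 1))
            b = np.roll(img, (dy2, dx2), axis=(0, 1))
            agree += int(predict(a) == predict(b))
            total += 1
    return agree / total
```

A perfectly shift-invariant classifier scores 1.0 under this metric, while a classifier that depends on absolute pixel positions will generally score lower.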

@richzhang
Contributor

Good question. Higher accuracy naturally lends itself to better shift consistency: if a classifier predicts the correct label for every shifted copy of an image, its predictions across shifts necessarily agree. A classifier with 100% accuracy will therefore be perfectly consistent across shifts, even if it has no shift-invariant inductive bias to begin with.

@vadik6666
Author

Thank you @richzhang for the quick response!

  1. Suppose we use an ensemble of very deep CNNs as a teacher and distill it into a small student CNN, e.g. MobileNetV2. None of the CNNs here are anti-aliased. Do you think such a student model will have good shift consistency?
  2. Same situation as in (1), but the CNNs in the ensemble are anti-aliased versions while the student CNN is not (again a default MobileNetV2). What do you think about the shift consistency of the student model?
