After searching, is fine-tuning a necessary step? Why can't we use the inherited weights directly? #7

Open
HeYDwane3 opened this issue Jan 19, 2021 · 2 comments

Comments

@HeYDwane3

I noticed that, after searching, we get a .config and a .inherited file. For validation, we need a .config and a .init file. So is the only way to get the .init file to fine-tune starting from the .inherited file?

The accuracy reported during search is predicted, right? And if we want to get the real test results, we have to fine-tune for 450 epochs, right? You call it fine-tuning, but it looks more like a retraining process.

Since we already have the .inherited file, how can we use it directly to run a real test on the ImageNet validation set?
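
Concretely, what I have in mind is something along these lines (a rough sketch: I am assuming the subnet can be built from the .config file and that the .inherited file is a plain PyTorch state dict; `build_subnet` and `imagenet_val_loader` are hypothetical placeholders, not the repo's actual API):

```python
# Sketch of "use the inherited weights directly" -- assumptions noted below.
import torch

# Hypothetical helper; evaluator.py in this repo may offer an equivalent.
subnet = build_subnet('net.config')

# Assumption: net.inherited is a plain PyTorch state dict.
state = torch.load('net.inherited', map_location='cpu')
subnet.load_state_dict(state)
subnet.eval()

correct, total = 0, 0
with torch.no_grad():
    # imagenet_val_loader: a standard ImageNet validation DataLoader (hypothetical).
    for images, labels in imagenet_val_loader:
        pred = subnet(images).argmax(dim=1)
        correct += (pred == labels).sum().item()
        total += labels.size(0)
print(f'top-1 accuracy: {100.0 * correct / total:.2f}%')
```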

@euminds

euminds commented Feb 20, 2021

Did you encounter this kind of error when searching? An error is reported when the installed ofa version is ofa 0.1.0-202012082159. I then tried ofa 0.0.4-2012082155, but the same error still occurs.
```
Traceback (most recent call last):
  File "msunas.py", line 8, in <module>
    from evaluator import OFAEvaluator, get_net_info
  File "/data8T/nsganetv2-master/evaluator.py", line 8, in <module>
    from codebase.networks import NSGANetV2
  File "/data8T/nsganetv2-master/codebase/networks/__init__.py", line 1, in <module>
    from ofa.imagenet_codebase.networks.proxyless_nets import ProxylessNASNets, proxyless_base, MobileNetV2
ModuleNotFoundError: No module named 'ofa.imagenet_codebase'
```
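
If this is just a module rename, a fallback import might work around it. A sketch, assuming the installed ofa release moved ofa.imagenet_codebase to ofa.imagenet_classification (true for newer ofa releases, but unverified for the versions above):

```python
# Fallback import for codebase/networks/__init__.py -- a sketch, assuming
# the installed ofa release renamed imagenet_codebase to
# imagenet_classification (the case in newer ofa releases; unverified here).
try:
    from ofa.imagenet_codebase.networks.proxyless_nets import (
        ProxylessNASNets, proxyless_base, MobileNetV2)
except ModuleNotFoundError:
    from ofa.imagenet_classification.networks.proxyless_nets import (
        ProxylessNASNets, proxyless_base, MobileNetV2)
```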

@vinh-cao

vinh-cao commented Oct 6, 2022

> I noticed that, after searching, we get a .config and a .inherited file. For validation, we need a .config and a .init file. So is the only way to get the .init file to fine-tune starting from the .inherited file?
>
> The accuracy reported during search is predicted, right? And if we want to get the real test results, we have to fine-tune for 450 epochs, right? You call it fine-tuning, but it looks more like a retraining process.
>
> Since we already have the .inherited file, how can we use it directly to run a real test on the ImageNet validation set?

Quote from the paper:
"An alternative approach to solve the bi-level NAS problem, i.e., simultaneously optimizing the architecture and learn the optimal model weights."

After working with the source code and reading the paper again, I feel like this repo is not an exact implementation of the paper. The trained weights of the candidates are dropped after their KPIs are obtained; the result is a list of architecture codes plus their KPIs, so you need to retrain each candidate. Sure, you can easily just save the weights of a candidate and continue training, but does it make sense to update the weights of the supernet the way gradient-based algorithms do?
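
For reference, "save the weights of the candidate" would amount to something like the sketch below (`evaluate_top1` and the archive layout are hypothetical names, not the repo's actual API):

```python
# Sketch of keeping candidate weights instead of dropping them after the
# KPI is measured. evaluate_top1 and the file layout are hypothetical.
import os
import torch

def evaluate_and_archive(subnet, config, save_dir, idx):
    top1 = evaluate_top1(subnet)  # hypothetical: measures the candidate's KPI
    weight_path = os.path.join(save_dir, f'cand_{idx}.inherited')
    torch.save(subnet.state_dict(), weight_path)  # keep the weights for later reuse
    return {'config': config, 'top1': top1, 'weights': weight_path}
```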
