Some doubts about the predictor #53

Open
qiuqi1207 opened this issue May 12, 2021 · 2 comments
Comments

@qiuqi1207

I sampled 800 subnet configurations from the OFA net, with the initial parameters loaded from 'https://hanlab.mit.edu/files/OnceForAll/ofa_nets/'.
I want to use these subnets to test the predictor in ofa/tutorial/accuracy_predictor.py; the predictor loads the pretrained model from
'https://hanlab.mit.edu/files/OnceForAll/tutorial/acc_predictor.pth'.
The plot shows a strange result:

[figure: scatter plot of predicted vs. measured accuracy for the sampled subnets]

The red dots come from my own predictor.
Why aren't the blue points on a straight line?
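For context, this is roughly how I generate the comparison. It is a minimal sketch, not my exact script: measure_top1 is a placeholder for my own ImageNet-val evaluation loop, and the AccuracyPredictor constructor, the predict_accuracy call, and the 'r' resolution key are my reading of ofa/tutorial, so please correct me if they differ:

```python
# Minimal sketch (assumptions noted above): sample subnets, measure them,
# query the tutorial accuracy predictor, and plot predicted vs. measured.
import random

import matplotlib.pyplot as plt
from ofa.model_zoo import ofa_net
from ofa.tutorial import AccuracyPredictor


def measure_top1(subnet, image_size):
    """Placeholder: run an ImageNet-val evaluation loop at `image_size`
    and return the subnet's top-1 accuracy."""
    raise NotImplementedError


ofa_network = ofa_net("ofa_mbv3_d234_e346_k357_w1.0", pretrained=True)
predictor = AccuracyPredictor(pretrained=True, device="cpu")  # loads acc_predictor.pth

measured, predicted = [], []
for _ in range(20):  # I actually use 800 samples; 20 keeps the sketch cheap
    cfg = ofa_network.sample_active_subnet()               # {'ks': [...], 'e': [...], 'd': [...]}
    cfg["r"] = [random.choice([160, 176, 192, 208, 224])]  # resolution key the predictor expects (assumption)
    subnet = ofa_network.get_active_subnet(preserve_weight=True)

    measured.append(measure_top1(subnet, cfg["r"][0]))
    predicted.append(float(predictor.predict_accuracy([cfg])))

plt.scatter(measured, predicted)
lims = [min(measured), max(measured)]
plt.plot(lims, lims)  # ideal case: predictions fall on the y = x line
plt.xlabel("measured top-1 accuracy")
plt.ylabel("predicted top-1 accuracy")
plt.show()
```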

@av-savchenko

av-savchenko commented Aug 27, 2021

I have also observed similar behavior with the default accuracy predictor: it predicts accuracies that are too high compared to the real accuracy computed on the 50K validation images of ILSVRC2012. I cordially ask the authors to release the training code and details about the training data for their accuracy predictor.
BTW, the paper contains only the following description of this process: "we randomly sample 16K sub-networks with different architectures and input image sizes, then measure their accuracy on 10K validation images sampled from the original training set". Does this mean the accuracy predictor was trained by computing accuracy on training images, causing the predicted accuracy on the validation split to be too high? However, when I tested the accuracy predictor on a subset of the ILSVRC2012 training set, I saw much higher real accuracy (~90%), so that still does not seem to be the correct way to gather the dataset for training the accuracy predictor.
Finally, I noticed that the AccuracyPredictor classes have different implementations in https://github.com/mit-han-lab/once-for-all/blob/master/ofa/nas/accuracy_predictor/acc_predictor.py and https://github.com/mit-han-lab/once-for-all/blob/master/ofa/tutorial/accuracy_predictor.py. In particular, the former adds self.base_acc to the predicted accuracy. However, it seems that base_acc, computed as "mean(Y_all)", is simply induced as a bias of the pre-trained model.
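To make the last point concrete, here is a toy sketch of my own (with random data standing in for the measured architectures and a plain MLP standing in for the predictor, not the authors' training code) showing that base_acc = mean(Y_all) only shifts the output and can be folded into the last layer's bias:

```python
# Toy illustration: regressing Y - mean(Y) and adding the mean back at
# inference is equivalent to letting the final linear layer carry an extra bias.
import torch
import torch.nn as nn

X = torch.rand(2000, 128)              # fake architecture encodings
Y = 0.75 + 0.05 * torch.rand(2000, 1)  # fake measured accuracies

base_acc = Y.mean()                    # the "mean(Y_all)" value
model = nn.Sequential(nn.Linear(128, 400), nn.ReLU(), nn.Linear(400, 1))

opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), Y - base_acc)  # train on residuals
    loss.backward()
    opt.step()

with torch.no_grad():
    pred_a = model(X[:5]) + base_acc   # predicted + self.base_acc, as in ofa/nas
    model[-1].bias += base_acc         # fold base_acc into the last bias instead
    pred_b = model(X[:5])
print(torch.allclose(pred_a, pred_b))  # True: base_acc acts purely as a bias term
```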

Please explain how you trained the default accuracy predictor.

@ifed-ucsd

Has this issue been resolved? I'm facing the same difficulty.
