The paper says that
NSGA-Net + macro search space takes 8 GPU-days
NSGA-Net (6 @ 560) + cutout takes 4 GPU-days
But when I ran this search code on a Tesla P100-PCIE-16GB, the results were different:
Macro takes 4 days
Micro takes 41 days (30 min per network, so 0.5 hr * 40 * 50 / 24 hr ≈ 41.6 days)
These results are far from the paper's. Can you explain?
Thanks!
The discrepancy in search time is because the default hyper-parameters provided are not consistent with those used in the paper. Please use the following settings to run the search on the micro space.
A bit more detail: we halved the number of offspring created in each generation in the micro search space case to reduce the search cost from 8 to 4 days. At the same time, we adjusted the initial channels and number of epochs to match the architectures' training time during search between macro search space models and micro search space models. But the archive size (population size) is the same in both cases.
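To see how the offspring count drives the total cost, here is a minimal back-of-the-envelope sketch. The per-network time (0.5 hr), offspring count (40), and generation count (50) are the numbers quoted in this thread, not values read from the repository's code:

```python
def search_cost_gpu_days(hours_per_network, offspring_per_gen, generations):
    """Estimated total search cost in GPU-days on a single GPU."""
    return hours_per_network * offspring_per_gen * generations / 24.0

# Default micro-space settings as observed by the reporter:
# 0.5 hr per network, 40 offspring per generation, 50 generations.
default_cost = search_cost_gpu_days(0.5, 40, 50)
print(f"default settings: {default_cost:.1f} GPU-days")  # ~41.7

# Halving the offspring per generation halves the cost. In the paper the
# same change took the micro search from 8 to 4 GPU-days, because the
# adjusted initial channels and epochs made each network much faster to
# train than the 0.5 hr assumed here.
halved_cost = search_cost_gpu_days(0.5, 20, 50)
print(f"halved offspring: {halved_cost:.1f} GPU-days")  # ~20.8
```

This is why matching both the offspring count and the per-network training budget to the paper's settings is needed to reproduce the reported 4 GPU-days.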