reference code for the dataset generation method #53

Open
KeyKy opened this issue Feb 15, 2022 · 5 comments

@KeyKy

KeyKy commented Feb 15, 2022

Hi, has the reference code for the dataset generation method been released?

@JiahangXu
Collaborator

Hi, thanks for your interest in nn-Meter! We do plan to release the data generation code. Due to other features with higher priority, the dataset generation code is planned to be released around May 2022.

@XYAskWhy

XYAskWhy commented Apr 14, 2022

> Hi, thanks for your interest in nn-Meter! We do plan to release the data generation code. Due to other features with higher priority, the dataset generation code is planned to be released around May 2022.

Hi, I am customizing a new predictor to try nn-Meter out. To make things easier, I only switched to another Cortex CPU and am still using the benchmark_model_cpu_v2.1 profiler you provided. Now I am at the final steps of Build Kernel Latency Predictor, and I realize that I won't be able to test the accuracy of the predictor. So I wonder how I could create my own benchmark datasets. Is that the dataset generation code you mentioned above? @JiahangXu

@JiahangXu
Collaborator

That's right. To test model-level prediction accuracy, you should generate some samples based on different network architectures. We plan to release the data generation code around May. The idea of dataset generation is to change configs, such as the output channels of every building block in MobileNet and the kernel size of the dwconv operation, while keeping one fixed network architecture. After generating models in .pb format, the nn-Meter IR graph can be generated by the command nn-meter get_ir --tensorflow <pb-file> [--output <output-name>]. The golden-standard latency can then be obtained by profiling. If you are in a hurry, you can follow the steps above to generate a benchmark dataset yourself. I'm sorry for the inconvenience.
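For anyone who wants to try this before the official release, here is a minimal sketch of the config-mutation idea, assuming TensorFlow 2.x/Keras; the block structure and the channel/kernel-size choices below are illustrative assumptions, not nn-Meter's actual sampling space:

import random
import tensorflow as tf

def build_variant(out_channels, dw_kernel_size):
    # One MobileNet-style stem with mutated config values.
    inputs = tf.keras.Input(shape=(224, 224, 3))
    x = tf.keras.layers.Conv2D(out_channels, 3, strides=2,
                               padding="same", activation="relu")(inputs)
    x = tf.keras.layers.DepthwiseConv2D(dw_kernel_size, padding="same",
                                        activation="relu")(x)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    outputs = tf.keras.layers.Dense(1000)(x)
    return tf.keras.Model(inputs, outputs)

for i in range(10):
    model = build_variant(out_channels=random.choice([16, 24, 32, 64]),
                          dw_kernel_size=random.choice([3, 5, 7]))
    # Export in SavedModel (.pb) format so the variant can be fed to
    # `nn-meter get_ir --tensorflow <pb-file>` and then profiled.
    model.save(f"variants/mobilenet_variant_{i}")

Each saved variant can then be converted to IR and profiled on the target device, yielding one (model, latency) pair of the benchmark dataset.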

@XYAskWhy

XYAskWhy commented Apr 24, 2022

> That's right. To test model-level prediction accuracy, you should generate some samples based on different network architectures. We plan to release the data generation code around May. The idea of dataset generation is to change configs, such as the output channels of every building block in MobileNet and the kernel size of the dwconv operation, while keeping one fixed network architecture. After generating models in .pb format, the nn-Meter IR graph can be generated by the command nn-meter get_ir --tensorflow <pb-file> [--output <output-name>]. The golden-standard latency can then be obtained by profiling. If you are in a hurry, you can follow the steps above to generate a benchmark dataset yourself. I'm sorry for the inconvenience.

Hi there. I tried to profile the models in pb_models.zip with benchmark_model_cpu_v2.1 on my Cortex-A73 device, but I can't convert the pb models to tflite format. The code I am using, for example, is:

import tensorflow as tf

tf_model_path = ".../mobilenetv2_0"
tflite_model_path = ".../mobilenetv2_0.tflite"

# Convert the SavedModel to a TFLite flatbuffer.
converter = tf.lite.TFLiteConverter.from_saved_model(tf_model_path)
tf_lite_model = converter.convert()

# Write the converted model bytes (not the path string) to disk.
with open(tflite_model_path, 'wb') as f:
    f.write(tf_lite_model)

and I always get this error:
RuntimeError: MetaGraphDef associated with tags {'serve'} could not be found in SavedModel. To inspect available tag-sets in the SavedModel, please use the SavedModel CLI: saved_model_cli available_tags: [set()]
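One possible cause, judging from the available_tags: [set()] in the message, is that the released .pb files are frozen GraphDefs rather than SavedModels exported with a 'serve' tag. If so, the TF1-style converter may work; this is a sketch under that assumption, and the input/output tensor names and shapes below are hypothetical (inspect the graph to find the real ones):

import tensorflow as tf

# Assumes the .pb is a frozen GraphDef; paths are elided as in the thread.
converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file=".../mobilenetv2_0",
    input_arrays=["input"],    # hypothetical input tensor name
    output_arrays=["output"],  # hypothetical output tensor name
    input_shapes={"input": [1, 224, 224, 3]},
)
tflite_model = converter.convert()
with open(".../mobilenetv2_0.tflite", "wb") as f:
    f.write(tflite_model)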
I also tried converting an onnx model to pb and then to tflite. That no longer raises an error, but the NCHW-to-NHWC input-format conversion introduces many Pad and Transpose ops in the output model, which significantly distort the profiled latency.
So while you are working on the dataset generation code, could you please open-source the tflite models in advance, as you did with pb_models.zip and onnx_models.zip? Or can you provide a workable method for converting your already open-sourced pb/onnx models to tflite format?
Besides, can I use the following command to profile a whole model, exactly as when profiling a kernel model?
./benchmark_model_cpu_v2.1 --kernel_path=/data/tf_benchmark/kernel.cl --num_threads=1 --num_runs=50 --warmup_runs=10 --graph=/data/tf_benchmark/mobilenetv2_0.tflite
If not, could you kindly share the correct profiling tool/method too?
Many thanks. @JiahangXu

@SakuraiYuuta

SakuraiYuuta commented Aug 11, 2023

Has the dataset generation code been released? I couldn't find it. Also, could you please share the code used to create the NAS-Bench-201 models? I can't create the NAS-Bench-201 models with TensorFlow.
