Describe the bug
Trying to export the model with config configs/pfld/pfld_mbv2n_112.py fails with RuntimeError: quantized engine FBGEMM is not supported.

Environment
Environment you use when bug appears:
Python version: 3.10
PyTorch Version: torch==2.0.1
MMCV Version: 2.0.1
EdgeLab Version: na
Code you run:

```shell
python3 tools/export.py configs/pfld/pfld_mbv2n_112.py work_dirs/pfld_mbv2n_112/epoch_1.pth --target tflite --cfg-options data_root=datasets/meter/
```
The detailed error:

```
Traceback (most recent call last):
  File "/Users/SB/Projects/Software/Zephyros/Courses/Microcontrollers/ModelAssistant/tools/export.py", line 509, in <module>
    main()
  File "/Users/SB/Projects/Software/Zephyros/Courses/Microcontrollers/ModelAssistant/tools/export.py", line 501, in main
    export_tflite(args, model, loader)
  File "/Users/SB/Projects/Software/Zephyros/Courses/Microcontrollers/ModelAssistant/tools/export.py", line 375, in export_tflite
    ptq_model = quantizer.quantize()
  File "/opt/homebrew/Caskroom/miniconda/base/envs/sscma/lib/python3.10/site-packages/tinynn/graph/quantization/quantizer.py", line 530, in quantize
    qat_model = self.prepare_qat(rewritten_graph, self.is_input_quantized, self.backend, self.fuse_only)
  File "/opt/homebrew/Caskroom/miniconda/base/envs/sscma/lib/python3.10/site-packages/tinynn/graph/quantization/quantizer.py", line 3664, in prepare_qat
    self.prepare_qat_prep(graph, is_input_quantized, backend)
  File "/opt/homebrew/Caskroom/miniconda/base/envs/sscma/lib/python3.10/site-packages/tinynn/graph/quantization/quantizer.py", line 714, in prepare_qat_prep
    self.prepare_qconfig(graph, backend)
  File "/opt/homebrew/Caskroom/miniconda/base/envs/sscma/lib/python3.10/site-packages/tinynn/graph/quantization/quantizer.py", line 3598, in prepare_qconfig
    torch.backends.quantized.engine = backend
  File "/opt/homebrew/Caskroom/miniconda/base/envs/sscma/lib/python3.10/site-packages/torch/backends/quantized/__init__.py", line 33, in __set__
    torch._C._set_qengine(_get_qengine_id(val))
RuntimeError: quantized engine FBGEMM is not supported
```
Additional context
Running on Mac M2, torch cpu-only, mmcv compiled from source
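The failure can be reproduced outside the export script. This is a minimal diagnostic sketch (assuming only a local PyTorch install) that lists the quantization engines compiled into the current build and triggers the same RuntimeError on builds that lack FBGEMM:

```python
import torch

# PyTorch exposes the quantization engines compiled into this build.
engines = torch.backends.quantized.supported_engines
print("Supported quantized engines:", engines)

# On ARM CPU builds (e.g. Mac M2), 'fbgemm' is typically absent, and
# selecting it raises the same RuntimeError seen in the traceback above.
if "fbgemm" not in engines:
    try:
        torch.backends.quantized.engine = "fbgemm"
    except RuntimeError as exc:
        print(exc)
```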
Hi @MILK-BIOS, the error occurs because FBGEMM is not supported on ARM architectures such as the Mac M2, so apparently you need to select the QNNPACK backend instead, e.g. python tools/export.py --backend qnnpack.
Unfortunately, the Mac M2 (ARM) is not yet well supported, since it is a relatively new architecture. I had to make the following two changes to get it working:
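The backend switch suggested above can be sketched directly in Python: select the ARM-compatible QNNPACK engine before any quantization code (e.g. TinyNN's quantizer) runs. Whether tools/export.py actually exposes a --backend flag for this is the reply's suggestion, not verified here.

```python
import torch

# Switch the process-wide quantized engine to QNNPACK, which PyTorch
# supports on ARM CPUs, guarding against builds that lack it.
if "qnnpack" in torch.backends.quantized.supported_engines:
    torch.backends.quantized.engine = "qnnpack"

print("Active quantized engine:", torch.backends.quantized.engine)
```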