Exporting model to TFlite fails with quantized engine FBGEMM is not supported #181

Open

sbocconi opened this issue Mar 13, 2024 · 4 comments

@sbocconi

Describe the bug
Exporting the model with the config configs/pfld/pfld_mbv2n_112.py fails with RuntimeError: quantized engine FBGEMM is not supported
Environment
The environment in which the bug appears:

  1. Python version: 3.10
  2. PyTorch Version: torch==2.0.1
  3. MMCV Version: 2.0.1
  4. EdgeLab Version: n/a
  5. Code you run
    python3 tools/export.py configs/pfld/pfld_mbv2n_112.py work_dirs/pfld_mbv2n_112/epoch_1.pth --target tflite --cfg-options data_root=datasets/meter/
  6. The detailed error:
Traceback (most recent call last):
  File "/Users/SB/Projects/Software/Zephyros/Courses/Microcontrollers/ModelAssistant/tools/export.py", line 509, in <module>
    main()
  File "/Users/SB/Projects/Software/Zephyros/Courses/Microcontrollers/ModelAssistant/tools/export.py", line 501, in main
    export_tflite(args, model, loader)
  File "/Users/SB/Projects/Software/Zephyros/Courses/Microcontrollers/ModelAssistant/tools/export.py", line 375, in export_tflite
    ptq_model = quantizer.quantize()
  File "/opt/homebrew/Caskroom/miniconda/base/envs/sscma/lib/python3.10/site-packages/tinynn/graph/quantization/quantizer.py", line 530, in quantize
    qat_model = self.prepare_qat(rewritten_graph, self.is_input_quantized, self.backend, self.fuse_only)
  File "/opt/homebrew/Caskroom/miniconda/base/envs/sscma/lib/python3.10/site-packages/tinynn/graph/quantization/quantizer.py", line 3664, in prepare_qat
    self.prepare_qat_prep(graph, is_input_quantized, backend)
  File "/opt/homebrew/Caskroom/miniconda/base/envs/sscma/lib/python3.10/site-packages/tinynn/graph/quantization/quantizer.py", line 714, in prepare_qat_prep
    self.prepare_qconfig(graph, backend)
  File "/opt/homebrew/Caskroom/miniconda/base/envs/sscma/lib/python3.10/site-packages/tinynn/graph/quantization/quantizer.py", line 3598, in prepare_qconfig
    torch.backends.quantized.engine = backend
  File "/opt/homebrew/Caskroom/miniconda/base/envs/sscma/lib/python3.10/site-packages/torch/backends/quantized/__init__.py", line 33, in __set__
    torch._C._set_qengine(_get_qengine_id(val))
RuntimeError: quantized engine FBGEMM is not supported

Additional context
Running on a Mac M2, with CPU-only torch and mmcv compiled from source.
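
For reference, you can check which quantized engines a given PyTorch build actually supports before exporting (a minimal diagnostic sketch; the exact list depends on your build):

    import torch

    # Lists the quantized engines compiled into this PyTorch build.
    # On Apple Silicon this typically includes 'qnnpack' but not 'fbgemm'.
    print(torch.backends.quantized.supported_engines)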

@MILK-BIOS
Contributor

Hi @sbocconi! Was your torch installed from pip or conda? We strongly recommend using pip to install torch.

@sbocconi
Author

sbocconi commented Mar 14, 2024

Hi @MILK-BIOS, the error occurs because FBGEMM is not supported on ARM architectures such as the Mac M2, so apparently you need to run python tools/export.py with --backend qnnpack.
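
Per the traceback above, TinyNN simply assigns the chosen backend name to torch.backends.quantized.engine, so the --backend switch boils down to something like this (a hedged sketch of the underlying PyTorch call, not SSCMA's actual code):

    import torch

    # FBGEMM is x86-only; QNNPACK is the engine shipped in ARM builds.
    # Guard the assignment so it fails loudly if the engine is missing.
    if 'qnnpack' in torch.backends.quantized.supported_engines:
        torch.backends.quantized.engine = 'qnnpack'
    else:
        raise RuntimeError('qnnpack engine not available in this build')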

BTW, I have used pip to install torch.

@MILK-BIOS
Contributor

Glad to see you have solved the issue! We need to make our code more compatible.

@sbocconi
Author

Unfortunately, the Mac M2 (ARM) is not yet well supported, since it is a relatively new architecture. I had to make the following two changes to get it working:

  1. Run export OMP_NUM_THREADS=1 && python tools/train.py <params>, otherwise the code hangs.
  2. Change is_mps_available() in mmengine/device/utils.py to always return False (see the sketch after this list), otherwise I get the following error:
TypeError: Cannot convert a MPS Tensor to float64 dtype as the MPS framework doesn't support float64. Please use float32 instead.
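
If you prefer not to edit the installed mmengine source, the same effect might be achieved by monkeypatching before the rest of the framework is imported (a hedged sketch; whether it takes effect depends on import order, since modules that have already imported is_mps_available keep their own reference):

    import os

    # Must be set before torch/OpenMP spins up its thread pool.
    os.environ.setdefault('OMP_NUM_THREADS', '1')

    import mmengine.device
    import mmengine.device.utils as device_utils

    # Report MPS as unavailable so mmengine stays on CPU and never hits
    # the float64-on-MPS TypeError above. Patch both the module that
    # defines the function and the package namespace that re-exports it.
    device_utils.is_mps_available = lambda: False
    mmengine.device.is_mps_available = lambda: False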

Maybe you can mention this in the documentation?
