Bug report - [Onnx Runtime 1.16 incompatible] #376

Open · sorgfresser opened this issue Sep 21, 2023 · 2 comments
Labels: bug (Something isn't working) · v5 (Useful information for V5 release)
🐛 Bug

Onnxruntime version 1.16 was released yesterday. If I use it to load silero-vad with onnx=True, I get:

ValueError: This ORT build has ['AzureExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['AzureExecutionProvider', 'CPUExecutionProvider'], ...)

Oddly enough, it works if I downgrade to 1.15, even though the message says this requirement has existed since ORT 1.9.
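
For reference, the explicit-providers workaround that the error message suggests would look roughly like this sketch; the session options mirror what utils_vad.py passes, and the model path is a placeholder:

    import onnxruntime

    opts = onnxruntime.SessionOptions()
    # Passing providers explicitly is what the ValueError asks for;
    # 'silero_vad.onnx' here is a hypothetical local path.
    session = onnxruntime.InferenceSession(
        'silero_vad.onnx',
        sess_options=opts,
        providers=['CPUExecutionProvider'],
    )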

To Reproduce

Steps to reproduce the behavior:

pip install onnxruntime==1.16.0

    import torch

    model, utils = torch.hub.load(repo_or_dir='snakers4/silero-vad',
                                  model='silero_vad',
                                  onnx=True,
                                  force_reload=False)

Full stack trace:

  File "/home/simon/PycharmProjects/ttsdata/src/vad.py", line 127, in yield_audio
    model, utils = torch.hub.load(repo_or_dir='snakers4/silero-vad',
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/simon/PycharmProjects/ttsdata/venv/lib/python3.11/site-packages/torch/hub.py", line 558, in load
    model = _load_local(repo_or_dir, model, *args, **kwargs)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/simon/PycharmProjects/ttsdata/venv/lib/python3.11/site-packages/torch/hub.py", line 587, in _load_local
    model = entry(*args, **kwargs)
            ^^^^^^^^^^^^^^^^^^^^^^
  File "/home/simon/.cache/torch/hub/snakers4_silero-vad_master/hubconf.py", line 44, in silero_vad
    model = OnnxWrapper(os.path.join(model_dir, 'silero_vad.onnx'), force_onnx_cpu)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/simon/.cache/torch/hub/snakers4_silero-vad_master/utils_vad.py", line 24, in __init__
    self.session = onnxruntime.InferenceSession(path, sess_options=opts)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/simon/PycharmProjects/ttsdata/venv/lib/python3.11/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 432, in __init__
    raise e
  File "/home/simon/PycharmProjects/ttsdata/venv/lib/python3.11/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 419, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "/home/simon/PycharmProjects/ttsdata/venv/lib/python3.11/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 451, in _create_inference_session
    raise ValueError(
ValueError: This ORT build has ['AzureExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['AzureExecutionProvider', 'CPUExecutionProvider'], ...)

Expected behavior

The model loads without raising, as it does with onnxruntime 1.15.

Environment

Collecting environment information...
PyTorch version: 2.0.1+cu117
Is debug build: False
CUDA used to build PyTorch: 11.7
ROCM used to build PyTorch: N/A

OS: Manjaro Linux (x86_64)
GCC version: (GCC) 13.2.1 20230801
Clang version: 16.0.6
CMake version: version 3.27.5
Libc version: glibc-2.38

Python version: 3.11.5 (main, Aug 28 2023, 20:02:58) [GCC 13.2.1 20230801] (64-bit runtime)
Python platform: Linux-6.1.51-1-MANJARO-x86_64-with-glibc2.38
Is CUDA available: True
CUDA runtime version: 12.2.91
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce GTX 1060 6GB
Nvidia driver version: 535.104.05
cuDNN version: Probably one of the following:
/usr/lib/libcudnn.so.8.9.2
/usr/lib/libcudnn_adv_infer.so.8.9.2
/usr/lib/libcudnn_adv_train.so.8.9.2
/usr/lib/libcudnn_cnn_infer.so.8.9.2
/usr/lib/libcudnn_cnn_train.so.8.9.2
/usr/lib/libcudnn_ops_infer.so.8.9.2
/usr/lib/libcudnn_ops_train.so.8.9.2
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 48 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Vendor ID: AuthenticAMD
Model name: AMD FX(tm)-8350 Eight-Core Processor
CPU family: 21
Model: 2
Thread(s) per core: 2
Core(s) per socket: 4
Socket(s): 1
Stepping: 0
Frequency boost: enabled
CPU(s) scaling MHz: 69%
CPU max MHz: 4000.0000
CPU min MHz: 1400.0000
BogoMIPS: 8002.06
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 popcnt aes xsave avx f16c lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs xop skinit wdt fma4 tce nodeid_msr tbm topoext perfctr_core perfctr_nb cpb hw_pstate ssbd ibpb vmmcall bmi1 arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold
Virtualization: AMD-V
L1d cache: 128 KiB (8 instances)
L1i cache: 256 KiB (4 instances)
L2 cache: 8 MiB (4 instances)
L3 cache: 8 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-7

Versions of relevant libraries:
[pip3] numpy==1.25.2
[pip3] pytorch-lightning==2.0.9
[pip3] pytorch-metric-learning==2.3.0
[pip3] torch==2.0.1
[pip3] torch-audiomentations==0.11.0
[pip3] torch-pitch-shift==1.2.4
[pip3] torchaudio==2.0.2
[pip3] torchmetrics==1.1.2
[pip3] triton==2.0.0
[conda] Could not collect

sorgfresser added the bug (Something isn't working) label Sep 21, 2023
snakers4 added the v5 (Useful information for V5 release) label Dec 5, 2023
snakers4 (Owner) commented Dec 5, 2023:

To be solved with the V5 release, most likely just by exporting the model with the latest ONNX compatibility level.
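
A hedged sketch of what that might look like; torch.onnx.export and its opset_version argument are standard PyTorch, but the module, input shape, and opset value below are placeholders rather than the project's actual export code:

    import torch

    # Stand-in module for the real VAD network; the 1 x 512 dummy input
    # and opset 16 are assumptions, not the project's actual settings.
    model = torch.nn.Linear(512, 1)
    dummy_input = torch.zeros(1, 512)
    torch.onnx.export(model, dummy_input, 'silero_vad.onnx', opset_version=16)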

@ozancaglayan commented:

This is probably related to a bug in onnxruntime 1.16.0 that was fixed in 1.16.1. I'm using the VAD with 1.16.1 without any issue.
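
If pinning past the broken build is an option, a minimal guard along these lines (assuming, per the comment above, that 1.16.1 is the fixed release) catches the bad version early:

    import onnxruntime

    # 1.16.0 is the release this issue reports as broken; 1.15.x and,
    # reportedly, 1.16.1 and later work.
    # Upgrade via: pip install 'onnxruntime>=1.16.1'
    assert onnxruntime.__version__ != '1.16.0', 'upgrade onnxruntime to >= 1.16.1'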
