
Error while pre-processing #219

Open
cubedmeatgoeshere opened this issue Sep 22, 2023 · 5 comments · May be fixed by #177

Comments

@cubedmeatgoeshere

cubedmeatgoeshere commented Sep 22, 2023

$ python3 -m piper_train.preprocess --language en-us \
    --input-dir "/home/patrick/voicedata/wav/" \
    --output-dir "/home/patrick/voicedata/model" \
    --dataset-format ljspeech --single-speaker --sample-rate 44100
INFO:preprocess:Single speaker dataset
INFO:preprocess:Wrote dataset config
INFO:preprocess:Processing 100 utterance(s) with 12 worker(s)
ERROR:preprocess:phonemize_batch_espeak
Traceback (most recent call last):
  File "/home/patrick/piper/src/python/piper_train/preprocess.py", line 289, in phonemize_batch_espeak
    silence_detector = make_silence_detector()
  File "/home/patrick/piper/src/python/piper_train/norm_audio/__init__.py", line 18, in make_silence_detector
    return SileroVoiceActivityDetector(silence_model)
  File "/home/patrick/piper/src/python/piper_train/norm_audio/vad.py", line 17, in __init__
    self.session = onnxruntime.InferenceSession(onnx_path)
  File "/home/patrick/piper/src/python/.venv/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 432, in __init__
    raise e
  File "/home/patrick/piper/src/python/.venv/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 419, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "/home/patrick/piper/src/python/.venv/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 451, in _create_inference_session
    raise ValueError(
ValueError: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'AzureExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'AzureExecutionProvider', 'CPUExecutionProvider'], ...)

(The same traceback is printed once for each of the 12 workers; the duplicates are omitted here.)
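The error message itself points at the cause: `vad.py` calls `onnxruntime.InferenceSession(onnx_path)` with no `providers` argument, and ORT builds that ship GPU providers reject that. A minimal sketch of a backward-compatible workaround is below; `session_kwargs` is a hypothetical helper (not part of piper), and the CPU-only provider choice is an assumption based on the Silero VAD model being small enough not to need a GPU.

```python
def needs_explicit_providers(ort_version: str) -> bool:
    """Return True if this onnxruntime version demands an explicit providers list.

    Per the error text, the requirement applies since ORT 1.9.
    """
    major, minor = (int(part) for part in ort_version.split(".")[:2])
    return (major, minor) >= (1, 9)


def session_kwargs(ort_version: str) -> dict:
    """Build keyword arguments for onnxruntime.InferenceSession.

    Falls back to CPU execution only (an assumption; adjust if you
    want CUDA/TensorRT for the VAD model).
    """
    if needs_explicit_providers(ort_version):
        return {"providers": ["CPUExecutionProvider"]}
    return {}


# Hypothetical usage inside vad.py:
#   import onnxruntime
#   self.session = onnxruntime.InferenceSession(
#       onnx_path, **session_kwargs(onnxruntime.__version__)
#   )
```

Simply hard-coding `providers=["CPUExecutionProvider"]` would also work on any onnxruntime >= 1.9, since that keyword is accepted there too.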

Sorry for the formatting; the WSL shell doesn't copy well.

@rmcpantoja
Contributor

Which onnxruntime version are you using?

@cubedmeatgoeshere
Author

Version: 1.16.0

@cubedmeatgoeshere
Author

I tried to replicate this on an actual ARM64 Ubuntu machine and got the exact same error message.

@cubedmeatgoeshere
Author

cubedmeatgoeshere commented Sep 22, 2023

OK, I tried pip3 install onnxruntime==1.15.1 and it seems to work now :) onnxruntime was updated to 1.16 just two days ago.

@rmcpantoja
Contributor

Hi,
This has already been fixed in #177 for onnxruntime 1.16, so it should now work with newer onnxruntime versions as well.
