Currently the ONNX backend in wasmtime-wasi-nn only uses the default CPU execution provider and ignores the ExecutionTarget requested by the WASM caller (see wasmtime/crates/wasi-nn/src/backend/onnxruntime.rs, lines 21 to 33 at 24c1388).
I would like to suggest adding support for additional execution providers (CUDA, TensorRT, ROCm, ...) to wasmtime-wasi-nn.
Benefit
Improved performance for WASM modules using the wasi-nn API.
Implementation
ort already supports many execution providers, so integrating them into wasmtime-wasi-nn should not be too much work.
I would be interested in looking into this; however, I only have the means to test the DirectML and NVIDIA CUDA / TensorRT EPs.
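One possible shape for the change, sketched here as a standalone Rust snippet: map the caller's ExecutionTarget to an ordered preference list of execution providers, falling back to CPU when a provider is unavailable. The enum, function name, and provider strings below are illustrative assumptions, not the actual wasmtime-wasi-nn or ort types; a real implementation would build providers via the ort crate's execution-provider API.

```rust
/// Illustrative mirror of the wasi-nn execution target variants.
#[derive(Debug, Clone, Copy, PartialEq)]
enum ExecutionTarget {
    Cpu,
    Gpu,
    Tpu,
}

/// Hypothetical helper: return an ordered list of execution providers to
/// try for the requested target. ONNX Runtime can fall back to later
/// entries when an earlier provider is not available on the host.
fn providers_for(target: ExecutionTarget) -> Vec<&'static str> {
    match target {
        // GPU: try the GPU-capable providers first, CPU as the last resort.
        ExecutionTarget::Gpu => vec!["TensorRT", "CUDA", "DirectML", "CPU"],
        // ONNX Runtime has no TPU provider; fall back to CPU rather than fail.
        ExecutionTarget::Tpu => vec!["CPU"],
        ExecutionTarget::Cpu => vec!["CPU"],
    }
}

fn main() {
    assert_eq!(providers_for(ExecutionTarget::Cpu), vec!["CPU"]);
    assert_eq!(providers_for(ExecutionTarget::Gpu)[0], "TensorRT");
    assert_eq!(providers_for(ExecutionTarget::Gpu).last(), Some(&"CPU"));
    println!("ok");
}
```

Keeping the fallback to CPU means a module requesting a GPU target still runs on hosts without the corresponding hardware or driver, which matches the current always-CPU behavior as the worst case.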
Alternatives
Leave it to the users to add support for additional execution providers.