
Releases: microsoft/onnxruntime

ONNX Runtime v1.14.1

02 Mar 19:43
c57cf37

This patch addresses packaging issues and bug fixes on top of v1.14.0:

  • macOS Python build for x86 arch (issue: #14663)
  • DirectML EP fixes: sequence ops (#14442), package naming to remove -dev suffix
  • CUDA12 build compatibility (#14659)
  • Performance regression fixes: IOBinding input (#14719), Transformer models (#14732, #14517, #14699)
  • ORT Training kernel fix (#14727)

Only select packages were published for this patch release; others can be found in the attachments below.

ONNX Runtime v1.14.0

11 Feb 01:03
6ccaedd

Announcements

  • Building ORT from source will require cmake version >=3.24 instead of >=3.18.

General

  • ONNX 1.13 support (opset 18)
  • Threading
    • ORT Threadpool is now NUMA aware (details)
    • New API to set thread affinity (details; a Python sketch follows this list)
  • New custom operator APIs
    • Enables a custom operator to wrap an entire model that is meant to be run with an external API or runtime.
    • Details and example
  • Multi-stream Execution Provider refactoring
    • Improves GPU utilization by putting parallel inference requests on different GPU streams. Updated for the CUDA, TensorRT, and ROCm execution providers
    • Improves memory efficiency by enabling GPU memory reuse across different streams
    • Enables Execution Provider developers to customize their stream implementations by providing a "Stream" interface in the ExecutionProvider API
  • [Preview] Rust API for ORT - not part of release branch but available to build in main.
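
The thread affinity API is exposed through session configuration. Below is a minimal Python sketch, assuming the "session.intra_op_thread_affinities" config key and the semicolon-separated affinity format described in the linked details:

```python
import onnxruntime as ort

so = ort.SessionOptions()
so.intra_op_num_threads = 3  # calling thread + 2 additional intra-op threads

# Pin the two additional intra-op threads to logical processors 1 and 2.
# The key name and value format here are assumptions; see the linked
# details for the authoritative syntax.
so.add_session_config_entry("session.intra_op_thread_affinities", "1;2")

sess = ort.InferenceSession("model.onnx", sess_options=so,
                            providers=["CPUExecutionProvider"])
```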

Performance

  • Support for quantization with AMX on Sapphire Rapids processors
  • CUDA EP performance improvements:
    • Improve performance of transformer models and decoding methods: beam search, greedy search, and top-p sampling.
    • Stable Diffusion model optimizations
    • Change cudnn_conv_use_max_workspace default value to be 1 (see the sketch after this list)
  • Performance improvements to GRU and Slice operators
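
As noted above, cudnn_conv_use_max_workspace now defaults to 1. A minimal sketch of overriding it back to 0 from Python, using CUDA EP provider options (values are passed as strings):

```python
import onnxruntime as ort

providers = [
    # Revert to the pre-1.14 behavior for memory-constrained environments.
    ("CUDAExecutionProvider", {"cudnn_conv_use_max_workspace": "0"}),
    "CPUExecutionProvider",
]
sess = ort.InferenceSession("model.onnx", providers=providers)
```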

Execution Providers

Mobile

  • Pre/Post processing
    • Support updating mobilenet and super resolution models to move the pre- and post-processing into the model, including usage of custom ops for conversion to/from jpg/png
    • [Coming soon] onnxruntime-extensions packages for Android and iOS with DecodeImage and EncodeImage custom ops
    • Updated the onnxruntime inference examples to demonstrate end-to-end usage with onnxruntime-extensions package
  • XNNPACK
    • Added support for additional commonly used operators
    • Add iOS build support
      • XNNPACK EP is now included in the onnxruntime-c iOS package
    • Added support for using the ORT allocator in XNNPACK kernels to minimize memory usage

Web

  • onnxruntime-extensions included in default ort-web build (NLP-centric)
  • XNNPACK Gemm
  • Improved exception handling
  • New utility functions (experimental) to help with exchanging data between images and tensors.

Training

  • Performance optimizations and bug fixes for Hugging Face models (e.g., XLNet and BLOOM)
  • Stable diffusion optimizations for training, including support for Resize and InstanceNorm gradients and addition of ORT-enabled examples to the diffusers library
  • FP16 optimizer exposed in torch-ort (details)
  • Bug fixes for Hugging Face models

Known Issues

  • The Microsoft.ML.OnnxRuntime.DirectML package name includes -dev-* suffix. This is functionally equivalent to the release branch build, and a patch is in progress.

Contributions

Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members:
snnn, skottmckay, edgchen1, hariharans29, tianleiwu, yufenglee, guoyu-wang, yuslepukhin, fs-eire, pranavsharma, iK1D, baijumeswani, tracysh, thiagocrepaldi, askhade, RyanUnderhill, wangyems, fdwr, RandySheriffH, jywu-msft, zhanghuanrong, smk2007, pengwa, liqunfu, shahasad, mszhanyi, SherlockNoMad, xadupre, jignparm, HectorSVC, ytaous, weixingzhang, stevenlix, tiagoshibata, faxu, wschin, souptc, ashbhandare, RandyShuai, chilo-ms, PeixuanZuo, cloudhan, dependabot[bot], jeffbloo, chenfucn, linkerzhang, duli2012, codemzs, oliviajain, natke, YUNQIUGUO, Craigacp, sumitsays, orilevari, BowenBao, yangchen-MS, hanbitmyths, satyajandhyala, MaajidKhan, smkarlap, sfatimar, jchen351, georgen117, wejoncy, PatriceVignola, adrianlizarraga, justinchuby, zhangxiang1993, gineshidalgo99, tlh20, xzhu1900, jeffdaily, suryasidd, yihonglyu, liuziyue, chentaMS, jcwchen, ybrnathan, ajindal1, zhijxu-MS, gramalingam, WilBrady, garymm, kkaranasos, ashari4, martinb35, AdamLouly, zhangyaobit, vvchernov, jingyanwangms, wenbingl, daquexian, sreekanth-yalachigere, NonStatic2014, mayavijx, …


ONNX Runtime v1.13.1

24 Oct 21:09
b353e0b

Announcements

  • Security issues addressed by this release
    1. A protobuf security issue CVE-2022-1941 that impacts users who load ONNX models from untrusted sources, for example, a deep learning inference service that allows users to upload their models and then runs the inferences in a shared environment.
    2. An ONNX security vulnerability that allows reading of tensor_data outside the model directory, which allows attackers to read or write arbitrary files on an affected system that loads ONNX models from untrusted sources. (#12915)
  • Deprecations
    • CUDA 10.x support at source code level
    • Windows 8.x support in Nuget/C API prebuilt binaries. Support for Windows 7+ Desktop versions (including Windows servers) will be retained by building ONNX Runtime from source.
    • NUPHAR EP code is removed
  • Dependency versioning updates
    • C++ 17 compiler is now required to build ORT from source. On Linux, GCC version >=7.0 is required.
    • Minimum numpy version bumped to 1.21.6 (from 1.21.0) for ONNX Runtime Python packages
    • Official ONNX Runtime GPU packages now require CUDA version >=11.6 instead of 11.4.

General

  • Expose all arena configs in Python API in an extensible way (see the sketch after this list)
  • Fix ARM64 NuGet packaging
  • Fix EP allocator setup issue affecting TVM EP
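
A minimal sketch of the extensible arena configuration from Python, assuming the dict-based OrtArenaCfg constructor and the "session.use_env_allocators" config key:

```python
import onnxruntime as ort

# Arena knobs are passed as key/value pairs, so new configs can be added
# without changing the constructor signature.
arena_cfg = ort.OrtArenaCfg({
    "max_mem": 64 * 1024 * 1024,        # cap the arena at 64 MB
    "arena_extend_strategy": 1,         # 1 = kSameAsRequested
    "initial_chunk_size_bytes": 1024 * 1024,
    "max_dead_bytes_per_chunk": 1024,
})

# Register a shared CPU allocator built from this config, then opt the
# session in to environment allocators.
mem_info = ort.OrtMemoryInfo("Cpu", ort.OrtAllocatorType.ORT_ARENA_ALLOCATOR,
                             0, ort.OrtMemType.DEFAULT)
ort.create_and_register_allocator(mem_info, arena_cfg)

so = ort.SessionOptions()
so.add_session_config_entry("session.use_env_allocators", "1")
sess = ort.InferenceSession("model.onnx", sess_options=so,
                            providers=["CPUExecutionProvider"])
```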

Performance

  • Transformers CUDA improvements
    • Quantization on GPU for BERT - notebook, documentation on QAT, transformer optimization toolchain and quantized kernels.
    • Add fused attention CUDA kernels for BERT.
    • Fuse Add (bias) and Transpose of Q/K/V into one kernel for Attention and LongformerAttention.
    • Reduce GEMM computation in LongformerAttention with a new weight format.
  • General quantization (tool and kernel)
    • Quantization debugging tool - identify sensitive nodes/layers responsible for accuracy drops
    • New quantize API based on QuantConfig (see the sketch after this list)
    • New quantized operators: SoftMax, Split, Where
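
A minimal sketch of the QuantConfig-based API, assuming StaticQuantConfig and a toy calibration reader (a real reader should yield representative data; the input name "input" is an assumption):

```python
import numpy as np
from onnxruntime.quantization import (CalibrationDataReader, QuantType,
                                      StaticQuantConfig, quantize)

class RandomDataReader(CalibrationDataReader):
    """Feeds a few random batches for calibration (toy example only)."""
    def __init__(self, n=8):
        self._batches = iter(
            {"input": np.random.rand(1, 3, 224, 224).astype(np.float32)}
            for _ in range(n))

    def get_next(self):
        return next(self._batches, None)

config = StaticQuantConfig(calibration_data_reader=RandomDataReader(),
                           activation_type=QuantType.QInt8,
                           weight_type=QuantType.QInt8)
quantize("model.onnx", "model.quant.onnx", config)
```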

Execution Providers

  • CUDA EP
    • Official ONNX Runtime GPU packages are now built with CUDA version 11.6 instead of 11.4, but should still be backwards compatible with 11.4
  • TensorRT EP
    • Build option to link against pre-built onnx-tensorrt parser; this enables potential "no-code" TensorRT minor version upgrades and can be used to build against TensorRT 8.5 EA
    • Improved nested control flow support
    • Improve HashId generation used for uniquely identifying TRT engines. Addresses issues such as TRT Engine Cache Regeneration Issue
    • TensorRT uint8 support
  • OpenVINO EP
    • OpenVINO version upgraded to 2022.2.0
    • Support for INT8 QDQ models from NNCF
    • Support for Intel 13th Gen Core Processors (Raptor Lake)
    • Preview support for Intel discrete graphics cards (Intel Data Center GPU Flex Series and Intel Arc GPU)
    • Increased test coverage for GPU Plugin
  • SNPE EP
  • DirectML EP
  • [new] CANN EP - Initial integration of CANN EP contributed by Huawei to support Ascend 310 (#11477)

Mobile

  • EP infrastructure
    • Implemented support for additional EPs that use static kernels
      • Required for EPs like XNNPACK to be supported in minimal build
      • Removes need for kernel hashes to reduce maintenance overhead for developers
      • NOTE: ORT format models will need to be regenerated as the format change is NOT backwards compatible. We're replacing hashes for the CPU EP kernels with operator constraint information for operators used by the model so that we can match any static kernels available at runtime.
  • XNNPACK
    • Added more kernels including QDQ format model support
      • AveragePool, Softmax
      • QLinearConv, QLinearAveragePool, QLinearSoftmax
    • Added support for XNNPACK using threadpool
      • See documentation for recommendations on how to configure the XNNPACK threadpool
  • Reduced ORT format model peak memory usage

Web

  • Support for 4GB memory in WebAssembly
  • Upgraded emscripten to 3.1.19
  • Build from source support for onnxruntime-extensions and sentencepiece
  • Initial XNNPACK support for Wasm optimizations

Training

  • Training packages updated to CUDA version 11.6 and removed CUDA 10.2 and 11.3
  • Performance improvements via op fusions targeting SOTA models (e.g., BiasSoftmax fusion, Dropout fusion, and Gather-to-Split fusion)
  • Added Aten support for GroupNorm, InstanceNormalization, Upsample nearest
  • Bug fixes for SimplifiedLayerNorm and a segfault in alltoall

Contributions

Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members:
snnn, baijumeswani, edgchen1, iK1D, skottmckay, cloudhan, tianleiwu, fs-eire, mszhanyi, WilBrady, hariharans29, chenfucn, fdwr, yuslepukhin, wejoncy, PeixuanZuo, pengwa, yufenglee, jchen351, justinchuby, dependabot[bot], RandySheriffH, sumitsays, wschin, wangyems, YUNQIUGUO, ytaous, pranavsharma, vvchernov, natke, Craigacp, RandyShuai, smk2007, zhangyaobit, jcwchen, yihonglyu, georgen117, chilo-ms, ashbhandare, faxu, jstoecker, gramalingam, garymm, jeffbloo, xadupre, jywu-msft, askhade, RyanUnderhill, thiagocrepaldi, mindest, jingyanwangms, wenbingl, ashari4, sfatimar, MaajidKhan, souptc, HectorSVC, weixingzhang, zhanghuanrong

ONNX Runtime v1.12.1

04 Aug 22:07
7048164

This patch addresses packaging issues and bug fixes on top of v1.12.0.

  • Java package: macOS M1 support folder structure fix
  • Android package: enable optimizations
  • GPU (TensorRT provider): bug fixes
  • DirectML: package fix
  • WinML: bug fixes

See #12418 for the full list of specific fixes included.

ONNX Runtime v1.12.0

22 Jul 04:43
f466364

Announcements

  • For Execution Provider maintainers/owners: the lightweight compile API is now the default compiler API for all Execution Providers (this was previously only available for the mobile build). If you have an EP using the legacy compiler API, please migrate to the lightweight compile API as soon as possible. The legacy API will be deprecated in the next release (ORT 1.13).
  • netstandard1.1 support is being deprecated in this release and will be removed in the next ORT 1.13 release

Key Updates

General

  • ONNX spec support
    • onnx opset 17
    • onnx-ml opset 3 (TreeEnsemble update)
  • BeamSearch operator for encoder-decoder transformer models
  • Support for invoking individual ops without the need to create a separate graph
    • For use with custom op development to reuse ORT code
  • Support for feeding external initializers (for large models) as byte arrays for model inferencing (see the sketch after this list)
  • Build switch to disable usage of abseil library to remove dependency
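
A minimal sketch of feeding an external initializer from memory, assuming the Python binding SessionOptions.add_external_initializers; the initializer name "model_weight" is hypothetical and must match the tensor name the model references:

```python
import numpy as np
import onnxruntime as ort

weights = np.load("weights.npy")  # bytes already in memory, no file lookup

so = ort.SessionOptions()
so.add_external_initializers(
    ["model_weight"],  # hypothetical initializer name
    [ort.OrtValue.ortvalue_from_numpy(weights)])

sess = ort.InferenceSession("model_with_external_data.onnx", sess_options=so)
```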

Packages

  • Python 3.10 support
  • Mac M1 support in Python and Java packages
  • .NET 6/MAUI support in Nuget C# package
    • Additional target frameworks: net6.0, net6.0-android, net6.0-ios, net6.0-macos
    • NOTE: netstandard1.1 support is being deprecated in this release and will be removed in the 1.13 release
  • onnxruntime-openvino package available on Pypi (from Intel)

Performance and Quantization

  • Improved C++ APIs that now utilize RAII for better memory management
  • Operator performance optimizations, including GatherElements
  • Memory optimizations to support compute-intensive real-time inferencing scenarios (e.g. audio inferencing)
    • CPU usage savings for infrequent inference requests by reducing thread spinning
    • Memory usage reduction through use of containers from the abseil library, especially inlined vectors used to store tensor shapes and inlined hash maps
  • New quantized kernels for weight symmetry to improve performance on ARM64 little core (GEMM and Conv)
  • Specialized kernel to improve performance of quantized Resize by up to 2x
  • Improved the thread job partition for QLinearConv, demonstrating up to ~20% perf gain for certain models
  • Quantization tool: improved ONNX shape inference for large models

Execution Providers

  • TensorRT EP
    • TensorRT 8.4 support
    • Provide option to share execution context memory between TensorRT subgraphs
    • Worked around long CI test times caused by frequent initialization/de-initialization of the TensorRT builder
    • Improve subgraph partitioning and consolidate TensorRT subgraphs when possible
    • Refactor engine cache serialization/deserialization logic
    • Miscellaneous bug fixes and performance improvements
  • OpenVINO EP
    • Pre-built ONNX Runtime binaries with OpenVINO now available on PyPI: onnxruntime-openvino
    • Performance optimizations of existing supported models
    • New runtime configuration option ‘enable_dynamic_shapes’ added to enable dynamic shapes for each iteration (see the sketch after this list)
    • ORTModule included as part of OVEP Python Package to enable Torch ORT Inference
  • DirectML EP
  • TVM EP - details
    • Updated to add model .dll ingestion and execution on Windows
    • Updated documentation and CI tests
  • [New] SNPE EP - details
  • [Preview] XNNPACK EP - initial infrastructure with limited operator support, for use with ORT Mobile and ORT Web
    • Currently supports Conv and MaxPool, with work in progress to add more kernels
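
A minimal sketch of the new OpenVINO EP option from Python, assuming it is passed as a provider-options dict (key names per the OpenVINO EP documentation):

```python
import onnxruntime as ort

sess = ort.InferenceSession(
    "model.onnx",
    providers=[("OpenVINOExecutionProvider",
                {"device_type": "CPU_FP32",         # target device/precision
                 "enable_dynamic_shapes": True})])  # reshape per iteration
```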

Mobile

  • Binary size reductions in Android minimal build - 12% reduction in size of base build with no operator kernels
  • Added new operator support to NNAPI and CoreML EPs to improve ability to run super resolution and BERT models using NPU
    • NNAPI: DepthToSpace, PRelu, Gather, Unsqueeze, Pad
    • CoreML: DepthToSpace, PRelu
  • Added Docker file to simplify running a custom minimal build to create an ORT Android package
  • Initial XNNPACK EP compatibility

Web

  • Memory usage optimizations
  • Initial XNNPACK EP compatibility

ORT Training

  • [New] ORT Training acceleration is also natively available through HuggingFace Optimum
  • [New] FusedAdam Optimizer now available through the torch-ort package for easier training integration
  • FP16_Optimizer Support for more DeepSpeed Versions
  • Bfloat16 support for AtenOp
  • Added gradient ops for ReduceMax and ReduceMin
  • Updates to Min and Max grad ops to use distributed logic
  • Optimizations
    • Optimized perf for Gelu and GeluGrad kernels for mixed precision models
    • Enabled fusions for SimplifiedLayerNorm
    • Added bitmask versions of Dropout, BiasDropout and DropoutGrad which bring ~8x space savings for the mask output.


Contributions

Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members:
snnn, edgchen1, fdwr, skottmckay, iK1D, fs-eire, mszhanyi, WilBrady, justinchuby, tianleiwu, PeixuanZuo, garymm, yufenglee, adrianlizarraga, yuslepukhin, dependabot[bot], chilo-ms, vvchernov, oliviajain, ytaous, hariharans29, sumitsays, wangyems, pengwa, baijumeswani, smk2007, RandySheriffH, gramalingam, xadupre, yihonglyu, zhangyaobit, YUNQIUGUO, jcwchen, chenfucn, souptc, chandru-r, jstoecker, hanbitmyths, RyanUnderhill, georgen117, jywu-msft, mindest, sfatimar, HectorSVC, Craigacp, jeffdaily, zhijxu-MS, natke, stevenlix, jeffbloo, guoyu-wang, daquexian, faxu, jingyanwangms, adtsai, wschin, weixingzhang, wenbingl, MaajidKhan, ashbhandare, ajindal1, zhanghuanrong, tiagoshibata, askhade, liqunfu

ONNX Runtime v1.11.1

27 Apr 17:16
366f4eb

This is a patch release on 1.11.0 with the following fixes:

  • Symbolic shape infer error (#10674)
  • Quantization tool bug (#10940)
  • Adds missing numpy type when looking for the ort correspondence (#10943)
  • Profiling tool JSON format bug (#11046)
  • Function bug fix (#11148)
  • Add mobile helpers to Python build (#11196)
  • Scoped GIL release in run_with_iobinding (#11248)
  • Fix output type mapping for JS (#11049)

All official packages are attached, and Python packages are additionally published to PyPI.

ONNX Runtime v1.11.0

26 Mar 16:06
b713855

Key Updates

General

  • Support for ONNX 1.11 with opset 16
  • Updated protobuf version to 3.18.x
  • Enable usage of Mimalloc (details)
  • Transformer model helper scripts
  • On Windows, error strings in OrtStatus are now encoded in UTF-8. When you need to print them to the screen, first convert them to a wide-char string using the MultiByteToWideChar Windows API.

Performance

  • Memory utilization related performance improvements (e.g. elimination of vectors for small dims)
  • Performance variance stability improvement through dynamic cost model session option (details; see the sketch after this list)
  • New quantization data format support: S8S8 in QDQ format
    • Added s8s8 kernels for ARM64
    • Support to convert s8s8 to u8s8 automatically for x64
  • Improved performance on ARM64 for quantized CNN model through:
    • New kernels for quantized depthwise Conv
    • Improved symmetrically quantized Conv by leveraging indirect buffer
    • New Gemm kernels for symmetric quantized Conv and MatMul
  • General quantization improvements, including new quantized operators (Resize, ArgMax) and quantization tool updates
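
A minimal sketch of enabling the dynamic cost model from Python, assuming the "session.dynamic_block_base" config key from the linked details:

```python
import onnxruntime as ort

so = ort.SessionOptions()
# Values > 0 enable dynamic partitioning of intra-op tasks; the best base
# is workload-dependent.
so.add_session_config_entry("session.dynamic_block_base", "4")
sess = ort.InferenceSession("model.onnx", sess_options=so)
```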

API

  • Java: Only a single OrtEnv can be created in any given execution of the JVM. Previously, the environment could be closed completely and a fresh one could be created with different parameters (e.g. global thread pool, or logging level) (details)

Packages

  • Nuget packages
    • C# packages now tested with .NET 5. .NET Core 2.1 support is deprecated as it reached end of life on August 21, 2021. We will closely follow .NET's support policy
    • Removed PDB files. These are attached as release artifacts below.
  • Pypi packages
    • Python 3.6 is deprecated as it reached EOL in December 2021. Supported Python versions: 3.7-3.9
    • Note: Mac M1 builds are not yet available on PyPI but can be built from source
    • OnnxRuntime with OpenVINO support available at https://pypi.org/project/onnxruntime-openvino/1.11.0/

Execution Providers

  • CUDA
    • Enabled CUDA provider option configuration in C# to support workspace size configuration, and fixed binary compatibility of the CUDAProviderOptions C API
    • Preview support for CUDA Graphs (details; see the sketch after this list)
  • TensorRT
    • TRT 8.2.3 support
    • Memory footprint optimizations
    • Support protobuf >= 3.11
    • Updated flatbuffers version to 2.0
    • Misc Bug Fixes
  • DirectML
    • Updated more operators to opset 13 (QuantizeLinear, DequantizeLinear, ReduceSum, Split, Squeeze, Unsqueeze).
  • OpenVINO
  • OpenCL (in preview)
    • Introduced the EP for OpenCL to use with Mobile GPUs
    • Available in experimental/opencl branch for users to try. Provide feedback through Issues and Discussions in the repo.
    • A README is available in the branch.
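
A minimal sketch of the CUDA Graphs preview from Python, assuming the "enable_cuda_graph" CUDA provider option; CUDA Graphs require fixed shapes and IOBinding so the captured addresses stay valid ("input"/"output" names are assumptions):

```python
import numpy as np
import onnxruntime as ort

providers = [("CUDAExecutionProvider", {"enable_cuda_graph": "1"})]
sess = ort.InferenceSession("model.onnx", providers=providers)

# Pre-allocate device-resident inputs/outputs and reuse them every run.
x = ort.OrtValue.ortvalue_from_numpy(
    np.zeros((1, 3, 224, 224), np.float32), "cuda", 0)
y = ort.OrtValue.ortvalue_from_numpy(
    np.zeros((1, 1000), np.float32), "cuda", 0)

io = sess.io_binding()
io.bind_ortvalue_input("input", x)
io.bind_ortvalue_output("output", y)

sess.run_with_iobinding(io)  # first run captures the graph
sess.run_with_iobinding(io)  # later runs replay the captured graph
```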

Mobile

  • Added general support for converting a model to NHWC layout at runtime
    • Execution provider sets preferred layout and shared infrastructure in ORT will ensure the nodes the execution provider is assigned will be in that layout
  • Added support for runtime optimization with minimal binary size impact
    • Relevant optimizations are saved in the ORT format model for replay at runtime if applicable
  • Added support for QDQ format models to the NNAPI EP
    • Will fall back to the CPU EP's QDQ handling (via runtime optimizations) if NNAPI is not available
    • Includes updates to the ORT QDQ optimizers so they work better with mobile scenarios
  • Added helpers to:
    • Analyze if a model can be used with the pre-built ORT Mobile package
    • Update ONNX opset so model can be used with the pre-built package
    • Convert dynamic inputs into fixed size inputs so that the model can be used with NNAPI/CoreML
    • Optimize a QDQ format model for use with ORT
  • Added Android and iOS packages with full ORT builds
    • These packages have additional support for the full set of opsets and ops for ONNX models at the cost of a larger binary size.

Web

  • Build option to create ONNX Runtime WebAssembly static library
  • Support for concurrent creation of multiple inference sessions
  • Upgraded emsdk version to 3.1.3 for more stable multi-threading and to enable LTO with multi-threaded WebAssembly builds.

Known issues

  • When using tensor sequences/sparse tensors, the generated profile is not valid JSON. (Fixed in #10974)
  • There is a bug in the quantization tool for calibration when choosing the percentile algorithm (fixed in #10940). To work around it, apply the typo fix in the Python file.
  • Mac M1 builds are not yet available on PyPI but can be built from source

Contributions

Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members:
snnn, edgchen1, skottmckay, yufenglee, wangyems, yuslepukhin, gwang-msft, iK1D, chilo-ms, fdwr, ytaous, RandySheriffH, hanbitmyths, chenfucn, yihonglyu, ajindal1, fs-eire, souptc, tianleiwu, YUNQIUGUO, hariharans29, oliviajain, xadupre, ashari4, RyanUnderhill, jywu-msft, weixingzhang, baijumeswani, georgen117, natke, Craigacp, jeffdaily, JingqiaoFu, zhanghuanrong, satyajandhyala, smk2007, ryanlai2, askhade, thiagocrepaldi, jingyanwangms, pengwa, scxiao, ashbhandare, BowenBao, SherlockNoMad, sumitsays, sfatimar, mosdav, harshithapv, liqunfu, tiagoshibata, gineshidalgo99, pranavsharma, jcwchen, nkreeger, xkszltl, faxu, suffiank, stevenlix, jeffbloo, feihugis

ONNX Runtime v1.10.0

08 Dec 00:22
0d9030e

Announcements

  • As noted in the deprecation notice in ORT 1.9, InferenceSession now requires the providers parameters to be set when enabling Execution Providers other than default CPUExecutionProvider.
    e.g. InferenceSession('model.onnx', providers=['CUDAExecutionProvider'])
  • Python 3.6 support removed for Mac builds. Since 3.6 reached end-of-life in December 2021, it will no longer be supported from the next release (ORT 1.11) onwards
  • Removed dependency on optional-lite
  • Removed experimental Featurizers code

General

  • Support for plug-in custom thread creation and join functions to enable usage of external threads
  • Optional type support from op set 15

Performance

  • Introduced an indirect convolution method for QLinearConv with a symmetrically quantized filter, i.e., the filter type is int8 and the filter's zero point is 0. The method leverages an indirect buffer instead of memcpy'ing the original data and doesn't need to compute the sum of each pixel of the output image for quantized Conv.
    • X64: new kernels - including avx2, avxvnni, avx512 and avx512 vnni - for general and depthwise quantized Conv.
    • ARM64: new kernels for depthwise quantized Conv.
  • Tensor shape optimization to avoid allocating heap memory in most cases - #9542
  • Added transpose optimizer to push and cancel transpose ops, significantly improving perf for models requiring layout transformation

API

  • Python
    • Following through on the deprecation notice in ORT 1.9, InferenceSession now requires the providers parameters to be set when enabling Execution Providers other than default CPUExecutionProvider.
      e.g. InferenceSession('model.onnx', providers=['CUDAExecutionProvider'])
  • C/C++
    • New API to query CUDA stream to launch a custom kernel for scenarios where custom ops compiled into shared libraries need implicit synchronization with ORT CUDA kernels - #9141
    • Updated Invalid -> OrtInvalidAllocator
    • Updated every item in OrtCudnnConvAlgoSearch to a safer global name
  • WinML
    • New APIs to create OrtValues from Windows platform specific ID3D12Resources by exposing DirectML Execution Provider specific APIs. These APIs allow DML to extend the C-API and provide EP specific extensions.
      • OrtSessionOptionsAppendExecutionProviderEx_DML
      • DmlCreateGPUAllocationFromD3DResource
      • DmlFreeGPUAllocation
      • DmlGetD3D12ResourceFromAllocation
    • Bug fix: LearningModel::LoadFromFilePath in UWP apps

Packages

  • Added Mac M1 Universal2 build support for a single binary that runs natively on both Apple silicon and Intel-based Macs. These are included in the official Nuget packages. (build instructions)
  • Windows C API Symbols are now uploaded to Microsoft symbol server
  • Nuget package now supports ARM64 Linux C#
  • Python GPU package now includes both TensorRT and CUDA EPs. Note: EPs need to be explicitly registered to ensure the correct provider is used. e.g. InferenceSession('model.onnx', providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider']). Please also ensure you have appropriate TensorRT and CUDA dependencies installed. A sketch follows below.
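
A minimal sketch of explicit registration with the combined GPU package:

```python
import onnxruntime as ort

# Both GPU providers ship in the package; register them explicitly so the
# intended one is used, with CUDA as the fallback for unsupported nodes.
print(ort.get_available_providers())
sess = ort.InferenceSession(
    "model.onnx",
    providers=["TensorrtExecutionProvider", "CUDAExecutionProvider"])
```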

Execution Providers

  • TensorRT EP
    • Python GPU release packages now include support for TensorRT 8.0. Enable TensorrtExecutionProvider by explicitly setting providers parameter when creating an InferenceSession. e.g. InferenceSession('model.onnx', providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider'])
    • Published quantized BERT model example
  • OpenVINO EP
    • Add support for OpenVINO 2021.4.x
    • Auto Plugin support
    • IO Buffer/Copy Avoidance Optimizations for GPU plugin
    • Misc fixes
  • DNNL EP
    • Add Softmaxgrad op
    • Add Transpose, Reshape, Pow and LeakyRelu ops
    • Add DynamicQuantizeLinear op
    • Add squeeze/unsqueeze ops
  • DirectML EP
    • Update DirectML.dll from 1.5.1 to 1.8.0
    • Support full precision uint64/int64 for 48 operators
    • Add 8D for 7 more existing operators
    • Add DynamicQuantizeLinear op
    • Accept ID3D12Resources via the C API

Mobile

  • Added Xamarin support to the ORT C# Nuget packages
    • Updated target frameworks in native package
    • iOS and Android binaries now included in native package
  • ORT format models now have backwards compatibility guarantee

Web

  • Support WebAssembly SIMD for qgemm kernel to accelerate the performance of quantized models
  • Upgraded existing WebGL kernels to the latest opset
  • Optimized bundle size to support various production scenarios, such as WebAssembly only or WebGL only

Contributions

Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members:
snnn, gineshidalgo99, fs-eire, gwang-msft, edgchen1, hariharans29, skottmckay, jeffdaily, baijumeswani, fdwr, smk2007, suffiank, souptc, RyanUnderhill, iK1D, yuslepukhin, chilo-ms, satyajandhyala, hanbitmyths, thiagocrepaldi, wschin, tianleiwu, pengwa, xadupre, zhanghuanrong, SherlockNoMad, wangyems, RandySheriffH, ashbhandare, tiagoshibata, yufenglee, mindest, sumitsays, MaajidKhan, gramalingam, tracysh, georgen117, jywu-msft, sfatimar, martinb35, nkreeger, ytaous, ashari4, stevenlix, chandru-r, jingyanwangms, mosdav, raviskolli, faxu, liqunfu, kit1980, weixingzhang, pranavsharma, jcwchen, chenfucn, BowenBao, jeffbloo

ONNX Runtime v1.9.1

05 Oct 00:13
2a96b73

This is a patch release on 1.9.0 with the following fixes:

  • Microsoft.AI.MachineLearning NuGet Package Fixes
    • Bug fix for an issue that caused GPU execution to fail when the executable is on a path containing Unicode characters (#9229)
    • Bug fix for the NuGet package to be installed on UWP apps with 1.9 (#9182)
  • Bug fix for OpenVINO EP Python API (#9166)
  • Bump up TVM version for NUPHAR EP (#9159)
  • Fixed build issue for iOS 11 and earlier versions (#9036)

ONNX Runtime v1.9.0

23 Sep 02:05
4daa14b

Announcements

  • GCC version < 7 is no longer supported
  • CMAKE_SYSTEM_PROCESSOR needs to be set when cross-compiling on Linux because pytorch cpuinfo was introduced as a dependency for ARM big.LITTLE support. Set it to the value of the uname -m output of your target device.

General

  • ONNX 1.10 support
    • opset 15
    • ONNX IR 8 (SparseTensor type, model-local FunctionProtos; Optional type not yet fully supported in this release)
  • Improved documentation of C/C++ APIs
  • IBM Power support
  • WinML - DLL dependency fix supports learning models on Windows 8.1
  • Support for sub-building onnxruntime-extensions and statically linking into onnxruntime binary for custom builds
    • Add --use_extensions option to run models with custom operators implemented in onnxruntime-extensions

APIs

  • Registration of a custom allocator for sharing between multiple sessions. (See the RegisterAllocator and UnregisterAllocator APIs in onnxruntime_c_api.h; a Python sketch follows this list.)
  • SessionOptionsAppendExecutionProvider_TensorRT API is deprecated; use SessionOptionsAppendExecutionProvider_TensorRT_V2
  • New APIs: SessionOptionsAppendExecutionProvider_TensorRT_V2, CreateTensorRTProviderOptions, UpdateTensorRTProviderOptions, GetTensorRTProviderOptionsAsString, ReleaseTensorRTProviderOptions, EnableOrtCustomOps, RegisterAllocator, UnregisterAllocator, IsSparseTensor, CreateSparseTensorAsOrtValue, FillSparseTensorCoo, FillSparseTensorCsr, FillSparseTensorBlockSparse, CreateSparseTensorWithValuesAsOrtValue, UseCooIndices, UseCsrIndices, UseBlockSparseIndices, GetSparseTensorFormat, GetSparseTensorValuesTypeAndShape, GetSparseTensorValues, GetSparseTensorIndicesTypeShape, GetSparseTensorIndices
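
A minimal Python sketch of sharing one allocator across sessions, assuming the create_and_register_allocator binding and the "session.use_env_allocators" config key:

```python
import onnxruntime as ort

# Register a single arena-based CPU allocator with the environment
# (None selects the default arena configuration).
mem_info = ort.OrtMemoryInfo("Cpu", ort.OrtAllocatorType.ORT_ARENA_ALLOCATOR,
                             0, ort.OrtMemType.DEFAULT)
ort.create_and_register_allocator(mem_info, None)

so = ort.SessionOptions()
so.add_session_config_entry("session.use_env_allocators", "1")

# Both sessions now draw from the shared registered allocator.
sess_a = ort.InferenceSession("model_a.onnx", sess_options=so)
sess_b = ort.InferenceSession("model_b.onnx", sess_options=so)
```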

Performance and quantization

  • Performance improvement on ARM
    • Added S8S8 (signed int8, signed int8) matmul kernel. This avoids extending uint8 to int16 for better performance on ARM64 without dot-product instructions
    • Expanded GEMM udot kernel to 8x8 accumulator
    • Added sgemm and qgemm optimized kernels for ARM64EC
  • Operator improvements
    • Improved performance for quantized operators: DynamicQuantizeLSTM, QLinearAvgPool
    • Added new quantized operator QGemm for quantizing Gemm directly
    • Fused HardSigmoid and Conv
  • Quantization tool - subgraph support
  • Transformers tool improvements
    • Fused Attention for BART encoder and Megatron GPT-2
    • Integrated mixed precision ONNX conversion and parity test for GPT-2
    • Updated graph fusion for embed layer normalization for BERT
    • Improved symbolic shape inference for operators: Attention, EmbedLayerNormalization, Einsum and Reciprocal

Packages

  • Official ORT GPU packages (except Python) now include both CUDA and TensorRT Execution Providers.
    • Python packages will be updated in the next release. Please note that EPs should be explicitly registered to ensure the correct provider is used.
  • GPU packages are built with CUDA 11.4 and should be compatible with 11.x on systems with the minimum required driver version. See: CUDA minor version compatibility
  • Pypi
    • ORT + DirectML Python packages now available: onnxruntime-directml
    • GPU package can be used on both CPU-only and GPU machines
  • Nuget
    • C#: Added support for using netstandard2.0 as a target framework
    • Windows symbol (PDB) files are no longer included in the Nuget package, reducing size of the binary Nuget package by 85%. To download, please see the artifacts below in Github.

Execution Providers

  • CUDA EP

    • Framework improvements that boost CUDA performance of subgraph heavy models (#8642, #8702)
    • Support for sequence ops for improved performance for models using sequence type
    • Kernel perf improvements for Pad and Upsample (up to 4.5x faster)
  • TensorRT EP

    • Added support for TensorRT 8.0 (x64 Windows/Linux, ARM Jetson), which includes new TensorRT explicit-quantization features (ONNX Q/DQ support)
    • General fixes and quality improvements
  • OpenVINO EP

    • Added support for OpenVINO 2021.4
  • DirectML EP

    • Bug fix for Identity with non-float inputs affecting DynamicQuantizeLinear ONNX backend test

ORT Web

  • WebAssembly
    • SIMD (Single Instruction, Multiple Data) support
    • Option to load WebAssembly from a worker thread to avoid blocking the main UI thread
    • wasm file path override
  • WebGL
    • Simpler workflow for WebGL kernel implementation
    • Improved performance with Conv kernel enhancement

ORT Mobile

  • Added more example mobile apps
  • CoreML and NNAPI EP enhancements
  • Reduced peak memory usage when initializing session with ORT format model as bytes
  • Enhanced partitioning to improve performance when using NNAPI and CoreML
    • Reduce number of NNAPI/CoreML partitions required
    • Add ability to force usage of CPU for post-processing in SSD models
      • Improves performance by avoiding expensive device copy to/from NPU for cheap post-processing section of the model
  • Changed to using xcframework in the iOS package
    • Supports usage of arm64 iPhone simulator on Mac with Apple silicon

ORT Training

  • Expanded supported input formats to include dictionaries and lists
  • Enable user defined autograd functions
  • Support for fallback to PyTorch for execution
  • Added support for deterministic compute to enable reproducibility with ORTModule
  • Add DebugOptions and LogLevels to the ORTModule API to improve debuggability
  • Improvements and additions to kernels/gradients: Concat, Split, MatMul, ReluGrad, PadOp, Tile, BatchNormInternal
  • Support for ROCm 4.3.1 on AMD GPU

Contributions

Contributors to ONNX Runtime include members across teams at Microsoft, along with our community members:
edgchen1, gwang-msft, tianleiwu, fs-eire, hariharans29, skottmckay, baijumeswani, RyanUnderhill, iK1D, souptc, nkreeger, liqunfu, pengwa, SherlockNoMad, wangyems, chilo-ms, thiagocrepaldi, KeDengMS, suffiank, oliviajain, chenfucn, satyajandhyala, yuslepukhin, pranavsharma, tracysh, yufenglee, hanbitmyths, ytaous, YUNQIUGUO, zhanghuanrong, stevenlix, jywu-msft, chandru-r, duli2012, smk2007, wschin, MaajidKhan, tiagoshibata, xadupre, RandySheriffH, ashbhandare, georgen117, Tixxx, harshithapv, Craigacp, BowenBao, askhade, zhangxiang1993, gramalingam, weixingzhang, natke, tlh20, codemzs, ryanlai2, raviskolli, pranav-prakash, faxu, adtsai, fdwr, wenbingl, jcwchen, neginraoof, cschreib-ibex