v1.4.0

@mdraw mdraw released this 04 Jul 15:15
· 12 commits to master since this release
  • Accelerated onnxruntime execution providers such as OpenVINO or DirectML are
    now selected automatically for inference if available. The provider presumed
    to be fastest is chosen by default; this choice can be overridden with the
    new --execution-provider CLI argument (also fixes #40).
  • Switch the included ONNX model to an optimized version with batch-normalization ops merged into the adjacent convolution layers. This yields slightly better performance while keeping the end results the same (#39).
  • Update the documentation to reflect these changes.
  • Fix a bug that occurred when inputs of different shapes were passed in the same deface call (#41).
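
The automatic provider selection described above can be thought of as picking the first match from a preference-ordered list, with the new `--execution-provider` argument acting as an override. A minimal sketch of that logic, assuming onnxruntime-style provider names and a hypothetical ordering (neither is deface's actual internal table):

```python
# Hypothetical preference order, presumed fastest first (illustration only;
# deface's real ordering and provider set may differ).
PREFERRED = [
    "TensorrtExecutionProvider",
    "CUDAExecutionProvider",
    "OpenVINOExecutionProvider",
    "DmlExecutionProvider",   # DirectML
    "CPUExecutionProvider",
]

def pick_provider(available, override=None):
    """Return the execution provider to use for inference.

    `override` models the new --execution-provider CLI argument:
    if given, it wins; otherwise the first available provider
    from the preference list is chosen.
    """
    if override is not None:
        if override not in available:
            raise ValueError(f"Provider {override!r} is not available")
        return override
    for provider in PREFERRED:
        if provider in available:
            return provider
    raise RuntimeError("No usable execution provider found")

# With only the CPU provider available, the fallback is chosen:
print(pick_provider(["CPUExecutionProvider"]))
# An explicit override takes precedence over the automatic choice:
print(pick_provider(["CPUExecutionProvider", "DmlExecutionProvider"],
                    override="CPUExecutionProvider"))
```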
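
The batch-norm merge mentioned above preserves results because a batch-normalization layer applied after a convolution is an affine per-channel transform, so it can be folded into the convolution's weight and bias. A scalar per-channel sketch of that identity (real implementations apply it along the weight tensor's output-channel axis):

```python
import math

def fold_bn_into_conv(w, b, gamma, beta, mean, var, eps=1e-5):
    """Fold per-channel batch-norm parameters into a conv weight/bias pair.

    BN(conv(x)) = gamma * (w*x + b - mean) / sqrt(var + eps) + beta
                = (w * scale) * x + ((b - mean) * scale + beta)
    with scale = gamma / sqrt(var + eps).
    """
    scale = gamma / math.sqrt(var + eps)
    return w * scale, (b - mean) * scale + beta

# Folding leaves the output unchanged: conv followed by BN equals the
# folded conv alone (arbitrary example values).
x = 2.0
w, b = 0.5, 0.1
gamma, beta, mean, var = 1.5, -0.2, 0.3, 0.8
y_bn = gamma * ((w * x + b) - mean) / math.sqrt(var + 1e-5) + beta
w_f, b_f = fold_bn_into_conv(w, b, gamma, beta, mean, var)
y_folded = w_f * x + b_f
print(abs(y_bn - y_folded) < 1e-9)
```

Since the fold happens once at export time, the slight speedup comes for free at inference: one multiply-add per channel replaces a whole normalization pass.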

Full Changelog: v1.3.0...v1.4.0