Releases: dlstreamer/dlstreamer

2024.0.1

25 Apr 08:21

Intel® Deep Learning Streamer Pipeline Framework Release 2024.0.1

Intel® Deep Learning Streamer (Intel® DL Streamer) Pipeline Framework is a streaming media analytics framework, based on the GStreamer* multimedia framework, for creating complex media analytics pipelines. It ensures pipeline interoperability and provides optimized media and inference operations, using the Intel® Distribution of OpenVINO™ Toolkit Inference Engine backend, across Intel® architecture: CPU, discrete GPU, integrated GPU, and NPU.

This release includes Intel® DL Streamer Pipeline Framework elements that enable video and audio analytics capabilities (e.g., object detection, classification, audio event detection), as well as other elements for building end-to-end optimized pipelines in the GStreamer* framework.

The complete solution leverages:

  • Open source GStreamer* framework for pipeline management
  • GStreamer* plugins for input and output such as media files and real-time streaming from camera or network
  • Video decode and encode plugins, either CPU optimized plugins or GPU-accelerated plugins based on VAAPI
  • Deep learning models from Open Model Zoo (OMZ), converted from training frameworks such as TensorFlow* and Caffe*
  • The following elements in the Pipeline Framework repository:
| Element | Description |
| --- | --- |
| gvadetect | Performs object detection on a full frame or region of interest (ROI) using object detection models such as YOLOv4, MobileNet SSD, and Faster-RCNN. Outputs the ROI for detected objects. |
| gvaclassify | Performs object classification. Accepts an ROI as input and outputs classification results with the ROI metadata. |
| gvainference | Runs deep learning inference on a full frame or ROI using any model with RGB or BGR input. |
| gvaaudiodetect | Performs audio event detection using the AclNet model. |
| gvatrack | Performs object tracking using zero-term or imageless tracking algorithms. Assigns unique IDs to the tracked objects. |
| gvametaaggregate | Aggregates inference results from multiple pipeline branches. |
| gvametaconvert | Converts the metadata structure to JSON format. |
| gvametapublish | Publishes JSON metadata to MQTT or Kafka message brokers, or to files. |
| gvapython | Provides a callback to execute user-defined Python functions on every frame. Can be used for metadata conversion, inference post-processing, and other tasks. |
| gvawatermark | Overlays metadata on the video frame to visualize inference results. |
| gvafpscounter | Measures frames per second across multiple streams in a single process. |
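As a hedged illustration of how these elements compose, the sketch below assembles a gst-launch-1.0 command line for a minimal detect-and-visualize pipeline; `input.mp4` and `model.xml` are hypothetical placeholders, not files shipped with this release.

```shell
# Sketch only: compose a minimal detection pipeline from the elements above.
# input.mp4 and model.xml are hypothetical placeholders.
PIPELINE="filesrc location=input.mp4 ! decodebin \
! gvadetect model=model.xml device=CPU \
! gvawatermark ! videoconvert ! autovideosink sync=false"
# On a system with DL Streamer installed, this would be run as:
echo "gst-launch-1.0 ${PIPELINE}"
```

The same skeleton extends naturally, e.g. inserting gvaclassify after gvadetect to classify each detected ROI.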

For details on supported platforms, please refer to the System Requirements section.

For installing Pipeline Framework with prebuilt binaries or Docker*, or to build the binaries from source, please refer to the Intel® DL Streamer Pipeline Framework installation guide.

New in this Release

| Title | High-level description |
| --- | --- |
| Add support for latest Ultralytics YOLO models | Adds support for the latest Ultralytics YOLO models: YOLOv7, YOLOv8, and YOLOv9 |
| Add support for YOLOX models | Adds support for YOLOX models |
| Support deployment of GETI-trained models | Supports models trained by GETI v1.8: bounding-box detection and classification (single- and multi-label) |
| Automatic pre-/post-processing based on model descriptor | Automatic pre-/post-processing based on the model descriptor (no model-proc file required): YOLOv8, YOLOv9, and GETI models |
| Docker image size reduction | Reduced the size of the Docker image generated from the published Dockerfile |
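As a hedged sketch of the model-proc-less flow noted above, gvadetect can be pointed directly at a YOLOv8/YOLOv9 OpenVINO IR with no model-proc argument; `yolov8n.xml` and `input.mp4` are hypothetical placeholders.

```shell
# Sketch only: with this release, YOLOv8/YOLOv9 IRs need no model-proc file.
# yolov8n.xml is a hypothetical path to an exported OpenVINO IR.
DETECT="gvadetect model=yolov8n.xml device=GPU"
echo "gst-launch-1.0 filesrc location=input.mp4 ! decodebin ! ${DETECT} ! gvafpscounter ! fakesink"
```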

Changed in this Release

Docker image replaced with Dockerfile

  • A reduced Ubuntu 22.04 Dockerfile is released.

Known Issues

| Issue | Issue Description |
| --- | --- |
| VAAPI memory with decodebin | When using decodebin with the vaapi-surface-sharing preprocessing backend, set a caps filter of "video/x-raw(memory:VASurface)" after decodebin to avoid pipeline initialization issues |
| Artifacts on sycl_meta_overlay | Visualizing inference results on GPU via sycl_meta_overlay may produce partially drawn bounding boxes and labels |
| Preview Architecture 2.0 samples | Preview Architecture 2.0 samples have known issues with inference results |
| Memory growth with meta_overlay | Some combinations of meta_overlay and encoders can lead to memory growth |
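The decodebin workaround above can be sketched as a pipeline with the caps filter pinned between decodebin and the inference element; paths are hypothetical placeholders.

```shell
# Sketch of the workaround described above: pin VASurface memory caps
# between decodebin and the VAAPI surface-sharing inference element.
CAPS='video/x-raw(memory:VASurface)'
PIPE="filesrc location=input.mp4 ! decodebin ! ${CAPS} \
! gvadetect model=model.xml device=GPU pre-process-backend=vaapi-surface-sharing \
! fakesink"
echo "gst-launch-1.0 ${PIPE}"
```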

Fixed issues

| Issue # | Issue Description | Fix | Affected platforms |
| --- | --- | --- | --- |
| 390 | How to install packages with sudo inside the docker container intel/dlstreamer:latest | Start the container as root (docker run -it -u 0 --rm...) and you can then update binaries | All |
| 392 | Installation error for DL Streamer with OpenVINO™ 2023.2 | Resolved in the 2024.0 release, which supports OpenVINO™ API 2.0; if the problem persists, please raise a new issue | All |
| 393 | Debian file location for DL Streamer 2022.3 | Error no longer occurring for the user | All |
| 394 | Custom YOLOv5m accuracy drop in DL Streamer with model-proc | Documented the procedure to convert the crowdhuman_yolov5m.pt model to an OpenVINO™ model usable directly in DL Streamer with the Yolo_v7 converter (no layer cutting required): git clone https://github.com/ultralytics/yolov5; cd yolov5; pip install -r requirements.txt openvino-dev; python export.py --weights crowdhuman_yolov5m.pt --include openvino | All |
| 396 | Segfault when reusing the same model with the same model-instance-id | Resolved in the 2024.0 release, which supports OpenVINO™ API 2.0; if the problem persists, please raise a new issue | All |
| 404 | How to generate a model-proc file for YOLOv8 | Added as a feature in this release | All |
| 406 | YOLOX support | Added as a feature in this release | All |
| 409 | ERROR: from element /GstPipeline:pipeline0/GstGvaDetect:gvadetect0: base_inference plugin initialization failed | Suggested as a temporary workaround: run the container image as root, e.g., docker run -it -u 0 [... add your other parameters here ...], to get more permissions | All |
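The fix for issue 394 above is a conversion recipe; the sketch below restates those steps as a printed script. They require network access and a Python environment, so they are shown rather than executed here.

```shell
# Steps from issue 394, restated and printed (not executed here): convert
# crowdhuman_yolov5m.pt to an OpenVINO IR usable with the Yolo_v7 converter.
STEPS='git clone https://github.com/ultralytics/yolov5
cd yolov5
pip install -r requirements.txt openvino-dev
python export.py --weights crowdhuman_yolov5m.pt --include openvino'
printf '%s\n' "${STEPS}"
```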

System Requirements

Please refer to Intel® DL Streamer documentation.

Installation Notes

There are several installation options for Pipeline Framework:

  1. Install Pipeline Framework from pre-built Debian packages
  2. Build a Docker image from the Dockerfile and run it
  3. Build Pipeline Framework from source code

For more detailed instructions please refer to Intel® DL Streamer Pipeline Framework installation guide.

Samples

The samples folder in Intel® DL Streamer Pipeline Framework repository contains command line, C++ and Python examples.

Legal Information

No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.

Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade.

This document contains information on products, services and/or processes in development. All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest forecast, schedule, specifications and roadmaps.

The products and services described may contain defects or errors which may cause deviations from published specifications. Current characterized errata are available on request.

Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.

*Other names and brands may be claimed as the property of others.

© 2024 Intel Corporation.

2024.0

27 Mar 13:52

Intel® Deep Learning Streamer Pipeline Framework Release 2024.0

Intel® Deep Learning Streamer (Intel® DL Streamer) Pipeline Framework is a streaming media analytics framework, based on the GStreamer* multimedia framework, for creating complex media analytics pipelines. It ensures pipeline interoperability and provides optimized media and inference operations, using the Intel® Distribution of OpenVINO™ Toolkit Inference Engine backend, across Intel® architecture: CPU, discrete GPU, integrated GPU, and NPU.

This release includes Intel® DL Streamer Pipeline Framework elements that enable video and audio analytics capabilities (e.g., object detection, classification, audio event detection), as well as other elements for building end-to-end optimized pipelines in the GStreamer* framework.

The complete solution leverages:

  • Open source GStreamer* framework for pipeline management
  • GStreamer* plugins for input and output such as media files and real-time streaming from camera or network
  • Video decode and encode plugins, either CPU optimized plugins or GPU-accelerated plugins based on VAAPI
  • Deep learning models from Open Model Zoo (OMZ), converted from training frameworks such as TensorFlow* and Caffe*
  • The following elements in the Pipeline Framework repository:
| Element | Description |
| --- | --- |
| gvadetect | Performs object detection on a full frame or region of interest (ROI) using object detection models such as YOLOv4, MobileNet SSD, and Faster-RCNN. Outputs the ROI for detected objects. |
| gvaclassify | Performs object classification. Accepts an ROI as input and outputs classification results with the ROI metadata. |
| gvainference | Runs deep learning inference on a full frame or ROI using any model with RGB or BGR input. |
| gvaaudiodetect | Performs audio event detection using the AclNet model. |
| gvatrack | Performs object tracking using zero-term or imageless tracking algorithms. Assigns unique IDs to the tracked objects. |
| gvametaaggregate | Aggregates inference results from multiple pipeline branches. |
| gvametaconvert | Converts the metadata structure to JSON format. |
| gvametapublish | Publishes JSON metadata to MQTT or Kafka message brokers, or to files. |
| gvapython | Provides a callback to execute user-defined Python functions on every frame. Can be used for metadata conversion, inference post-processing, and other tasks. |
| gvawatermark | Overlays metadata on the video frame to visualize inference results. |
| gvafpscounter | Measures frames per second across multiple streams in a single process. |

For details on supported platforms, please refer to the System Requirements section.

For installing Pipeline Framework with prebuilt binaries or Docker*, or to build the binaries from source, please refer to the Intel® DL Streamer Pipeline Framework installation guide.

New in this Release

| Title | High-level description |
| --- | --- |
| Intel® Core™ Ultra processors NPU support | Inference on NPU devices has been added; validated with the Intel® Core™ Ultra 7 155H |
| Compatibility with OpenVINO™ Toolkit 2024.0 | Pipeline Framework has been updated to use the 2024.0.0 version of the OpenVINO™ Toolkit |
| Compatibility with GStreamer 1.22.9 | Pipeline Framework has been updated to use GStreamer framework version 1.22.9 |
| Updated to FFmpeg 6.1.1 | Updated FFmpeg from 5.1.3 to 6.1.1 |
| Performance optimizations | 8% geomean gain across tested scenarios; up to 50% performance gain in multi-stream scenarios |
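Selecting the newly supported NPU is done through the same device property as CPU/GPU; the sketch below is illustrative only, with `model.xml` and `input.mp4` as hypothetical placeholders.

```shell
# Sketch only: target the NPU added in this release via the device property.
# model.xml is a hypothetical placeholder for an NPU-compatible IR.
NPU_DETECT="gvadetect model=model.xml device=NPU"
echo "gst-launch-1.0 filesrc location=input.mp4 ! decodebin ! ${NPU_DETECT} ! fakesink"
```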

Changed in this Release

Docker image replaced with Dockerfile

  • An Ubuntu 22.04 Dockerfile is released instead of a Docker image.

Known Issues

| Issue | Issue Description |
| --- | --- |
| Intermittent accuracy failures with YOLOv5m and YOLOv5s | Object detection pipelines using YOLOv5m and YOLOv5s show intermittent inconsistency between runs |
| VAAPI memory with decodebin | When using decodebin with the vaapi-surface-sharing preprocessing backend, set a caps filter of "video/x-raw(memory:VASurface)" after decodebin to avoid pipeline initialization issues |
| Artifacts on sycl_meta_overlay | Visualizing inference results on GPU via sycl_meta_overlay may produce partially drawn bounding boxes and labels |
| Preview Architecture 2.0 samples | Preview Architecture 2.0 samples have known issues with inference results |
| Memory growth with meta_overlay | Some combinations of meta_overlay and encoders can lead to memory growth |

Fixed issues

| Issue # | Issue Description | Fix | Affected platforms |
| --- | --- | --- | --- |
| 397 | Installation error for DL Streamer, with both Debian packages and compiling from sources | Package dependencies have been updated | All |
| 399 | Compilation error when building the DL Streamer 2023 release with OpenVINO™ 2023.2.0 | DL Streamer no longer uses legacy OpenVINO™ APIs | All |

System Requirements

Please refer to Intel® DL Streamer documentation.

Installation Notes

There are several installation options for Pipeline Framework:

  1. Install Pipeline Framework from pre-built Debian packages
  2. Build a Docker image from the Dockerfile and run it
  3. Build Pipeline Framework from source code

For more detailed instructions please refer to Intel® DL Streamer Pipeline Framework installation guide.

Samples

The samples folder in Intel® DL Streamer Pipeline Framework repository contains command line, C++ and Python examples.

Legal Information

No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.

Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade.

This document contains information on products, services and/or processes in development. All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest forecast, schedule, specifications and roadmaps.

The products and services described may contain defects or errors which may cause deviations from published specifications. Current characterized errata are available on request.

Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.

*Other names and brands may be claimed as the property of others.

© 2024 Intel Corporation.

Release 2023.0

02 Oct 18:24

Intel® Deep Learning Streamer Pipeline Framework Release 2023.0

Intel® Deep Learning Streamer (Intel® DL Streamer) Pipeline Framework is a streaming media analytics framework, based on the GStreamer* multimedia framework, for creating complex media analytics pipelines. It ensures pipeline interoperability and provides optimized media and inference operations, using the Intel® Distribution of OpenVINO™ Toolkit Inference Engine backend, across Intel® architecture: CPU and iGPU.

This release includes Intel® DL Streamer Pipeline Framework elements that enable video and audio analytics capabilities (e.g., object detection, classification, audio event detection), as well as other elements for building end-to-end optimized pipelines in the GStreamer* framework.

The complete solution leverages:

  • Open source GStreamer* framework for pipeline management
  • GStreamer* plugins for input and output such as media files and real-time streaming from camera or network
  • Video decode and encode plugins, either CPU optimized plugins or GPU-accelerated plugins based on VAAPI
  • Deep learning models from Open Model Zoo (OMZ), converted from training frameworks such as TensorFlow* and Caffe*
  • The following elements in the Pipeline Framework repository:
| Element | Description |
| --- | --- |
| gvadetect | Performs object detection on a full frame or region of interest (ROI) using object detection models such as YOLOv4, MobileNet SSD, and Faster-RCNN. Outputs the ROI for detected objects. |
| gvaclassify | Performs object classification. Accepts an ROI as input and outputs classification results with the ROI metadata. |
| gvainference | Runs deep learning inference on a full frame or ROI using any model with RGB or BGR input. |
| gvaaudiodetect | Performs audio event detection using the AclNet model. |
| gvatrack | Performs object tracking using zero-term or imageless tracking algorithms. Assigns unique IDs to the tracked objects. |
| gvametaaggregate | Aggregates inference results from multiple pipeline branches. |
| gvametaconvert | Converts the metadata structure to JSON format. |
| gvametapublish | Publishes JSON metadata to MQTT or Kafka message brokers, or to files. |
| gvapython | Provides a callback to execute user-defined Python functions on every frame. Can be used for metadata conversion, inference post-processing, and other tasks. |
| gvawatermark | Overlays metadata on the video frame to visualize inference results. |
| gvafpscounter | Measures frames per second across multiple streams in a single process. |

For details on supported platforms, please refer to the System Requirements section.

For installing Pipeline Framework with prebuilt binaries or Docker*, or to build the binaries from source, please refer to the Intel® DL Streamer Pipeline Framework installation guide.

New in this Release

| Title | High-level description |
| --- | --- |
| Compatibility with OpenVINO™ Toolkit 2023.0 | Pipeline Framework has been updated to use the 2023.0.0 version of the OpenVINO™ Toolkit |
| Intel® Data Center GPU Flex Series PV support | Validated on Intel® Data Center GPU Flex Series 140 and 170 with pipelines/models/videos from the Intel® DL Streamer Pipeline Zoo, Pipeline Zoo Models, and Pipeline Zoo Media repositories. Tested with the latest GPU Linux release (https://dgpu-docs.intel.com/releases/production_682.14_20230804.html) |
| Updated to FFmpeg 5.1.3 | Updated FFmpeg from 5.1 to 5.1.3 |
| New media analytics model support | Added support for DeepSort and object tracking |
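The object tracking support noted above pairs naturally with gvatrack downstream of detection; the sketch below is illustrative only, with `input.mp4`, `model.xml`, and `results.json` as hypothetical placeholders.

```shell
# Sketch only: detection followed by zero-term tracking and JSON publishing.
TRACK_PIPE="filesrc location=input.mp4 ! decodebin \
! gvadetect model=model.xml device=CPU \
! gvatrack tracking-type=zero-term \
! gvametaconvert ! gvametapublish method=file file-path=results.json \
! fakesink"
echo "gst-launch-1.0 ${TRACK_PIPE}"
```

Each tracked object keeps a unique ID across frames, which is what re-identification schemes such as DeepSort build on.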

Changed in this Release

Deprecation Notices

  • Ubuntu 20.04 is no longer actively supported.
  • See the full list of currently deprecated properties in this table.
  • YOLOv2 is no longer a supported model.

Known Issues

| Issue | Issue Description |
| --- | --- |
| Intermittent accuracy failures with YOLOv5m and YOLOv5s | Object detection pipelines using YOLOv5m and YOLOv5s show intermittent inconsistency between runs |
| VAAPI memory with decodebin | When using decodebin with the vaapi-surface-sharing preprocessing backend, set a caps filter of "video/x-raw(memory:VASurface)" after decodebin to avoid pipeline initialization issues |
| Artifacts on sycl_meta_overlay | Visualizing inference results on GPU via sycl_meta_overlay may produce partially drawn bounding boxes and labels |
| Preview Architecture 2.0 samples | Preview Architecture 2.0 samples have known issues with inference results |
| Memory growth with meta_overlay | Some combinations of meta_overlay and encoders can lead to memory growth |

Fixed issues

| Issue # | Issue Description | Fix | Affected platforms |
| --- | --- | --- | --- |
| 336 | Regarding the length and width of rectangular training with YOLOv5: specify them separately in DL Streamer | Fixed layout handling in YOLO post-processing | All |

System Requirements

Please refer to Intel® DL Streamer documentation.

Installation Notes

There are several installation options for Pipeline Framework:

  1. Install Pipeline Framework from pre-built Debian packages
  2. Pull and run Docker image
  3. Build Pipeline Framework from source code

For more detailed instructions please refer to Intel® DL Streamer Pipeline Framework installation guide.

Samples

The samples folder in Intel® DL Streamer Pipeline Framework repository contains command line, C++ and Python examples.

Legal Information

No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.

Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade.

This document contains information on products, services and/or processes in development. All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest forecast, schedule, specifications and roadmaps.

The products and services described may contain defects or errors which may cause deviations from published specifications. Current characterized errata are available on request.

Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.

*Other names and brands may be claimed as the property of others.

© 2023 Intel Corporation.

Release 2022.3

03 Mar 21:51

Intel® Deep Learning Streamer Pipeline Framework Release 2022.3

Intel® Deep Learning Streamer (Intel® DL Streamer) Pipeline Framework is a streaming media analytics framework, based on the GStreamer* multimedia framework, for creating complex media analytics pipelines. It ensures pipeline interoperability and provides optimized media and inference operations, using the Intel® Distribution of OpenVINO™ Toolkit Inference Engine backend, across Intel® architecture: CPU and iGPU.

This release includes Intel® DL Streamer Pipeline Framework elements that enable video and audio analytics capabilities (e.g., object detection, classification, audio event detection), as well as other elements for building end-to-end optimized pipelines in the GStreamer* framework.

The complete solution leverages:

  • Open source GStreamer* framework for pipeline management
  • GStreamer* plugins for input and output such as media files and real-time streaming from camera or network
  • Video decode and encode plugins, either CPU optimized plugins or GPU-accelerated plugins based on VAAPI
  • Deep learning models from Open Model Zoo (OMZ), converted from training frameworks such as TensorFlow* and Caffe*
  • The following elements in the Pipeline Framework repository:
| Element | Description |
| --- | --- |
| gvadetect | Performs object detection on a full frame or region of interest (ROI) using object detection models such as YOLOv4, MobileNet SSD, and Faster-RCNN. Outputs the ROI for detected objects. |
| gvaclassify | Performs object classification. Accepts an ROI as input and outputs classification results with the ROI metadata. |
| gvainference | Runs deep learning inference on a full frame or ROI using any model with RGB or BGR input. |
| gvaaudiodetect | Performs audio event detection using the AclNet model. |
| gvatrack | Performs object tracking using zero-term or imageless tracking algorithms. Assigns unique IDs to the tracked objects. |
| gvametaaggregate | Aggregates inference results from multiple pipeline branches. |
| gvametaconvert | Converts the metadata structure to JSON format. |
| gvametapublish | Publishes JSON metadata to MQTT or Kafka message brokers, or to files. |
| gvapython | Provides a callback to execute user-defined Python functions on every frame. Can be used for metadata conversion, inference post-processing, and other tasks. |
| gvawatermark | Overlays metadata on the video frame to visualize inference results. |
| gvafpscounter | Measures frames per second across multiple streams in a single process. |

For details on supported platforms, please refer to the System Requirements section.

For installing Pipeline Framework with prebuilt binaries or Docker*, or to build the binaries from source, please refer to the Intel® DL Streamer Pipeline Framework installation guide.

New in this Release

| Title | High-level description |
| --- | --- |
| Intel® Data Center GPU Flex Series PV support | Validated on Intel® Data Center GPU Flex Series 140 and 170 with pipelines/models/videos from the Intel® DL Streamer Pipeline Zoo, Pipeline Zoo Models, and Pipeline Zoo Media repositories |
| Full Ubuntu 22.04 support | Intel® DL Streamer has moved primary support to the current Ubuntu 22.04 LTS release. Ubuntu 20.04 is still a supported OS, but Docker images and APT packages are based on 22.04 |
| Compatibility with OpenVINO™ Toolkit 2022.3 | Pipeline Framework has been updated to use the 2022.3.0 version of the OpenVINO™ Toolkit |
| Updated to FFmpeg 5.1 | Updated FFmpeg from 4.4 to 5.1 |

Changed in this Release

Deprecation Notices

  • Ubuntu 20.04 is still supported, but primary support has moved to the latest Ubuntu 22.04 LTS version.
  • See the full list of currently deprecated properties in this table.
  • YOLOv2 is no longer a supported model.

Known Issues

| Issue | Issue Description |
| --- | --- |
| Object tracking | If generate-objects is set to true, the pipeline can produce misaligned or extra object tracking bounding boxes |
| VAAPI memory with decodebin | When using decodebin with the vaapi-surface-sharing preprocessing backend, set a caps filter of "video/x-raw(memory:VASurface)" after decodebin to avoid pipeline initialization issues |
| Artifacts on sycl_meta_overlay | Visualizing inference results on GPU via sycl_meta_overlay may produce partially drawn bounding boxes and labels |
| Exception: Failed to construct OpenVINOImageInference | For errors similar to basic_string_view::substr: __pos (which is 18446744073709551603) > __size (which is 47), please share the versions of installed Debian packages in the link above, or reinstall the OS and follow the install guide |
| Draw_face_attributes sample | This sample errors out and reports that the inference request failed |
| Action recognition sample | Sample returns no results; as a workaround, change object_classify to video_inference in samples/gstreamer/gst_launch/action_recognition/action_recognition.sh |
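The action recognition workaround above is a one-line substitution in the sample script; the sketch below prints the edit command rather than running it, since the path only exists inside a DL Streamer checkout.

```shell
# Sketch of the workaround noted above: swap object_classify for
# video_inference in the sample script (command printed, not run).
SCRIPT=samples/gstreamer/gst_launch/action_recognition/action_recognition.sh
echo "sed -i 's/object_classify/video_inference/g' ${SCRIPT}"
```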

Fixed issues

| Issue # | Issue Description | Fix | Affected platforms |
| --- | --- | --- | --- |
| 325 | GStreamer benchmark sample gave "Permission denied" | Undefined variables (PROCESSES_COUNT and CHANNELS_PER_PROCESS) were used in benchmark_one_model.sh and benchmark_two_models.sh | All |

System Requirements

Please refer to Intel® DL Streamer documentation.

Installation Notes

There are several installation options for Pipeline Framework:

  1. Install Pipeline Framework from APT repository
  2. Pull and run Docker image
  3. Build Pipeline Framework from source code

For more detailed instructions please refer to Intel® DL Streamer Pipeline Framework installation guide.

Samples

The samples folder in Intel® DL Streamer Pipeline Framework repository contains command line, C++ and Python examples.

Legal Information

No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.

Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade.

This document contains information on products, services and/or processes in development. All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest forecast, schedule, specifications and roadmaps.

The products and services described may contain defects or errors which may cause deviations from published specifications. Current characterized errata are available on request.

Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.

*Other names and brands may be claimed as the property of others.

© 2023 Intel Corporation.

Release 2022.2

07 Oct 00:55

Intel® Deep Learning Streamer Pipeline Framework 2022.2

Intel® Deep Learning Streamer (Intel® DL Streamer) Pipeline Framework is a streaming media analytics framework, based on the GStreamer* multimedia framework, for creating complex media analytics pipelines. It ensures pipeline interoperability and provides optimized media and inference operations, using the Intel® Distribution of OpenVINO™ Toolkit Inference Engine backend, across Intel® architecture: CPU and iGPU.

This release includes Intel® DL Streamer Pipeline Framework elements that enable video and audio analytics capabilities (e.g., object detection, classification, audio event detection), as well as other elements for building end-to-end optimized pipelines in the GStreamer* framework.

The complete solution leverages:

  • Open source GStreamer* framework for pipeline management
  • GStreamer* plugins for input and output such as media files and real-time streaming from camera or network
  • Video decode and encode plugins, either CPU optimized plugins or GPU-accelerated plugins based on VAAPI
  • Deep learning models from Open Model Zoo (OMZ), converted from training frameworks such as TensorFlow* and Caffe*
  • The following elements in the Pipeline Framework repository:
| Element | Description |
| --- | --- |
| gvadetect | Performs object detection on a full frame or region of interest (ROI) using object detection models such as YOLO v3-v5, MobileNet SSD, and Faster-RCNN. Outputs the ROI for detected objects. |
| gvaclassify | Performs object classification. Accepts an ROI as input and outputs classification results with the ROI metadata. |
| gvainference | Runs deep learning inference on a full frame or ROI using any model with RGB or BGR input. |
| gvaaudiodetect | Performs audio event detection using the AclNet model. |
| gvatrack | Performs object tracking using zero-term or imageless tracking algorithms. Assigns unique IDs to the tracked objects. |
| gvametaaggregate | Aggregates inference results from multiple pipeline branches. |
| gvametaconvert | Converts the metadata structure to JSON format. |
| gvametapublish | Publishes JSON metadata to MQTT or Kafka message brokers, or to files. |
| gvapython | Provides a callback to execute user-defined Python functions on every frame. Can be used for metadata conversion, inference post-processing, and other tasks. |
| gvawatermark | Overlays metadata on the video frame to visualize inference results. |
| gvafpscounter | Measures frames per second across multiple streams in a single process. |

For details on supported platforms, please refer to the System Requirements section.

For installing Pipeline Framework with prebuilt binaries or Docker*, or to build the binaries from source, please refer to the Intel® DL Streamer Pipeline Framework installation guide.

New in this Release

| Title | High-level description |
| --- | --- |
| Intel® Data Center GPU Flex Series Beta support | Validated on Intel® Data Center GPU Flex Series 140 and 170 with pipelines/models/videos from the Intel® DL Streamer Pipeline Zoo, Pipeline Zoo Models, and Pipeline Zoo Media repositories |
| Updated to GStreamer 1.20.3 | Upgraded from GStreamer 1.18.4 to the latest stable GStreamer 1.20.3 |
| YOLOv5 support | Added YOLOv5 post-processing support |
| Architecture 2.0 [Preview] | Includes a memory-interop header-only library for zero-copy buffer sharing on CPU and GPU, C++ elements, and integration into GStreamer as three sub-components |
| New non-GStreamer samples 2.0 | Samples for FFmpeg+OpenVINO and FFmpeg+DPCPP |
| New GStreamer samples 2.0 based on bin elements and sub-pipelines | face_detection_and_classification_cpu, action_recognition, instance_segmentation, roi_background_removal, classification_with_background_removal |
| New element object_track supporting object tracking on GPU (device=GPU) | device=GPU in gvatrack is discontinued; instead use vaapipostproc ! object_track spatial-feature=sliced-histogram device=GPU ! vaapipostproc |
| New element sycl_meta_overlay supporting inference results visualization on GPU (device=GPU) | device=GPU in gvawatermark is discontinued; instead use vaapipostproc ! opencv_meta_overlay attach-label-mask=true ! sycl_meta_overlay ! vaapipostproc |
| New property labels-file in gvainference/gvadetect/gvaclassify elements | Allows passing labels as a .txt file |
| New property scale-method in gvainference/gvadetect/gvaclassify elements | Allows selecting the scale method used in pre-processing |
| Compatibility with OpenVINO™ Toolkit 2022.2 | Pipeline Framework has been updated to use the 2022.2.0 version of the OpenVINO™ Toolkit |
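The new labels-file property above can be sketched as follows; `model.xml`, `coco_labels.txt`, and `input.mp4` are hypothetical placeholders, and scale-method would be set as an additional property in the same way.

```shell
# Sketch only: use the new labels-file property to supply class labels
# from a plain .txt file instead of a model-proc entry.
DETECT="gvadetect model=model.xml labels-file=coco_labels.txt"
echo "gst-launch-1.0 filesrc location=input.mp4 ! decodebin ! ${DETECT} ! fakesink"
```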

Changed in this Release

Deprecation Notices

  • Deprecated device=GPU in gvatrack and gvawatermark.
  • Please see the full list of currently deprecated properties in this table.

Known Issues

| Issue | Issue Description |
| --- | --- |
| Artifacts on sycl_meta_overlay | Visualizing inference results on GPU via sycl_meta_overlay may produce partially drawn bounding boxes and labels |
| Exception: Failed to construct OpenVINOImageInference | For errors similar to basic_string_view::substr: __pos (which is 18446744073709551603) > __size (which is 47), please share the versions of installed Debian packages in the link above, or reinstall the OS and follow the install guide |

Fixed issues

| Issue # | Issue Description | Fix | Affected platforms |
| --- | --- | --- | --- |
| n/a | Backwards compatibility issues when using gvapython and GStreamer 1.18 | Added support to preserve legacy compatibility with the gvapython element | All |

System Requirements

Please refer to Intel® DL Streamer documentation.

Installation Notes

The following installation option is available for Pipeline Framework:

  1. Build Pipeline Framework from source code

For more detailed instructions please refer to Intel® DL Streamer Pipeline Framework installation guide.

Samples

The samples folder in Intel® DL Streamer Pipeline Framework repository contains command line, C++ and Python examples.

Legal Information

No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.

Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade.

This document contains information on products, services and/or processes in development. All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest forecast, schedule, specifications and roadmaps.

The products and services described may contain defects or errors which may cause deviations from published specifications. Current characterized errata are available on request.

Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.

*Other names and brands may be claimed as the property of others.

© 2022 Intel Corporation.

2022.2 Pre-release 2

23 Sep 23:31

Intel® Deep Learning Streamer Pipeline Framework 2022.2 Pre-release 2

Intel® Deep Learning Streamer (Intel® DL Streamer) Pipeline Framework is a streaming media analytics framework, based on the GStreamer* multimedia framework, for creating complex media analytics pipelines. It ensures pipeline interoperability and provides optimized media and inference operations using the Intel® Distribution of OpenVINO™ Toolkit Inference Engine backend across Intel® architecture: CPU and iGPU.

This release includes Intel® DL Streamer Pipeline Framework elements that enable video and audio analytics capabilities (e.g., object detection, classification, audio event detection), and other elements for building end-to-end optimized pipelines in the GStreamer* framework.

The complete solution leverages:

  • Open source GStreamer* framework for pipeline management
  • GStreamer* plugins for input and output such as media files and real-time streaming from camera or network
  • Video decode and encode plugins, either CPU optimized plugins or GPU-accelerated plugins based on VAAPI
  • Deep Learning models converted from training frameworks TensorFlow*, Caffe* etc. from Open Model Zoo (OMZ)
  • The following elements in the Pipeline Framework repository:
| Element | Description |
|---|---|
| gvadetect | Performs object detection on a full frame or region of interest (ROI) using object detection models such as YOLOv4, MobileNet SSD, Faster-RCNN, etc. Outputs the ROI for detected objects. |
| gvaclassify | Performs object classification. Accepts an ROI as input and outputs classification results with the ROI metadata. |
| gvainference | Runs deep learning inference on a full frame or ROI using any model with an RGB or BGR input. |
| gvaaudiodetect | Performs audio event detection using the AclNet model. |
| gvatrack | Performs object tracking using zero-term or imageless tracking algorithms. Assigns unique object IDs to the tracked objects. |
| gvametaaggregate | Aggregates inference results from multiple pipeline branches. |
| gvametaconvert | Converts the metadata structure to the JSON format. |
| gvametapublish | Publishes the JSON metadata to MQTT or Kafka message brokers or files. |
| gvapython | Provides a callback to execute user-defined Python functions on every frame. Can be used for metadata conversion, inference post-processing, and other tasks. |
| gvawatermark | Overlays the metadata on the video frame to visualize the inference results. |
| gvafpscounter | Measures frames per second across multiple streams in a single process. |
| gvaactionrecognitionbin | Performs full-frame action recognition inference using the encoder and decoder models of action-recognition-0001/driver-action-recognition-adas-0002. |
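The gvapython element described above calls a user-supplied function once per frame. The sketch below shows the shape such a callback can take; in a real pipeline the frame argument is a gstgva VideoFrame, so tiny stub classes stand in here (an assumption, made only so the sketch runs standalone).

```python
# Sketch of a gvapython callback. Stub classes imitate the gstgva VideoFrame /
# RegionOfInterest objects so the example is self-contained; in a real pipeline
# gvapython supplies the actual frame object.

class StubRegion:
    """Stands in for a gstgva RegionOfInterest in this sketch."""
    def __init__(self, label, confidence):
        self._label, self._confidence = label, confidence
    def label(self):
        return self._label
    def confidence(self):
        return self._confidence

class StubFrame:
    """Stands in for a gstgva VideoFrame in this sketch."""
    def __init__(self, regions):
        self._regions = regions
    def regions(self):
        return self._regions

def process_frame(frame) -> bool:
    """Example post-processing: count high-confidence detections.
    Returning True keeps the buffer flowing through the pipeline."""
    kept = [r for r in frame.regions() if r.confidence() >= 0.5]
    print(f"kept {len(kept)} of {len(frame.regions())} regions")
    return True

# Example invocation (gvapython would call process_frame for every buffer):
frame = StubFrame([StubRegion("person", 0.9), StubRegion("car", 0.3)])
result = process_frame(frame)
```

In a pipeline this would be wired up with something like `gvapython module=filter.py function=process_frame` (property names per the gvapython documentation).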

For the details of supported platforms, please refer to System Requirements section.

To install Pipeline Framework from prebuilt binaries or Docker*, or to build the binaries from source, please refer to the Intel® DL Streamer Pipeline Framework installation guide.

New in this Release

| Title | High-level description |
|---|---|
| Intel® Data Center GPU Flex Series Beta support | Validated on Intel® Data Center GPU Flex Series 140 and 170 with pipelines/models/videos from the Intel® DL Streamer Pipeline Zoo, Pipeline Zoo Models, and Pipeline Zoo Media repositories |
| New element object_track supporting object tracking on GPU | device=GPU in gvatrack is discontinued; instead use vaapipostproc ! object_track spatial-feature=sliced-histogram device=GPU ! vaapipostproc |
| New element watermark_sycl supporting inference results visualization on GPU | device=GPU in gvawatermark is discontinued; instead use vaapipostproc ! watermark_opencv attach-label-mask=true ! watermark_sycl ! vaapipostproc |
| New property labels-file in gvainference/gvadetect/gvaclassify elements | Allows passing labels as a .txt file |
| New property scale-method in gvainference/gvadetect/gvaclassify elements | Allows selecting the scale method used in pre-processing |
| Compatibility with OpenVINO™ Toolkit 2022.2 Pre-release | Pipeline Framework has been updated to use the 2022.2.0.dev20220829 version of the OpenVINO™ Toolkit |
| YOLOv5 Support | Added YOLOv5 postprocessing support |

Changed in this Release

Deprecation Notices

  • Deprecated device=GPU in gvatrack and gvawatermark

Known Issues

| Issue | Description |
|---|---|
| DMABuf memory not working | The media driver fails when processing DMABuf memory. Workaround: use VASurface memory, i.e., replace the GStreamer caps video/x-raw(memory:DMABuf) with video/x-raw(memory:VASurface) |
| Artifacts on watermark_sycl | Running inference results visualization on GPU via watermark_sycl may produce partially drawn bounding boxes |
| gvawatermark fails if the DPC++ environment is initialized | Workaround: set device=CPU (gvawatermark device=CPU), or do not initialize the DPC++ environment when using gvawatermark to visualize inference results on CPU |
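The DMABuf workaround above is a caps substitution in the pipeline description; a trivial sketch (the surrounding pipeline text is illustrative, not from the release notes):

```python
# Sketch of the DMABuf workaround: swap the memory caps feature in a
# pipeline description string. The pipeline itself is a placeholder.
pipeline = (
    "filesrc location=input.mp4 ! decodebin ! vaapipostproc ! "
    "video/x-raw(memory:DMABuf) ! gvadetect model=model.xml device=GPU ! fakesink"
)
patched = pipeline.replace("memory:DMABuf", "memory:VASurface")
print(patched)
```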

Fixed issues

| Issue # | Issue Description | Fix | Affected platforms |
|---|---|---|---|
| | Backwards compatibility issues when using gvapython and GST 1.18 | Added support to preserve legacy compatibility with the gvapython element | All |

System Requirements

Please refer to Intel® DL Streamer documentation.

Installation Notes

The following installation option is available for Pipeline Framework:

  1. Build Pipeline Framework from source code

For more detailed instructions please refer to Intel® DL Streamer Pipeline Framework installation guide.

Samples

The samples folder in Intel® DL Streamer Pipeline Framework repository contains command line, C++ and Python examples.

Legal Information

No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.

Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade.

This document contains information on products, services and/or processes in development. All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest forecast, schedule, specifications and roadmaps.

The products and services described may contain defects or errors which may cause deviations from published specifications. Current characterized errata are available on request.

Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.

*Other names and brands may be claimed as the property of others.

© 2022 Intel Corporation.

2022.2 Pre-release

22 Sep 02:17

Intel® Deep Learning Streamer Pipeline Framework 2022.2 Pre-release

Intel® Deep Learning Streamer (Intel® DL Streamer) Pipeline Framework is a streaming media analytics framework, based on the GStreamer* multimedia framework, for creating complex media analytics pipelines. It ensures pipeline interoperability and provides optimized media and inference operations using the Intel® Distribution of OpenVINO™ Toolkit Inference Engine backend across Intel® architecture: CPU and iGPU.

This release includes Intel® DL Streamer Pipeline Framework elements that enable video and audio analytics capabilities (e.g., object detection, classification, audio event detection), and other elements for building end-to-end optimized pipelines in the GStreamer* framework.

The complete solution leverages:

  • Open source GStreamer* framework for pipeline management
  • GStreamer* plugins for input and output such as media files and real-time streaming from camera or network
  • Video decode and encode plugins, either CPU optimized plugins or GPU-accelerated plugins based on VAAPI
  • Deep Learning models converted from training frameworks TensorFlow*, Caffe* etc. from Open Model Zoo (OMZ)
  • The following elements in the Pipeline Framework repository:
| Element | Description |
|---|---|
| gvadetect | Performs object detection on a full frame or region of interest (ROI) using object detection models such as YOLOv4, MobileNet SSD, Faster-RCNN, etc. Outputs the ROI for detected objects. |
| gvaclassify | Performs object classification. Accepts an ROI as input and outputs classification results with the ROI metadata. |
| gvainference | Runs deep learning inference on a full frame or ROI using any model with an RGB or BGR input. |
| gvaaudiodetect | Performs audio event detection using the AclNet model. |
| gvatrack | Performs object tracking using zero-term or imageless tracking algorithms. Assigns unique object IDs to the tracked objects. |
| gvametaaggregate | Aggregates inference results from multiple pipeline branches. |
| gvametaconvert | Converts the metadata structure to the JSON format. |
| gvametapublish | Publishes the JSON metadata to MQTT or Kafka message brokers or files. |
| gvapython | Provides a callback to execute user-defined Python functions on every frame. Can be used for metadata conversion, inference post-processing, and other tasks. |
| gvawatermark | Overlays the metadata on the video frame to visualize the inference results. |
| gvafpscounter | Measures frames per second across multiple streams in a single process. |
| gvaactionrecognitionbin | Performs full-frame action recognition inference using the encoder and decoder models of action-recognition-0001/driver-action-recognition-adas-0002. |
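gvametaconvert emits one JSON message per frame, which gvametapublish can forward to MQTT, Kafka, or files. A consumer-side sketch is shown below; the message shape is illustrative of commonly seen gvametaconvert output, not a normative schema.

```python
import json

# Illustrative message in the shape gvametaconvert typically produces for a
# detection; the exact field names are an assumption documented upstream.
message = """{
  "timestamp": 0,
  "objects": [
    {"detection": {"label": "person", "confidence": 0.92,
                   "bounding_box": {"x_min": 0.1, "y_min": 0.2,
                                    "x_max": 0.4, "y_max": 0.9}}}
  ]
}"""

data = json.loads(message)
labels = [o["detection"]["label"] for o in data.get("objects", [])]
print(labels)
```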

For the details of supported platforms, please refer to System Requirements section.

To install Pipeline Framework from prebuilt binaries or Docker*, or to build the binaries from source, please refer to the Intel® DL Streamer Pipeline Framework installation guide.

New in this Release

| Title | High-level description |
|---|---|
| Intel® Data Center GPU Flex Series Beta support | Validated on Intel® Data Center GPU Flex Series 140 and 170 with pipelines/models/videos from the Intel® DL Streamer Pipeline Zoo, Pipeline Zoo Models, and Pipeline Zoo Media repositories |
| New element object_track supporting object tracking on GPU | device=GPU in gvatrack is discontinued; instead use vaapipostproc ! object_track spatial-feature=sliced-histogram device=GPU ! vaapipostproc |
| New element watermark_sycl supporting inference results visualization on GPU | device=GPU in gvawatermark is discontinued; instead use vaapipostproc ! watermark_opencv attach-label-mask=true ! watermark_sycl ! vaapipostproc |
| New property labels-file in gvainference/gvadetect/gvaclassify elements | Allows passing labels as a .txt file |
| New property scale-method in gvainference/gvadetect/gvaclassify elements | Allows selecting the scale method used in pre-processing |
| Compatibility with OpenVINO™ Toolkit 2022.2 Pre-release | Pipeline Framework has been updated to use the 2022.2.0.dev20220829 version of the OpenVINO™ Toolkit |
| YOLOv5 Support | Added YOLOv5 postprocessing support |
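YOLOv5 post-processing ends with confidence filtering and non-maximum suppression over the decoded boxes. The following is a compact, generic NMS sketch for reference; it is not DL Streamer's internal implementation.

```python
# Generic confidence filter + non-maximum suppression (NMS), the final stage
# of YOLOv5-style post-processing. Plain-Python sketch for illustration only.

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def nms(detections, conf_thr=0.25, iou_thr=0.45):
    """detections: list of (box, score); returns detections kept after NMS."""
    cands = sorted((d for d in detections if d[1] >= conf_thr),
                   key=lambda d: d[1], reverse=True)
    kept = []
    for box, score in cands:
        # Keep a box only if it does not overlap a higher-scoring kept box.
        if all(iou(box, k[0]) < iou_thr for k in kept):
            kept.append((box, score))
    return kept

dets = [([0, 0, 10, 10], 0.9), ([1, 1, 10, 10], 0.8), ([20, 20, 30, 30], 0.7)]
result = nms(dets)
print(result)  # the 0.8 box overlaps the 0.9 box and is suppressed
```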

Changed in this Release

Deprecation Notices

  • Deprecated device=GPU in gvatrack and gvawatermark

Known Issues

| Issue | Description |
|---|---|
| DMABuf memory not working | The media driver fails when processing DMABuf memory. Workaround: use VASurface memory, i.e., replace the GStreamer caps video/x-raw(memory:DMABuf) with video/x-raw(memory:VASurface) |
| Artifacts on watermark_sycl | Running inference results visualization on GPU via watermark_sycl may produce partially drawn bounding boxes |
| gvawatermark fails if the DPC++ environment is initialized | Workaround: set device=CPU (gvawatermark device=CPU), or do not initialize the DPC++ environment when using gvawatermark to visualize inference results on CPU |

System Requirements

Please refer to Intel® DL Streamer documentation.

Installation Notes

The following installation option is available for Pipeline Framework:

  1. Build Pipeline Framework from source code

For more detailed instructions please refer to Intel® DL Streamer Pipeline Framework installation guide.

Samples

The samples folder in Intel® DL Streamer Pipeline Framework repository contains command line, C++ and Python examples.

Legal Information

No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.

Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade.

This document contains information on products, services and/or processes in development. All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest forecast, schedule, specifications and roadmaps.

The products and services described may contain defects or errors which may cause deviations from published specifications. Current characterized errata are available on request.

Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.

*Other names and brands may be claimed as the property of others.

© 2022 Intel Corporation.

Release 2022.1

26 May 20:15

Intel® Deep Learning Streamer (Intel® DL Streamer) Pipeline Framework is a streaming media analytics framework, based on the GStreamer* multimedia framework, for creating complex media analytics pipelines. It ensures pipeline interoperability and provides optimized media and inference operations using the Intel® Distribution of OpenVINO™ Toolkit Inference Engine backend across Intel® architecture: CPU and iGPU.

This release includes Intel® DL Streamer Pipeline Framework elements that enable video and audio analytics capabilities (e.g., object detection, classification, audio event detection), and other elements for building end-to-end optimized pipelines in the GStreamer* framework.

The complete solution leverages:

  • Open source GStreamer* framework for pipeline management
  • GStreamer* plugins for input and output such as media files and real-time streaming from camera or network
  • Video decode and encode plugins, either CPU optimized plugins or GPU-accelerated plugins based on VAAPI
  • Deep Learning models converted from training frameworks TensorFlow*, Caffe* etc. from Open Model Zoo (OMZ)
  • The following elements in the Pipeline Framework repository:
| Element | Description |
|---|---|
| gvadetect | Performs object detection on a full frame or region of interest (ROI) using object detection models such as YOLOv4, MobileNet SSD, Faster-RCNN, etc. Outputs the ROI for detected objects. |
| gvaclassify | Performs object classification. Accepts an ROI as input and outputs classification results with the ROI metadata. |
| gvainference | Runs deep learning inference on a full frame or ROI using any model with an RGB or BGR input. |
| gvaaudiodetect | Performs audio event detection using the AclNet model. |
| gvatrack | Performs object tracking using zero-term or imageless tracking algorithms. Assigns unique object IDs to the tracked objects. |
| gvametaaggregate | Aggregates inference results from multiple pipeline branches. |
| gvametaconvert | Converts the metadata structure to the JSON format. |
| gvametapublish | Publishes the JSON metadata to MQTT or Kafka message brokers or files. |
| gvapython | Provides a callback to execute user-defined Python functions on every frame. Can be used for metadata conversion, inference post-processing, and other tasks. |
| gvawatermark | Overlays the metadata on the video frame to visualize the inference results. |
| gvafpscounter | Measures frames per second across multiple streams in a single process. |
| gvaactionrecognitionbin | Performs full-frame action recognition inference using the encoder and decoder models of action-recognition-0001/driver-action-recognition-adas-0002. |

For the details of supported platforms, please refer to System Requirements section.

To install Pipeline Framework from prebuilt binaries or Docker*, or to build the binaries from source, please refer to the Intel® DL Streamer Pipeline Framework installation guide.

New in this Release

| Title | High-level description |
|---|---|
| Introducing the Intel® Deep Learning Streamer | Intel® DL Streamer Pipeline Framework (formerly the DL Streamer distribution of the OpenVINO™ Toolkit) has been separated from the OpenVINO™ toolkit and is the core component of Intel® DL Streamer. Other components of Intel® DL Streamer include the Intel® DL Streamer Pipeline Server and the Intel® DL Streamer Pipeline Zoo |
| New distribution mechanisms | Install via the APT package manager or pull Docker images |
| Compatibility with OpenVINO™ Toolkit 2022.1 | Pipeline Framework has been updated to use the 2022.1 version of the OpenVINO™ Toolkit |
| Web portal | Your one stop for all things Intel® DL Streamer: https://dlstreamer.github.io |
| New property 'labels' in gvadetect and gvaclassify elements | Pass labels as a .txt file, as an alternative to the 'labels' array in the model-proc file |
| Benchmark sample supports multi-process execution | Frames-per-second measurements accumulated across multiple processes |
| Improved 'vaapi' and 'vaapi-sharing' pre-processing | Support aspect ratio and padding in VAAPI-based pre-processing |
| Autovideosink | All samples now use autovideosink instead of ximagesink. Run 'source samples/force_ximagesink.sh' to force autovideosink to use ximagesink, e.g. when running samples over a remote X connection |
| More model-proc files | Model-proc files expanded to cover more models from the OpenVINO™ Open Model Zoo |
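The new 'labels' property takes a plain .txt file; one class name per line, with the line index serving as the class id, is assumed here. A minimal sketch of writing and reading such a file (file name and class names are placeholders):

```python
import os
import tempfile

# Sketch: a labels .txt file as consumed via the 'labels' property.
# One class name per line (line index -> class id) is an assumption here.
names = ["person", "bicycle", "car"]

path = os.path.join(tempfile.mkdtemp(), "labels.txt")
with open(path, "w") as f:
    f.write("\n".join(names) + "\n")

# Reading the file back, skipping blank lines:
with open(path) as f:
    labels = [line.strip() for line in f if line.strip()]
print(labels)
```

In a pipeline this file would be referenced with something like `gvadetect model=model.xml labels=labels.txt` (property name per the release notes above).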

Changed in this Release

Deprecation Notices

  • Object tracking type 'short-term' has been deprecated and removed from the gvatrack element. Either replace 'tracking-type=short-term' with 'tracking-type=zero-term' and remove the 'inference-interval=N' property in gvadetect/gvaclassify, or replace 'tracking-type=short-term' with 'tracking-type=short-term-imageless'
  • Support for the Ubuntu 18.04 operating system has been deprecated. Compilation from source on the host system is provided "as is", but requires installing a media driver, e.g. from here. This usage will be removed in a future release.
  • The following properties are deprecated in the gvainference/gvadetect/gvaclassify elements and may be removed in future releases:
    • cpu-throughput-streams
    • gpu-throughput-streams
    • no-block
    • pre-process-config
    • device-extensions
    • reshape
    • reshape-width
    • reshape-height

Known Issues

| Issue # | Issue Description | Workaround | Affected platforms |
|---|---|---|---|
| 163 | Low inference quality when using the vaapi-surface-sharing pre-proc on a system with a highly loaded CPU: inference results for adjacent frames can be skipped or duplicated | Use the vaapi pre-proc instead of vaapi-surface-sharing; this may reduce performance | iGPU |
| 195 | vaapi-surface-sharing is outperformed by the vaapi preprocess-backend on iGPU: the vaapi-surface-sharing pre-proc shows worse performance than the vaapi pre-proc | Use the vaapi pre-proc instead of vaapi-surface-sharing | iGPU |
| | GPU watermark doesn't work on iGPU in 12th Gen Intel® Core processors | Use watermark on CPU | iGPU (12th Gen Intel® Core) |
| | Missing frames in inference results when using batching with zero-copy on iGPU in 11th Gen Intel® Core processors | Batching with pre-process-backend=vaapi | iGPU (11th Gen Intel® Core) |
| | Multi-stream pipeline hangs in the batch-size and vaapi-sharing case | | iGPU |

Fixed issues

| Issue # | Issue Description | Workaround | Affected platforms |
|---|---|---|---|
| 249 | Preprocessing backend vaapi-surface-sharing not working on 12th Gen Intel® Core processors: the OpenCL and VAAPI drivers fail to correctly share NV12 image buffers on this platform | Use the vaapi pre-proc instead of vaapi-surface-sharing | iGPU |

System Requirements

Please refer to Intel® DL Streamer documentation.

Installation Notes

There are several installation options for Pipeline Framework:

  1. Install APT packages (Ubuntu 20.04). Two sub-options
  2. Dockerhub Image
  3. Build Pipeline Framework from source code
  4. Build Pipeline Framework Docker image. Two sub-options
    • Build Docker image using APT packages
    • Build Docker image from source code

For more detailed instructions please refer to Intel® DL Streamer Pipeline Framework installation guide.

Samples

The samples folder in Intel® DL Streamer Pipeline Framework repository contains command line, C++ and Python examples.

Legal Information

No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.

Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a partic...


Release 2021.4.2

15 Dec 12:23
88fc68c

What’s New in This Release:

| Title | High-level description |
|---|---|
| Support for 12th Gen Intel® Core™ processor (formerly Alder Lake) | Limitation: vaapi-surface-sharing does not work on Alder Lake. Please use the vaapi pre-proc (the default method for GPU memory) instead of vaapi-surface-sharing. |

Release 2021.4.1

10 Sep 06:46
747878f

What’s New in This Release:

| Title | High-level description |
|---|---|
| Preview of 12th Gen Intel® Core™ processor (formerly Alder Lake) support | Pipelines with GPU-accelerated decode and pre-processing on Alder Lake platforms require a post-install step to update the media driver to the latest version; see Install-on-Alder-Lake for details. Limitation: vaapi-surface-sharing does not work on ADL. Please use the vaapi pre-proc instead of vaapi-surface-sharing. |
| Support for more models | Out-of-the-box support for more Open Model Zoo models (person-vehicle-bike-detection-2004 and others with a similar output layer) and the public models EfficientNet and EfficientDet |
| Model-proc files documentation | Added documentation on how to create pre-/post-processing configuration files for custom models. The Wiki contains documentation about pre-/post-processing using model_proc files and a tutorial on how to create a model_proc file for a custom model |
| Operating system deprecation notice | DL Streamer will drop support for CentOS and will introduce support for Red Hat Enterprise Linux (RHEL) 8 starting with release 2022.1 |