
2024.0.1

@tbujewsk released this 25 Apr 08:21

Intel® Deep Learning Streamer Pipeline Framework Release 2024.0.1

Intel® Deep Learning Streamer (Intel® DL Streamer) Pipeline Framework is a streaming media analytics framework, based on the GStreamer* multimedia framework, for creating complex media analytics pipelines. It ensures pipeline interoperability and provides optimized media and inference operations using the Intel® Distribution of OpenVINO™ Toolkit Inference Engine backend across Intel® architecture: CPU, discrete GPU, integrated GPU, and NPU.

This release includes Intel® DL Streamer Pipeline Framework elements that enable video and audio analytics capabilities (e.g., object detection, classification, audio event detection), as well as other elements for building end-to-end optimized pipelines in the GStreamer* framework.

The complete solution leverages:

  • Open source GStreamer* framework for pipeline management
  • GStreamer* plugins for input and output such as media files and real-time streaming from camera or network
  • Video decode and encode plugins, either CPU optimized plugins or GPU-accelerated plugins based on VAAPI
  • Deep learning models from Open Model Zoo (OMZ), converted from training frameworks such as TensorFlow* and Caffe*
  • The following elements in the Pipeline Framework repository:
  • gvadetect: Performs object detection on a full frame or region of interest (ROI) using object detection models such as YOLOv4, MobileNet-SSD, Faster R-CNN, etc. Outputs the ROIs for detected objects.
  • gvaclassify: Performs object classification. Accepts an ROI as input and outputs classification results with the ROI metadata.
  • gvainference: Runs deep learning inference on a full frame or ROI using any model with an RGB or BGR input.
  • gvaaudiodetect: Performs audio event detection using the AclNet model.
  • gvatrack: Performs object tracking using zero-term or imageless tracking algorithms. Assigns unique object IDs to the tracked objects.
  • gvametaaggregate: Aggregates inference results from multiple pipeline branches.
  • gvametaconvert: Converts the metadata structure to the JSON format.
  • gvametapublish: Publishes the JSON metadata to MQTT or Kafka message brokers or to files.
  • gvapython: Provides a callback to execute user-defined Python functions on every frame. Can be used for metadata conversion, inference post-processing, and other tasks.
  • gvawatermark: Overlays the metadata on the video frame to visualize the inference results.
  • gvafpscounter: Measures frames per second across multiple streams in a single process.
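
As a minimal illustration of how these elements compose into a GStreamer* pipeline, the sketch below chains gvadetect, gvaclassify, and gvawatermark on a local video file. The input file name, model .xml paths, and object-class value are placeholders, not assets shipped with this release; substitute your own OpenVINO™ IR models.

    # Hypothetical example: detect objects, classify vehicle ROIs, and render results on screen.
    # Replace input.mp4, the model .xml paths, and object-class with your own assets.
    gst-launch-1.0 filesrc location=input.mp4 ! decodebin ! \
      gvadetect model=detection_model.xml device=CPU ! \
      gvaclassify model=classification_model.xml object-class=vehicle ! \
      gvawatermark ! videoconvert ! autovideosink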

For the details of supported platforms, please refer to the System Requirements section.

To install Pipeline Framework from the prebuilt binaries or with Docker*, or to build the binaries from the open source code, please refer to the Intel® DL Streamer Pipeline Framework installation guide.

New in this Release

  • Added support for the latest Ultralytics YOLO models: YOLOv7, YOLOv8, and YOLOv9
  • Added support for YOLOX models
  • Added support for deployment of models trained by GETI v1.8: bounding-box detection and classification (single- and multi-label)
  • Automatic pre-/post-processing based on the model descriptor (model-proc file not required) for YOLOv8, YOLOv9, and GETI models
  • Reduced the size of the Docker image generated from the published Docker file
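
To illustrate the automatic pre-/post-processing feature, the sketch below runs a YOLOv8 model with gvadetect without passing a model-proc file and writes detections to JSON via gvametaconvert and gvametapublish. The input file, model path, and output file name are placeholders; the model is assumed to be an OpenVINO™ IR exported from Ultralytics.

    # Hypothetical example: YOLOv8 inference without a model-proc file.
    # yolov8n.xml is a placeholder for your exported OpenVINO IR.
    gst-launch-1.0 filesrc location=input.mp4 ! decodebin ! \
      gvadetect model=yolov8n.xml device=GPU ! \
      gvametaconvert ! gvametapublish method=file file-path=./detections.json ! \
      fakesink sync=false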

Changed in this Release

Docker image replaced with a Docker file

  • A reduced Ubuntu 22.04 Docker file is released.

Known Issues

  • VAAPI memory with decodebin: if you use decodebin together with the vaapi-surface-sharing preprocessing backend, set a caps filter with "video/x-raw(memory:VASurface)" after decodebin to avoid pipeline initialization issues (see the example pipeline after this list).
  • Artifacts with sycl_meta_overlay: running inference results visualization on GPU via sycl_meta_overlay may produce partially drawn bounding boxes and labels.
  • Preview Architecture 2.0 samples: Preview Architecture 2.0 samples have known issues with inference results.
  • Memory growth with meta_overlay: some combinations of meta_overlay and encoders can lead to memory growth.
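
A minimal sketch of the decodebin workaround described above, assuming a GPU pipeline with the vaapi-surface-sharing preprocessing backend; the input file and model path are placeholders:

    # Workaround sketch: force VASurface memory caps after decodebin before gvadetect.
    gst-launch-1.0 filesrc location=input.mp4 ! decodebin ! \
      "video/x-raw(memory:VASurface)" ! \
      gvadetect model=detection_model.xml device=GPU pre-process-backend=vaapi-surface-sharing ! \
      fakesink sync=false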

Fixed issues

All fixes listed below apply to all platforms.

  • #390: How to install packages with sudo inside the docker container intel/dlstreamer:latest. Fix: start the container as the root user (-u 0), for example docker run -it -u 0 --rm ..., after which binaries can be updated.
  • #392: Installation error for DL Streamer with OpenVINO 2023.2. Fix: the 2024.0 version supports API 2.0; please check it and raise a new issue if the problem is still present.
  • #393: Debian file location for DL Streamer 2022.3. Fix: the error no longer occurs for the user.
  • #394: Custom YOLOv5m accuracy drop in DL Streamer with model-proc. Fix: documented the procedure to convert the crowdhuman_yolov5m.pt model to an OpenVINO version that can be used directly in DL Streamer with the Yolo_v7 converter, with no layer cutting required (the commands are collected after this list).
  • #396: Segfault when reusing the same model with the same model-instance-id. Fix: the 2024.0 version supports API 2.0; please check it and raise a new issue if the problem is still present.
  • #404: How to generate a model-proc file for YOLOv8? Added as a feature in this release.
  • #406: YOLOX support. Added as a feature in this release.
  • #409: ERROR from element /GstPipeline:pipeline0/GstGvaDetect:gvadetect0: base_inference plugin initialization failed. Fix: suggested temporary workaround is to run the container image as the root user, for example docker run -it -u 0 [...add your other parameters here...], to get more permissions.
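
For reference, the conversion commands from the fix for issue #394 are collected below. crowdhuman_yolov5m.pt is the user's custom weights file and is assumed to already be available in the working directory.

    # Convert crowdhuman_yolov5m.pt to an OpenVINO IR usable with the Yolo_v7 converter (issue #394).
    git clone https://github.com/ultralytics/yolov5
    cd yolov5
    pip install -r requirements.txt openvino-dev
    python export.py --weights crowdhuman_yolov5m.pt --include openvino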

System Requirements

Please refer to the Intel® DL Streamer documentation.

Installation Notes

There are several installation options for Pipeline Framework:

  1. Install Pipeline Framework from pre-built Debian packages
  2. Build a Docker image from the Docker file and run the Docker image (a minimal sketch is shown after this list)
  3. Build Pipeline Framework from source code
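
As a rough sketch of option 2, assuming the published Docker file has been downloaded to the current directory; the image tag and file name below are placeholders, not values defined by this release:

    # Build a local image from the published Docker file and start an interactive container.
    docker build -t dlstreamer:2024.0.1 -f Dockerfile .
    docker run -it --rm dlstreamer:2024.0.1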

For more detailed instructions, please refer to the Intel® DL Streamer Pipeline Framework installation guide.

Samples

The samples folder in Intel® DL Streamer Pipeline Framework repository contains command line, C++ and Python examples.

Legal Information

No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.

Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade.

This document contains information on products, services and/or processes in development. All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest forecast, schedule, specifications and roadmaps.

The products and services described may contain defects or errors which may cause deviations from published specifications. Current characterized errata are available on request.

Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.

*Other names and brands may be claimed as the property of others.

© 2024 Intel Corporation.