Kalman Filter Tracker

Unofficial implementation of SORT, a simple online and realtime tracking algorithm for 2D multiple object tracking in video sequences.

SORT is built around a discrete Kalman filter that represents each object being tracked.

The Kalman filter estimates the state vector X of a discrete-time controlled process governed by a linear stochastic difference equation.


Using Kalman Tracker (SORT) with YOLOV4

Colab example (Oxford TownCentre, enable GPU): Open In Colab

Prerequisites

  • NumPy
  • SciPy (scipy.optimize.linear_sum_assignment)
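SORT-style matching pairs detections with predicted track boxes by IoU and solves the assignment with the Hungarian algorithm via scipy.optimize.linear_sum_assignment. A minimal sketch of that idea (illustrative only, not the repository's code; the function names are made up):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """IoU of two boxes in [xmin, ymin, xmax, ymax] format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def match(detections, tracks, iou_threshold=0.3):
    """Return (detection_idx, track_idx) pairs whose IoU clears the threshold."""
    cost = np.array([[-iou(d, t) for t in tracks] for d in detections])
    rows, cols = linear_sum_assignment(cost)  # minimizing -IoU maximizes total IoU
    return [(r, c) for r, c in zip(rows, cols) if -cost[r, c] >= iou_threshold]
```

Pairs whose best assignment still falls below the IoU threshold are rejected, which is what makes unmatched detections spawn new tracks and unmatched tracks count as missed.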


  • removeTrackAfternFramesThres - number of frames a missed object is kept (and re-matching attempted) before its track is deleted
  • uncertaintyCount - number of frames before an object is considered a confirmed tracker (a track can be Confirmed ("C"), Uncertain ("U"), or Missed ("M"))
```shell
$ git clone https://github.com/marwankefah/Kalman-Tracking-Single-Camera
```

```python
from Kalman_Tracking_Single_Camera.src import tracking
from Kalman_Tracking_Single_Camera.src import helpers as hp

tracker = tracking.KalmanTracking(IOUThreshold=0.3,
                                  removeTrackAfternFramesThres=40,
                                  uncertaintyCount=1)

# Loop over frames:
#   get detections from your detection model as [[xminNew, yminNew, xmaxNew, ymaxNew], ...]
#   match() returns the trackers in the confirmed state as
#   [[id, [xminNew, yminNew, xmaxNew, ymaxNew]], ...]
trackers = tracker.match(detections, state="C")
```

Tracking with constant velocity (linear observation model)

Equation of Motion

  • y''(t) = a = 0 (constant-velocity model)

  • y'(t) = y'(t0) + a (t - t0)

  • y(t) = y(t0) + y'(t0) (t - t0) + (a/2) (t - t0)^2
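A quick numerical check of the motion equations above (plain Python; the function name is illustrative):

```python
def position(y0, v0, a, t, t0):
    """y(t) = y(t0) + y'(t0)(t - t0) + (a/2)(t - t0)^2"""
    dt = t - t0
    return y0 + v0 * dt + 0.5 * a * dt ** 2

# Constant-velocity model: a = 0, so the position advances linearly
# with the initial velocity between frames.
```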

State Representation

  • Xt = A Xt-1 + B Ut + Rt

  • Zt = C Xt + Qt

    • X (state of the tracked persons in the system)
    • Z (measurements taken from the object detection model)
    • A (matrix that maps the previous state to the current state)
    • B (matrix that maps actions to states)
    • U (actions taken)
    • C (matrix that maps the state to the measurement space)
    • R (process noise, coming from the motion model)
    • Q (measurement noise, coming from the object detection model)

The Q and R covariance matrices are assumed to be independent and normally distributed. Q is chosen to be small, as it correlates with how well your object detection model performs. R was assumed and adopted from the official SORT repository. However, one could also try to estimate these matrices from this paper.
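The predict/update cycle implied by these equations can be sketched with NumPy, keeping this document's convention that R is the process-noise covariance and Q the measurement-noise covariance. This is a minimal illustration, not the repository's implementation: dt is taken as one frame, there is no control input (B U = 0), and the diagonal covariance values are placeholders rather than the tuned SORT values:

```python
import numpy as np

dim = 4  # measured part [X, Y, A, H]; full state appends [Vx, Vy, Va, Vh]

# A: constant-velocity transition, x_t = A x_{t-1} with dt = 1 frame
A = np.eye(2 * dim)
A[:dim, dim:] = np.eye(dim)

# C: observation matrix, z_t = C x_t (only [X, Y, A, H] are measured)
C = np.eye(dim, 2 * dim)

R = np.eye(2 * dim) * 1e-2  # process-noise covariance (placeholder values)
Q = np.eye(dim) * 1e-1      # measurement-noise covariance (placeholder values)

def predict(x, P):
    """Project the state and its covariance one frame ahead."""
    return A @ x, A @ P @ A.T + R

def update(x, P, z):
    """Correct the prediction with a measurement z = [X, Y, A, H]."""
    S = C @ P @ C.T + Q             # innovation covariance
    K = P @ C.T @ np.linalg.inv(S)  # Kalman gain
    x = x + K @ (z - C @ x)
    P = (np.eye(2 * dim) - K @ C) @ P
    return x, P
```

If the measurement exactly matches the predicted box, the innovation is zero and the update leaves the state unchanged; otherwise the gain K blends prediction and measurement according to their relative uncertainties.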

  • The state X is (n x 1), with n = 8 dimensions: [X, Y, A, H, Vx, Vy, Va, Vh]

  • The measurement vector Z is [X, Y, A, H] (from the object detection model)

    • X (bounding box center position along the x axis)
    • Y (bounding box center position along the y axis)
    • A (aspect ratio of the bounding box, computed as width/height)
    • H (height of the bounding box)
    • Vx, Vy, Va, Vh (the respective rates of change (velocities) of the variables above)
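Detectors emit [xmin, ymin, xmax, ymax] boxes while the filter measures [X, Y, A, H], so a conversion is needed in each direction. A sketch of those conversions (illustrative helpers, not the repository's helpers module):

```python
def to_measurement(box):
    """[xmin, ymin, xmax, ymax] -> [x_center, y_center, aspect_ratio, height]."""
    xmin, ymin, xmax, ymax = box
    w, h = xmax - xmin, ymax - ymin
    return [xmin + w / 2.0, ymin + h / 2.0, w / h, h]

def to_box(z):
    """[x_center, y_center, aspect_ratio, height] -> [xmin, ymin, xmax, ymax]."""
    x, y, a, h = z
    w = a * h
    return [x - w / 2.0, y - h / 2.0, x + w / 2.0, y + h / 2.0]
```

The two functions are inverses of each other, so boxes survive a round trip through the measurement space unchanged.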
