# Multi-camera support #293

Draft · wants to merge 38 commits into base: `master`

Commits (38):
- `2daf7ba` Define a UI to initialize a coordinate transformation (Feb 2, 2024)
- `b912a56` Use motion estimator and mask generator when using videos (Feb 5, 2024)
- `a0402f5` Add Ignore button (Feb 6, 2024)
- `3bd7833` Add button to reset video (go to frame 1) (Feb 6, 2024)
- `568ccc5` Delete points when changing the image (Feb 6, 2024)
- `ad5e190` Add logo to UI (Feb 6, 2024)
- `411718f` Add gray background to canvas (Feb 6, 2024)
- `260cba3` Create method to associate trackers from different cameras (Feb 7, 2024)
- `4951bb3` Allow for different motion estimators for reference and footage (Feb 7, 2024)
- `18a6baa` Make consistent writing in annotations buttons (Feb 7, 2024)
- `70c7b97` Make transformation available in the transformation getter (Feb 7, 2024)
- `edc1321` Swap absolute and relative (Feb 8, 2024)
- `f1eaf56` Fix error whenever no tracker is passed to the clusterizer (Feb 8, 2024)
- `06bf546` Add rel_to_abs transformation in tracker (Feb 8, 2024)
- `55e2b61` Change place of the logo (Feb 8, 2024)
- `e19e6e0` Create demo (Feb 8, 2024)
- `d23466b` Decrease one vote when growing or splitting (Feb 8, 2024)
- `7b8a286` Add a few warnings, and fix the output size (Feb 8, 2024)
- `604c679` Add reference name in the pickle (Feb 8, 2024)
- `4c166c6` Add first version of README (Feb 8, 2024)
- `6002348` Add option to set image width and height in the UI (Feb 9, 2024)
- `f9182f6` Add buttons to resize images (Feb 21, 2024)
- `ee91381` When loading transformation, update finish button (Feb 21, 2024)
- `3236a0d` Adjust the way the ids are assigned when growing/splitting (Feb 28, 2024)
- `3a2c9e9` Remove pickles in demo, add invert button to UI (Feb 28, 2024)
- `34b14ec` Add an initialization delay for the clusters (Feb 28, 2024)
- `e001d57` Change default clusterizer delay (Feb 29, 2024)
- `5c7b4c5` Update multi camera documentation (Mar 1, 2024)
- `20949c0` Dockerize demo (Mar 1, 2024)
- `912e0f6` Round FPS to 2 decimal digits (Mar 5, 2024)
- `d5347e0` Combine embeddings with spatial distance (Mar 6, 2024)
- `93340cf` Make it work with reid (Mar 8, 2024)
- `cdd3e58` Add option to only use alive objects (Mar 8, 2024)
- `6eb462e` Don't use distance functions for unmatched trackers (Mar 8, 2024)
- `7c187a2` Keep id in cluster with greatest hit_counter when splitting (Mar 9, 2024)
- `b15f4a3` Update id criteria for splitting (Mar 9, 2024)
- `2680003` Use different absolute frames per video, and combine to clusterize (aguscas, Mar 21, 2024)
- `d871031` Fix bug of Nonetype cluster referenced when splitting (aguscas, Mar 22, 2024)
6 changes: 6 additions & 0 deletions demos/multi_camera/Dockerfile
@@ -0,0 +1,6 @@
FROM ultralytics/yolov5:v6.2

# Install Norfair
RUN pip install git+https://github.com/tryolabs/norfair.git@master#egg=norfair

WORKDIR /demo/src/
64 changes: 64 additions & 0 deletions demos/multi_camera/README.md
@@ -0,0 +1,64 @@
# Multi-Camera Demo

In this example, we show how to associate trackers across different synchronized videos in Norfair.

Why would we want that?

- When a subject being tracked goes out of frame in one video, you might still be able to track it, and recognize it as the same individual, as long as it remains visible in another video.
- To map footage from one or many videos to a common reference frame. For example, when watching a soccer match you might want to combine the information from different cameras and show the positions of the players from a top-down view.

## Example 1: Associating different videos

This method will allow you to associate trackers from different footage of the same scene. You can use as many videos as you want.

```bash
python3 demo.py video1.mp4 video2.mp4 video3.mp4
```

A UI will appear to associate points in `video1.mp4` with points in the other videos, to set `video1.mp4` as a common frame of reference.

If the cameras move, you should also use the `--use-motion-estimator-footage` flag to compensate for camera movement.
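The annotated point pairs are what determines the transformation between videos. As a rough sketch of the idea (not Norfair's actual implementation), an affine transform can be estimated from three or more pairs by least squares:

```python
import numpy as np

def estimate_affine(footage_pts, reference_pts):
    # Solve the least-squares system [x, y, 1] @ M.T ~= [x', y'] for a 2x3 affine M.
    src = np.asarray(footage_pts, dtype=float)
    dst = np.asarray(reference_pts, dtype=float)
    A = np.hstack([src, np.ones((len(src), 1))])
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M.T

def apply_affine(M, point):
    x, y = point
    out = M @ np.array([x, y, 1.0])
    return (float(out[0]), float(out[1]))

# Three or more non-collinear pairs determine an affine transform;
# these hypothetical pairs encode a pure translation by (10, 20).
footage = [(0, 0), (100, 0), (0, 100)]
reference = [(10, 20), (110, 20), (10, 120)]
M = estimate_affine(footage, reference)
print(apply_affine(M, (50, 50)))  # close to (60.0, 70.0)
```

The demo handles the estimation internally from the points you click in the UI; this only illustrates why matching point pairs are needed.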

## Example 2: Creating a new perspective

This method will allow you to associate trackers from different footage of the same scene, and create a new perspective of the scene which didn't exist in those videos. You can use as many videos as you want, and you also need to provide one reference (either an image or a video) corresponding to the new perspective. In the soccer example, the reference could be a top-down view of a soccer field.

```bash
python3 demo.py video1.mp4 video2.mp4 video3.mp4 --reference path_to_reference_file
```

As before, you will have to use the UI.

If the videos you are tracking in have camera movement, you should also use the `--use-motion-estimator-footage` flag to account for it.

If you are using a video for the reference file, and the camera moves in the reference, then you should use the `--use-motion-estimator-reference` flag.


For additional settings, you may display the instructions using `python demo.py --help`.


## UI usage

The purpose of the UI is to annotate pairs of matching points in the reference and the footage (either images or videos), in order to estimate a transformation between them.

- To add a point, click a pair of matching points (one in the footage window, one in the reference window) and select `"Add"`.
- To remove a point, select it in the bottom left corner and select `"Remove"`.
- To ignore a point, click it and select `"Ignore"`. The transformation will not use ignored points.
- To unignore a previously ignored point, click it and select `"Unignore"`.

To resize the footage or the reference image, you can use the `"+"` and `"-"` buttons in the `'Resize footage'` and `'Resize reference'` sections of the Menu.

If either the footage or the reference is a video, you can jump ahead to pick matching points in future frames.
For example, to jump 215 frames ahead in the footage, write that number next to `'Frames to skip (footage)'` and select `"Skip frames"`.

You can go back to the first frame of the video (in either footage or reference) by selecting "Reset video".

Once a transformation has been estimated (the `"Finished"` button turns green), you can test it:
select the `"Test"` mode, pick a point in either the reference or the footage, and see the associated point appear in the other window.
You can go back to the `"Annotate"` mode and keep adding associated points until you are satisfied with the estimated transformation.
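Conceptually, testing applies the estimated transformation to the clicked point. For a homography, assuming a known 3x3 matrix `H` (illustrative only; the actual transformation type and API are handled by the demo), the mapping works like this:

```python
import numpy as np

def warp_point(H, point):
    # Apply a 3x3 homography to a 2D point, normalizing the homogeneous coordinate.
    x, y = point
    px, py, pw = H @ np.array([x, y, 1.0])
    return (float(px / pw), float(py / pw))

# A hypothetical homography that just translates every point by (5, -3).
H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, -3.0],
              [0.0, 0.0, 1.0]])
print(warp_point(H, (10.0, 10.0)))  # (15.0, 7.0)
```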

You can also save the state (points and transformation you have) to a `.pkl` file using the `"Save"` button, so that you can later load that state from the UI with the `"Load"` button.
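The exact contents of the demo's `.pkl` file are internal to the UI, but the save/load round trip is plain Python pickling; a generic sketch with a hypothetical state layout:

```python
import os
import pickle
import tempfile

# Hypothetical state layout -- the real contents of the UI's .pkl are internal.
state = {
    "footage_points": [(120, 340), (480, 355)],
    "reference_points": [(10, 50), (210, 48)],
    "ignored": [False, False],
}

path = os.path.join(tempfile.gettempdir(), "annotation_state.pkl")
with open(path, "wb") as f:
    pickle.dump(state, f)
with open(path, "rb") as f:
    restored = pickle.load(f)
print(restored == state)  # True
```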

You can swap the reference points with the footage points (inverting the transformation) with the `"Invert"` button. This is particularly useful if you previously saved a state in which the current footage was used as the reference, and vice versa.
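Inverting the transformation amounts to inverting the underlying matrix. For a homography, assuming an invertible 3x3 `H` (illustrative only, not the demo's code):

```python
import numpy as np

# A hypothetical invertible 3x3 homography.
H = np.array([[1.2, 0.0, 30.0],
              [0.1, 0.9, -12.0],
              [0.0, 0.0, 1.0]])
H_inv = np.linalg.inv(H)

# Mapping a point forward and then backward recovers the original point.
p = np.array([50.0, 80.0, 1.0])
back = H_inv @ (H @ p)
print(np.allclose(back / back[2], p))  # True
```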

Once you are happy with the transformation, just click on `"Finished"`.
1 change: 1 addition & 0 deletions demos/multi_camera/requirements.txt
@@ -0,0 +1 @@
yolov5==6.1.8
8 changes: 8 additions & 0 deletions demos/multi_camera/run_gpu.sh
@@ -0,0 +1,8 @@
#!/usr/bin/env -S bash -e
docker build . -t norfair-multicamera
docker run -it --rm \
--gpus all \
--shm-size=1gb \
-v $(realpath .):/demo \
norfair-multicamera \
bash