
public-tree-counting

Source code for the paper:

Arpit Bahety, Rohit Saluja, Ravi Kiran Sarvadevabhatla, Anbumani Subramanian, and C.V. Jawahar. "Automatic Quantification and Visualization of Street Trees." In Proceedings of the 12th Indian Conference on Computer Vision, Graphics and Image Processing (ICVGIP'21), Chetan Arora, Parag Chaudhuri, and Subhransu Maji (Eds.). ACM, New York, NY, USA, Article 90. [https://cdn.iiit.ac.in/cdn/cvit.iiit.ac.in/images/ConferencePapers/2021/Automatic_tree.pdf]

Presentation and Poster

Presentation Video | Poster

Demo videos of tree detection and counting results

Demo 1 | Demo 2

How to run the code (With visualization)

Input videos need a GPS metadata file, i.e. a corresponding .gpx file. Please put the video and its .gpx file in the same folder.
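
Before running the pipeline, it can help to confirm the .gpx file sits next to the video and actually parses. A minimal sanity check, assuming the third-party gpxpy package (pip install gpxpy); the repo's own scripts may read the file differently:

```python
# Check that the video's GPS metadata file exists and parses.
# gpxpy is an assumption for this sketch, not necessarily what the repo uses.
import os
import gpxpy

video_path = os.path.expanduser("~/Desktop/GH017798.mp4")
gpx_path = os.path.splitext(video_path)[0] + ".gpx"
assert os.path.exists(gpx_path), f"missing GPS metadata: {gpx_path}"

with open(gpx_path) as f:
    gpx = gpxpy.parse(f)
points = [p for trk in gpx.tracks for seg in trk.segments for p in seg.points]
print(f"{len(points)} track points; first fix "
      f"({points[0].latitude}, {points[0].longitude}) at {points[0].time}")
```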

  1. Clone the repo and cd into it.
  2. Create a virtual environment and install the dependencies: pip install -r requirements.txt
  3. python preprocess.py --video-path {path of your video file} --gpx-filename {gpx filename} --segment-duration {duration of video segments in seconds}
    Example: python preprocess.py --video-path ~/Desktop/GH017798.mp4 --gpx-filename GH017798.gpx --segment-duration 180
    The output of this command will be (a sketch of the segmentation idea follows the screenshot):

(screenshot of the preprocess.py output)
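
Under the hood, this step splits the full ride video into segments of the requested duration. Purely to illustrate the idea (not preprocess.py's actual code), a sketch that shells out to ffmpeg, assuming ffmpeg is installed:

```python
# Illustrative sketch: split a video into fixed-length segments with ffmpeg.
# Assumes ffmpeg is on PATH; preprocess.py's real implementation may differ.
import subprocess
from pathlib import Path

def split_video(video_path: str, segment_duration: int) -> Path:
    video = Path(video_path).expanduser()
    out_dir = video.with_suffix("")              # e.g. ~/Desktop/GH017798/
    out_dir.mkdir(exist_ok=True)
    subprocess.run([
        "ffmpeg", "-i", str(video),
        "-c", "copy", "-map", "0",               # stream copy, no re-encode
        "-f", "segment",
        "-segment_time", str(segment_duration),
        "-reset_timestamps", "1",
        str(out_dir / f"{video.stem}_%03d{video.suffix}"),
    ], check=True)
    return out_dir

split_video("~/Desktop/GH017798.mp4", 180)
```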

  4. python run_inference.py --path {path of the folder created in step 3}
    Example: python run_inference.py --path ~/Desktop/GH017798/
  5. python create_gps_points.py --path {path of the folder created in step 3} (see the interpolation sketch after this list)
    Example: python create_gps_points.py --path ~/Desktop/GH017798/
  6. To generate the Category Map: python create_map.py
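
Conceptually, attaching a GPS location to each counted tree comes down to interpolating the .gpx track at the detection's timestamp. A hedged sketch of that idea; the track format and function below are illustrative, not create_gps_points.py's actual interface:

```python
# Illustrative sketch: linearly interpolate the GPS position at a video
# timestamp from (seconds_from_start, lat, lon) track points sorted by time.
# create_gps_points.py may implement this differently.
from bisect import bisect_left

def gps_at(t: float, track: list[tuple[float, float, float]]) -> tuple[float, float]:
    times = [p[0] for p in track]
    i = bisect_left(times, t)
    if i == 0:
        return track[0][1], track[0][2]
    if i == len(track):
        return track[-1][1], track[-1][2]
    (t0, la0, lo0), (t1, la1, lo1) = track[i - 1], track[i]
    w = (t - t0) / (t1 - t0)
    return la0 + w * (la1 - la0), lo0 + w * (lo1 - lo0)

track = [(0.0, 17.4450, 78.3489), (10.0, 17.4452, 78.3495)]
print(gps_at(4.0, track))  # position 4 s into the ride
```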

To generate the Kernel Density Ranking map (Density Map)

Here we don't need to divide the full video into smaller segments, since we don't care about color-coding every segment; we only need to store the GPS locations of the counted trees. (As of now the Density Map generation has to be performed separately due to an inefficient implementation; this can be improved later.) Note: you need the "points.txt" file for this step, which is produced by create_gps_points.py (step 5) in the previous section.

  1. Comment out the following line in detect.py: print(text, file=open('tree_count.txt', "a+"))
  2. python detect.py --source {path of your video file} --weights runs/train/exp7/weights/best.pt
    Example: python detect.py --source ~/Desktop/GH017798.mp4 --weights runs/train/exp7/weights/best.pt
    This outputs a file, "tree_gps.txt". Copy this file into the folder (for our example): ~/Desktop/GH017798/
  3. python preprocess_kdr.py --path {path to the folder}
    Example: python preprocess_kdr.py --path ~/Desktop/GH017798/
    This outputs a file, "tree_density.txt".
  4. Open RStudio and run DR_demo.R (a rough Python analogue of the density step is sketched after this list).
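
DR_demo.R does the actual map rendering, but to get a feel for what a kernel density score over the counted trees looks like, here is a rough Python analogue using scipy's Gaussian KDE. scipy and the sample coordinates are assumptions for illustration; the R script's kernel and bandwidth may differ:

```python
# Rough analogue of the density step: score each tree's GPS location by a
# Gaussian KDE fitted over all tree locations, then rank by density.
# The coordinates below are made up for the example.
import numpy as np
from scipy.stats import gaussian_kde

trees = np.array([           # (lat, lon), e.g. parsed from tree_gps.txt
    [17.4450, 78.3489],
    [17.4451, 78.3490],
    [17.4455, 78.3502],
    [17.4460, 78.3510],
])

kde = gaussian_kde(trees.T)            # gaussian_kde expects (n_dims, n_points)
density = kde(trees.T)                 # density at each tree location
rank = density.argsort().argsort()     # 0 = sparsest ... n-1 = densest
for (lat, lon), r in zip(trees, rank):
    print(f"({lat:.4f}, {lon:.4f}) density rank {r}")
```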

Tree detection results

The final model we use is YOLOv5l.

| Model       | AP@50  | MAE  | TCDCA  |
|-------------|--------|------|--------|
| Faster RCNN | 81.09% | 6.12 | 74.19% |
| YOLOv4      | 82.50% | 4.35 | 90.32% |
| YOLOv5s     | 79.29% | 7.22 | 67.74% |
| YOLOv5l     | 83.74% | 3.09 | 96.77% |
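
For reference, MAE here is presumably the mean absolute error between predicted and ground-truth tree counts per test video (see the paper for the exact protocol). A tiny reproduction of that metric, with hypothetical counts:

```python
# Mean absolute error over per-video tree counts (hypothetical numbers,
# not the paper's data).
true_counts = [52, 40, 61]
pred_counts = [50, 43, 60]
mae = sum(abs(t - p) for t, p in zip(true_counts, pred_counts)) / len(true_counts)
print(f"MAE = {mae:.2f}")  # -> 2.00
```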

Category map

(image: Category map)

Density map

(image: Density map)

Citation

If you use our code, please cite the paper using the following BibTeX entry:

@inproceedings{bahety2021automatic,
  title={Automatic {Q}uantification and {V}isualization of {S}treet {T}rees},
  author={Bahety, Arpit and Saluja, Rohit and Sarvadevabhatla, Ravi Kiran and Subramanian, Anbumani and Jawahar, CV},
  booktitle={Proceedings of the Twelfth Indian Conference on Computer Vision, Graphics and Image Processing},
  pages={1--9},
  year={2021}
}