
Subway Station Hazard Detection, Goethe University Frankfurt (Spring 2020)

General Information

Instructors:

Institutions:

Project team (A-Z):

  • Pascal Fischer
  • Felix Hoffman
  • Edis Kurtanovic
  • Martin Ludwig
  • Alen Smajic

Publications

Tools

  • Python 3
  • PyTorch Framework
  • Unity3D
  • C#
  • Blender
  • OpenCV

Project Description

Recently there was a nationwide scandal about an incident at Frankfurt Central Station, in which a boy was pushed onto the railroad tracks in front of an arriving train and lost his life due to his injuries. This incident is not the only one of its kind; such accidents occur from time to time at train stations. At the moment there are no security systems to prevent such accidents, and cameras exist only to provide evidence or clarification after an incident has already happened. In this project, we developed a first prototype of a security system which uses the surveillance cameras at subway stations in combination with the latest deep learning computer vision methods, integrating them into an end-to-end system that can recognize dangerous situations and initiate measures to prevent further consequences. Furthermore, we present a 3D subway station simulation, developed in Unity3D, which generates entire train station environments and scenes using an algorithm driven by a real-data-based distribution of persons. This simulation is then used to generate training data for the deep learning model.

Subway Station Simulation

For our simulation we developed 10 different types of subway stations, covering the most common station architectures. Each station has between 1 and 2 platforms and between 1 and 4 tracks. Every station type was manually textured in 5 different variations, yielding a total of 50 unique station environments.

Furthermore, we include a variety of different human models as well as station objects like benches, snack machines, stairs, rubbish bins, etc. Using our Script-UI you can further expand the number of different station objects and station types. Once you press the Unity play button, the algorithm starts to generate the station environments by randomly generating and placing the human models and station objects along the subway station. Once a scenario is generated, the algorithm takes a screenshot and saves the image to a predefined folder within the project folder. Since we are training a semantic segmentation algorithm, we also need to generate the ground-truth labels. To do so, our algorithm replaces all station objects and the station itself with white-textured versions of those objects. The human models are replaced with green, yellow, and red versions based on their location within the subway station: if they are standing on the railroad tracks, they are painted red; if they are standing in front of the security line, they are painted yellow; in all other cases, they are painted green. Finally, our algorithm takes another screenshot and stores the image as the ground-truth label in a separate folder within the project folder.
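
The coloring rule itself is easy to state. As an illustration, here is a minimal Python sketch of the decision logic described above; the actual implementation lives in the Unity C# simulation scripts, and all names and coordinates here are hypothetical:

    # Hypothetical sketch of the ground-truth coloring rule described above.
    # The real logic is implemented in the Unity (C#) simulation; the zone
    # boundaries below are made-up illustrations.
    SAFE, WARNING, DANGER = (0, 255, 0), (255, 255, 0), (255, 0, 0)

    def label_color(person_z, security_line_z, platform_edge_z):
        """Map a character's depth coordinate to its segmentation color."""
        if person_z > platform_edge_z:    # standing on the railroad tracks
            return DANGER                 # red
        if person_z > security_line_z:    # between security line and edge
            return WARNING                # yellow
        return SAFE                       # anywhere else in the station

    print(label_color(person_z=3.0, security_line_z=2.0, platform_edge_z=2.5))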

On the right side you can see our Script-UI, which is used to control the simulation. It contains a camera object which is used to take the screenshots. The options work as follows:

  • "Use Distribution": when this checkbox is activated, the simulation produces a more realistic scenario in which the persons are distributed along the station using a real-data-based distribution (you can read more about it in the report).
  • "Create Samples": creates a random scene from the simulation and freezes it (mostly used for testing purposes when we add new objects to the simulation).
  • "Type Index": specifies which station type is used as background. Because of memory space issues, our algorithm works with only one station type (out of 50) at a time.
  • "Number of Types": specifies how many different station environments should be generated.
  • "Number of Scenes": specifies how many different scenes with people should be generated.
  • The following 8 options (starting with "Min Persons" and ending with "Max Snacks") bound how many instances of the different object classes are generated. The min options specify the minimum and the max options the maximum number of objects generated for the scenario; the algorithm randomly picks a number in between (see the sketch after this list).
  • The dropdown options below are simple lists which store each GameObject used for generating the scene. It is very important that every GameObject is assigned to the correct list. Notice that there are 4 different "Chars" lists: every human model must also have its green, yellow, and red painted twin for the segmentation scenario. The same applies to the other station objects, which are painted white because they represent the background (these objects are stored in the "Target" lists).
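
For illustration, the min/max thresholds behave like a per-class uniform draw. A minimal Python sketch of this step (the real draw happens in the C# generator; class names and ranges here are hypothetical):

    import random

    # Hypothetical thresholds mirroring the "Min ..." / "Max ..." UI fields.
    thresholds = {
        "persons": (5, 30),
        "benches": (0, 4),
        "bins":    (0, 6),
        "snacks":  (0, 2),  # snack machines
    }

    # For each class, the generator picks a count uniformly between min and max.
    counts = {cls: random.randint(lo, hi) for cls, (lo, hi) in thresholds.items()}
    print(counts)  # e.g. {'persons': 17, 'benches': 2, 'bins': 0, 'snacks': 1}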

Datasets

  • You can download our dataset with randomly distributed characters here
  • You can download our dataset with a real data-based distribution of characters here

Semantic Segmentation using SegNet

For detecting dangerous situations at subway stations, we use semantic segmentation to classify each pixel of an image into one of the following classes (a sketch mapping these colors to class indices follows the list):

  • white - background
  • black - security line
  • green - characters in safe area
  • yellow - characters crossing the security line
  • red - characters in the dangerous area (railroads)
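
Since the ground truth is stored as color images, training on them requires mapping each pixel color to a class index at some point. The repository's own preprocessing is not shown here, so the following Python sketch is only an assumed illustration of that step (the exact RGB values are assumptions):

    import numpy as np

    # Assumed RGB coding for the five classes listed above.
    COLOR_TO_CLASS = {
        (255, 255, 255): 0,  # white  - background
        (0, 0, 0):       1,  # black  - security line
        (0, 255, 0):     2,  # green  - characters in safe area
        (255, 255, 0):   3,  # yellow - characters crossing the security line
        (255, 0, 0):     4,  # red    - characters in the dangerous area
    }

    def rgb_label_to_classes(label_img: np.ndarray) -> np.ndarray:
        """Convert an (H, W, 3) RGB ground-truth image to an (H, W) index map."""
        classes = np.zeros(label_img.shape[:2], dtype=np.int64)
        for color, idx in COLOR_TO_CLASS.items():
            classes[np.all(label_img == np.array(color), axis=-1)] = idx
        return classes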

Training

For training the SegNet there are 2 scripts available:

  • SUBWAY_SEGMENTATION.py
  • Subwaystation_Segmentation.ipynb

For both scripts you need a folder with the input images and another with the target images. To start the training, execute:

python3 SUBWAY_SEGMENTATION.py

with the following parameters

  • --input path to img input data directory
  • --target path to img target data directory
  • --content path where the train/validation tensors, model_weights, losses and validation results will be saved, default="/"
  • --train_tensor_size number of images per training tensor (should be True: train_tensor_size % batch_size == 0)
  • --val_tensor_size number of images per validation tensor (should be True: val_tensor_size % batch_size == 0)
  • --num_train_tensors number of train tensors (should be True: train_tensor_size * num_train_tensors + val_tensor_size == |images|)
  • --model_weights path where your model weights will be loaded, if not defined new weights will be initialized
  • --epochs number of training epochs, default=50
  • --batch_size batch size for training, default=8
  • --learn_rate learning rate for training, default=0.0001
  • --momentum momentum for stochastic gradient descent, default=0.9
  • --save_cycle save model, loss, validation every save_cycle epochs, default=5
  • --weight_decay weight_decay for stochastic gradient descent, default=4e5 (see the optimizer sketch after this list)
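
The --learn_rate, --momentum, and --weight_decay flags correspond to a standard PyTorch SGD optimizer. A minimal sketch with the documented defaults (the single conv layer is only a stand-in for the SegNet instance):

    import torch
    import torch.nn as nn

    # Stand-in model; any nn.Module works for showing the optimizer setup.
    model = nn.Conv2d(3, 5, kernel_size=3, padding=1)

    # Defaults from the parameter list above: lr=0.0001, momentum=0.9,
    # weight_decay=4e5.
    optimizer = torch.optim.SGD(
        model.parameters(),
        lr=1e-4,
        momentum=0.9,
        weight_decay=4e5,
    )

    # Per-pixel classification over the 5 classes uses cross-entropy.
    criterion = nn.CrossEntropyLoss()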

example execution with 41,000 input/target images (2,000 × 20 training images + 1,000 validation images):

python3 SUBWAY_SEGMENTATION.py --input=data/training --target=data/target --content=data/output --train_tensor_size=2000 --val_tensor_size=1000 --num_train_tensors=20 --model_weights=model.pt

configuration from example execution (the constraint arithmetic is verified in the sketch after this list):

  • input_path=data/training
  • target_path=data/target
  • content_path=data/output
  • batch_size=8
  • train_tensor_size=2000
  • val_tensor_size=1000
  • num_train_tensors=20
  • model_weights=model.pt
  • load_model=True
  • learn_rate=0.0001
  • momentum=0.9
  • weight_decay=400000.0
  • total_epochs=50
  • save_cycle=5
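
The constraints from the parameter list can be checked directly against these values. A small sketch verifying the 41,000-image example:

    train_tensor_size = 2000
    val_tensor_size = 1000
    num_train_tensors = 20
    batch_size = 8          # default
    total_images = 41_000

    # Every tensor must split evenly into batches ...
    assert train_tensor_size % batch_size == 0
    assert val_tensor_size % batch_size == 0
    # ... and the tensors together must account for every image:
    # 2000 * 20 + 1000 == 41000.
    assert train_tensor_size * num_train_tensors + val_tensor_size == total_images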

The Subwaystation_Segmentation.ipynb notebook is the equivalent version for Google Colab. You can set all the parameters in the configuration cell.

Open In Colab

Predict

To predict with the trained model, we provide the Google Colab notebook Subway_Segmentation_Predict.ipynb, which is self-explanatory.

You only have to change the following paths (a prediction sketch follows the list):

  • model weights
  • input image
  • target image
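
Prediction follows the usual PyTorch pattern: load the weights, run a forward pass, and take the per-pixel argmax over the five class channels. The notebook defines the actual SegNet class and preprocessing, so this Python sketch uses a stand-in model and assumed file names:

    import numpy as np
    import torch
    import torch.nn as nn
    from PIL import Image

    # Stand-in for the SegNet defined in the notebook; with the real class you
    # would instead load the downloaded weights:
    #   model.load_state_dict(torch.load("model.pt", map_location="cpu"))
    model = nn.Conv2d(3, 5, kernel_size=3, padding=1)
    model.eval()

    # Load the input image as a (1, 3, H, W) float tensor in [0, 1].
    img = np.asarray(Image.open("input.png").convert("RGB"), dtype=np.float32) / 255.0
    x = torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0)

    with torch.no_grad():
        logits = model(x)                  # (1, 5, H, W) class scores
        prediction = logits.argmax(dim=1)  # (1, H, W) per-pixel class indices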

Check it out:

Open In Colab

Model Weights

You can download the model weights here.

Results
