RACPIT

NumPy PyTorch Pandas

This repository contains supplementary material for our article "Improving Radar Human Activity Classification Using Synthetic Data with Image Transformation", published in MDPI Sensors as part of the Special Issue "Advances in Radar Sensors". There we introduce RACPIT: Radar Activity Classification with Perceptual Image Transformation, a deep-learning approach to human activity classification using FMCW radar and enhanced with synthetic data.

Background

Radar data

We use Range-Doppler Maps (RDMs) as the basis for our input data. These can be either real data, acquired with Infineon's Radar sensors for IoT, or synthetic data, generated from kinematic data with the following model:

$\Large s\left(t\right)=\sum_{k}{\sqrt{\frac{A_{k,t}}{L_{k,t}}}\sin{\left(2\pi f_{k,t}t+\phi_{k,t}\right)}}$

Human reflection model

$A_{k,t}$, $L_{k,t}$, $f_{k,t}$ and $\phi_{k,t}$ represent the radar cross section, free-space path loss, instantaneous frequency and instantaneous phase, respectively, of the returned and mixed-down signal for every modelled human limb $k$ and time instant $t$. The latter three parameters depend on the instantaneous distance of the limb to the radar sensor, $d_{k,t}$, and are calculated using the customary radar and FMCW equations.

Simulation animation
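
As a rough illustration of this model (a minimal sketch, not the repository's simulation code), the sum over limbs can be written in NumPy as follows; the function name and array shapes are assumptions:

import numpy as np

# Hypothetical sketch of the reflection model: t is a (T,) time axis and
# A, L, f, phi are (K, T) arrays holding the radar cross section, free-space
# path loss, instantaneous frequency and phase of each modelled limb k.
def synthetic_signal(t, A, L, f, phi):
    return np.sum(np.sqrt(A / L) * np.sin(2 * np.pi * f * t + phi), axis=0)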

We further preprocess the RDMs by stacking them over time and summing over the Doppler and range axes to obtain range and Doppler spectrograms, respectively:

Radar spectrogram extraction
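
Conceptually, this marginalization step looks like the following NumPy sketch; the array shape and axis order are assumptions, not the repository's actual data layout:

import numpy as np

# Placeholder stack of RDM magnitudes with assumed shape (time, range, doppler)
rdms = np.abs(np.random.randn(128, 64, 32))
range_spectrogram = rdms.sum(axis=2)     # sum out Doppler -> (time, range)
doppler_spectrogram = rdms.sum(axis=1)   # sum out range   -> (time, doppler)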

Deep learning

We train our image transformation networks with an adapted version of Perceptual Losses for Real-Time Style Transfer and Super-Resolution.

RACPIT model

Since we work with radar data, we replace VGG16 as the perceptual network with our two-branch convolutional neural network from Domain Adaptation Across Configurations of FMCW Radar for Deep Learning Based Human Activity Classification.

If we train the image transformation networks with real data as the input and synthetic data as the ground truth, the networks learn a denoising behavior.
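
The core idea of a perceptual (feature reconstruction) loss with a swapped-in radar CNN can be sketched as follows; transform_net, feature_net and the plain MSE over a single feature map are placeholders for illustration, not the repository's exact training code:

import torch
import torch.nn.functional as F

# Hypothetical sketch: transform_net is the image transformation network and
# feature_net stands in for the two-branch radar CNN used as the perceptual
# network. The loss compares feature maps of the transformed real RDM with
# those of the synthetic ground truth.
def perceptual_loss(transform_net, feature_net, real_rdm, synthetic_rdm):
    transformed = transform_net(real_rdm)
    with torch.no_grad():
        target_features = feature_net(synthetic_rdm)
    output_features = feature_net(transformed)
    return F.mse_loss(output_features, target_features)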

Implementation

The code is written for PyTorch and builds on Daniel Yang's implementation of perceptual loss.

Data preprocessing is heavily based on xarray. You can take a closer look at it in our example.

Prerequisites

Usage

Radar data can be batch-preprocessed and stored for faster training:

$ python utils/preprocess.py --raw "/path/to/data/raw" --output "/path/to/data/real" --value "db" --marginalize "incoherent"
$ python utils/preprocess.py --raw "/path/to/data/raw" --output "/path/to/data/synthetic" --synthetic --value "db" --marginalize "incoherent"
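
For reference, the "db" and "incoherent" options roughly correspond to the following xarray operations (a loose sketch; the dimension names and the exact order of operations are assumptions):

import numpy as np
import xarray as xr

# Assumed stack of RDM magnitudes with named dimensions
rdms = xr.DataArray(np.abs(np.random.randn(128, 64, 32)),
                    dims=("time", "range", "doppler"))
rdms_db = 20 * np.log10(rdms)          # --value "db": magnitude in dB
range_spec = rdms.sum(dim="doppler")   # --marginalize "incoherent": sum magnitudes over Doppler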

After this, you can train the CNN that will serve as the perceptual network:

$ python main.py --log "cnn" train-classify --range --config "I" --gpu 0 --no-split --dataset "/path/to/data/synthetic"

Then you can train the image transformation networks:

$ python main.py --log "trans" train-transfer --range --config "I" --gpu 0 --visualize 5 --input "/path/to/data/real" --output "/path/to/data/synthetic" --recordings first --model "models/cnn.model"

And finally test the whole pipeline:

$ python main.py test --range --config "I" --gpu 0 --visualize 10 --dataset "/path/to/data/real" --recordings last --transformer "models/trans.model" --model "models/cnn.model"

Citation

If you use RACPIT's code or take our publication as a reference for your research, please cite our work as follows:

@Article{s22041519,
AUTHOR = {Hernang{\'o}mez, Rodrigo and Visentin, Tristan and Servadei, Lorenzo and Khodabakhshandeh, Hamid and Sta{\'n}czak, S{\l}awomir},
TITLE = {Improving Radar Human Activity Classification Using Synthetic Data with Image Transformation},
JOURNAL = {Sensors},
VOLUME = {22},
YEAR = {2022},
NUMBER = {4},
ARTICLE-NUMBER = {1519},
URL = {https://www.mdpi.com/1424-8220/22/4/1519},
ISSN = {1424-8220},
DOI = {10.3390/s22041519}
}
