Alleviating Over-segmentation Errors by Detecting Action Boundaries

This repository is the official implementation of Y. Ishikawa et al., "Alleviating Over-segmentation Errors by Detecting Action Boundaries", WACV 2021.

Dataset

GTEA, 50Salads, Breakfast

You can download the features and ground truth of these datasets from this repository,
or you can extract the features yourself using this repository.

Requirements

  • Python >= 3.7
  • pytorch >= 1.0
  • torchvision
  • pandas
  • numpy
  • Pillow
  • PyYAML

You can install the required packages using requirements.txt:

pip install -r requirements.txt
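
If you want to double-check that the core dependencies are installed, a quick sanity check like the one below can help. This snippet is only an illustration and is not part of the repository; it simply prints the installed versions and whether a CUDA device is visible.

    import torch
    import torchvision

    # print the installed versions and check whether a GPU is visible
    print("pytorch:", torch.__version__)
    print("torchvision:", torchvision.__version__)
    print("cuda available:", torch.cuda.is_available())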

Directory Structure

root ── csv/
      ├─ libs/
      ├─ imgs/
      ├─ result/
      ├─ utils/
      ├─ dataset ─── 50salads/...
      │           ├─ breakfast/...
      │           └─ gtea ─── features/
      │                    ├─ groundTruth/
      │                    ├─ splits/
      │                    └─ mapping.txt
      ├─ .gitignore
      ├─ README.md
      ├─ requirements.txt
      ├─ save_pred.py
      ├─ train.py
      └─ evaluate.py
  • The csv directory contains csv files that are necessary for training and testing.
  • The image in imgs comes from PASCAL VOC and is used as a color palette for visualizing outputs.
  • Experimental results are stored in the result directory.
  • Scripts in utils are not used directly by train.py and evaluate.py, but they are needed for converting labels, generating configuration files, visualization, and so on.
  • Scripts in libs are necessary for training and evaluation, e.g. models, loss functions, dataset classes, and so on.
  • The datasets downloaded from this repository are stored in dataset. You can put them in another directory, but then you need to specify the path in the configuration files (a small check of the expected layout is sketched after this list).
  • train.py is a script for training networks.
  • evaluate.py is a script for evaluation.
  • save_pred.py is for saving predictions from models.
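
If you want to confirm that your dataset directory matches the layout above, a small check like the following can be used. This is only an illustrative sketch, not part of the repository; it assumes that 50salads and breakfast follow the same internal layout as gtea and that the datasets live under ./dataset.

    import os

    dataset_dir = "./dataset"  # change this if you keep the datasets elsewhere
    for name in ["gtea", "50salads", "breakfast"]:
        for sub in ["features", "groundTruth", "splits", "mapping.txt"]:
            path = os.path.join(dataset_dir, name, sub)
            status = "ok" if os.path.exists(path) else "MISSING"
            print(f"{path}: {status}")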

How to use

Please also check scripts/experiment.sh, which runs all of the following experimental commands.

  1. First of all, please download the features and ground truth of these datasets from this repository.

  2. Features and ground-truth labels need to be converted to numpy arrays. This repository does not provide boundary ground-truth labels, so you have to generate them as well. Please run the following commands, where [DATASET_DIR] is the path to your dataset directory (a simplified sketch of how boundary labels are derived from frame labels is given after this list).

    python utils/generate_gt_array.py --dataset_dir [DATASET_DIR]
    python utils/generate_boundary_array.py --dataset_dir [DATASET_DIR]
  3. In this implementation, csv files are used to keep information about the training and test data. Please run the command below to generate the csv files.

    python utils/make_csv_files.py --dataset_dir [DATASET_DIR]
  4. You can automatically generate experiment configuration files by running the following commands. They create directories and configuration files under root_dir (an example of loading a generated configuration is given after this list).

    python utils/make_config.py --root_dir ./result/50salads --dataset 50salads --split 1 2 3 4 5
    python utils/make_config.py --root_dir ./result/gtea --dataset gtea --split 1 2 3 4
    python utils/make_config.py --root_dir ./result/breakfast --dataset breakfast --split 1 2 3 4

    If you want to add other configurations, please add command-line options like:

    python utils/make_config.py --root_dir ./result/50salads --dataset 50salads --split 1 2 3 4 5 --learning_rate 0.1 0.01 0.001 0.0001

    Please see libs/config.py for the available configuration options.

  5. You can train and evaluate models by specifying a configuration file generated in the step above, like:

    python train.py ./result/50salads/dataset-50salads_split-1/config.yaml
    python evaluate.py ./result/50salads/dataset-50salads_split-1/config.yaml test
  6. You can also save model predictions as numpy arrays by running the command below (an example of loading the saved arrays is given after this list):

    python save_pred.py ./result/50salads/dataset-50salads_split-1/config.yaml test
  7. If you want to visualize the saved model predictions, please run:

    python utils/convert_arr2img.py ./result/50salads/dataset-50salads_split-1/predictions
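
For reference, the boundary ground-truth arrays generated in step 2 are derived from the frame-level class labels: a boundary lies where the action class changes. The actual logic lives in utils/generate_boundary_array.py; the snippet below is only a simplified sketch of the idea (the function name is illustrative, and details such as boundary widening or smoothing may differ from the repository's implementation).

    import numpy as np

    def frame_labels_to_boundaries(frame_labels: np.ndarray) -> np.ndarray:
        """Return a binary array that is 1 at frames where the action class changes."""
        boundaries = np.zeros_like(frame_labels, dtype=np.int64)
        boundaries[1:] = (frame_labels[1:] != frame_labels[:-1]).astype(np.int64)
        return boundaries

    # example: three action segments -> two boundaries
    print(frame_labels_to_boundaries(np.array([0, 0, 0, 2, 2, 1, 1])))
    # -> [0 0 0 1 0 1 0]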
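
The configuration files generated in step 4 are YAML files, so they can be inspected or edited programmatically with PyYAML (already listed in the requirements). A minimal sketch, assuming a configuration has already been generated at the path below; see libs/config.py for the full set of options.

    import yaml

    config_path = "./result/50salads/dataset-50salads_split-1/config.yaml"
    with open(config_path) as f:
        config = yaml.safe_load(f)

    # print every setting defined for this experiment
    for key, value in config.items():
        print(f"{key}: {value}")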
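
The predictions saved in step 6 are numpy arrays, so they can be loaded directly for your own analysis. A hedged sketch, assuming the arrays are stored as .npy files inside the predictions directory used in step 7 (the exact file names depend on the video names in the dataset).

    import glob

    import numpy as np

    pred_dir = "./result/50salads/dataset-50salads_split-1/predictions"
    for path in sorted(glob.glob(f"{pred_dir}/*.npy")):
        pred = np.load(path)
        print(path, pred.shape, pred.dtype)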

License

This repository is released under the MIT License.

Citation

Yuchi Ishikawa, Seito Kasai, Yoshimitsu Aoki, Hirokatsu Kataoka,
"Alleviating Over-segmentation Errors by Detecting Action Boundaries"
in WACV 2021

You can find the paper on arXiv.

Reference

  • Colin Lea et al., "Temporal Convolutional Networks for Action Segmentation and Detection", in CVPR 2017 (paper)
  • Yazan Abu Farha et al., "MS-TCN: Multi-Stage Temporal Convolutional Network for Action Segmentation", in CVPR 2019 (paper, code)
