This repository has been archived by the owner on May 28, 2021. It is now read-only.

Object detection and instance segmentation on MaskRCNN with torchvision, albumentations, tensorboard and cocoapi. Supports custom coco datasets with positive/negative samples.


fcakyon/augmented-maskrcnn


Augmented MaskRCNN


This repo lets you easily fine-tune a pretrained MaskRCNN model with 64 fast image augmentation types using your custom data/annotations, then run prediction with the trained model. Training and inference work on both Windows and Linux.

  • torchvision is integrated for MaskRCNN training, providing faster convergence and negative sample support
  • albumentations is integrated for image augmentation; it is much faster than imgaug and supports 64 augmentation types for images, bounding boxes and masks
  • torch-optimizer is integrated to support the AdaBound, Lamb and RAdam optimizers
  • tensorboard is integrated for visualizing training/validation losses, per-category training/validation COCO AP results and iteration-based learning rate changes
  • Pretrained resnet50 + feature pyramid network weights trained on COCO are downloaded upon training
  • COCO evaluation is performed after each epoch, for the training and validation sets and for each category
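
The positive/negative sample support above hinges on whether an image in the COCO annotation file has any annotations. As an illustrative sketch only (not this repo's actual dataset code), a COCO-format dict can be split into positives and negatives with the standard library:

```python
def count_positive_negative(coco: dict) -> tuple:
    """Count positive samples (images with at least one annotation)
    and negative samples (images with none) in a COCO-format dict."""
    annotated = {ann["image_id"] for ann in coco.get("annotations", [])}
    images = coco.get("images", [])
    positives = sum(1 for img in images if img["id"] in annotated)
    return positives, len(images) - positives

# A real annotation file would be loaded with json.load(open("train.json")).
coco = {
    "images": [{"id": 1, "file_name": "a.jpg"}, {"id": 2, "file_name": "b.jpg"}],
    "annotations": [{"id": 10, "image_id": 1, "category_id": 1}],
    "categories": [{"id": 1, "name": "object"}],
}
print(count_positive_negative(coco))  # (1, 1): one positive, one negative
```

Negative samples (images with zero annotations) are kept rather than discarded, which is what allows training on backgrounds without objects.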

Installation

Install Miniconda (Python 3); on Linux, you can use the bash script below:

sudo apt update --yes
sudo apt upgrade --yes
wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh -O ~/miniconda.sh
bash ~/miniconda.sh -b -p ~/miniconda
rm ~/miniconda.sh
  • Inside the base project directory, open a terminal/Anaconda Prompt window and create the environment:
conda env create -f environment.yml
  • After the environment setup, activate the environment and run the tests to check that everything is ready:
conda activate augmented-maskrcnn
python -m unittest

Usage

  • In the base project directory, open a terminal/Anaconda Prompt window and activate the environment:
conda activate augmented-maskrcnn
  • Create a config YAML file, similar to default_config.yml, tailored to your needs
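
A minimal sketch of such a config is shown below. The key names here are assumptions for illustration only, not this repo's actual schema; copy configs/default_config.yml and keep its key names.

```yaml
# Illustrative only: key names are assumptions, not the repo's actual schema.
# Start from configs/default_config.yml and keep the keys it defines.
experiment_name: my_experiment
train_annotation_path: data/train.json
val_annotation_path: data/val.json
num_epochs: 12
learning_rate: 0.001
```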

  • Perform training by giving the config path as argument:

python train.py configs/default_config.yml
  • Visualize real-time training/validation losses and accuracies via tensorboard at http://localhost:6006 in your browser:
tensorboard --logdir=experiments
  • Perform prediction for image "CA01_01.tif" using model "artifacts/maskrcnn-best.pt":
python predict.py CA01_01.tif artifacts/maskrcnn-best.pt
