
ShipRSImageNet: A Large-scale Fine-Grained Dataset for Ship Detection in High-Resolution Optical Remote Sensing Images


Description

ShipRSImageNet is a large-scale fine-grained dataset for ship detection in high-resolution optical remote sensing images. The dataset contains 3,435 images from various sensors, satellite platforms, locations, and seasons. Each image is around 930×930 pixels and contains ships with different scales, orientations, and aspect ratios. The images are annotated by experts in satellite image interpretation and categorized into 50 object categories. The fully annotated ShipRSImageNet contains 17,573 ship instances. There are five critical contributions of the proposed ShipRSImageNet dataset compared with other existing remote sensing image datasets:

  • Images are collected from various remote sensors covering multiple ports worldwide and have large variations in size, spatial resolution, image quality, orientation, and environment.

  • Ships are hierarchically classified into four levels and 50 ship categories.

  • The number of images, ship instances, and ship categories is larger than that in other publicly available ship datasets. Moreover, these numbers are still increasing.

  • We annotate images with horizontal bounding boxes, oriented bounding boxes, and polygons simultaneously, providing detailed information about the direction, background, sea environment, and location of targets.

  • We have benchmarked several state-of-the-art object detection algorithms on ShipRSImageNet, which can be used as a baseline for future ship detection methods.

Examples of Annotated Images

[Figure: examples of annotated images in ShipRSImageNet.]

Image Source and Usage License

The ShipRSImageNet dataset collects images from a variety of sensor platforms and datasets, in particular:

  • Images of the xView dataset are collected from the WorldView-3 satellite with 0.3 m ground resolution. Images in xView are pulled from a wide range of geographic locations, and we only extract the images containing ship targets. Since the original xView images are too large for training, we slice them into 930×930-pixel tiles with a 150-pixel overlap (see the tiling sketch after this list), producing 532 images, and relabel them with both horizontal and oriented bounding boxes.

  • We also collect 1,057 images from the HRSC2016 dataset and 1,846 images from the FGSD dataset, correct mislabeled targets, and relabel missed small ship targets.

  • 21 images from the Airbus Ship Detection Challenge.

  • 17 images from Chinese satellites such as GaoFen-2 and JiLin-1.
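The xView tiling described above can be reproduced with a simple sliding-window crop. The following is a minimal sketch, assuming the 930×930 tile size and 150-pixel overlap stated above; the file paths and output naming are hypothetical.

    import cv2  # OpenCV for image I/O

    def slice_image(image, tile_size=930, overlap=150):
        """Yield (x, y, tile) crops from a sliding window with the given overlap."""
        stride = tile_size - overlap
        h, w = image.shape[:2]
        for y in range(0, max(h - overlap, 1), stride):
            for x in range(0, max(w - overlap, 1), stride):
                # Clamp border windows so every tile stays inside the image.
                x0 = min(x, max(w - tile_size, 0))
                y0 = min(y, max(h - tile_size, 0))
                yield x0, y0, image[y0:y0 + tile_size, x0:x0 + tile_size]

    # Hypothetical usage: slice one large xView scene into overlapping tiles.
    scene = cv2.imread("xview_scene.tif")
    for x0, y0, tile in slice_image(scene):
        cv2.imwrite(f"tiles/xview_{x0}_{y0}.png", tile)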

Use of the Google Earth images must respect the "Google Earth" terms of use.

All images and their associated annotations in ShipRSImageNet can be used for academic purposes only; any commercial use is prohibited.

Object Category

The ship classification tree of the proposed ShipRSImageNet is shown in the following figure. Level 0 distinguishes whether the object is a ship, named Class. Level 1 further classifies the ship object category, named Category. Level 2 further subdivides the categories of Level 1. Level 3 is the specific type of ship, named Type.

[Figure: ship classification tree of ShipRSImageNet.]

At Level 3, ship objects are divided into 50 types. For brevity, we use the following abbreviations: DD for Destroyer, FF for Frigate, LL for Landing, AS for Auxiliary Ship, LSD for Landing Ship Dock, LHA for Landing Helicopter Assault Ship, AOE for Fast Combat Support Ship, EPF for Expeditionary Fast Transport Ship, and RoRo for Roll-on Roll-off Ship. These 50 object classes are Other Ship, Other Warship, Submarine, Other Aircraft Carrier, Enterprise, Nimitz, Midway, Ticonderoga, Other Destroyer, Atago DD, Arleigh Burke DD, Hatsuyuki DD, Hyuga DD, Asagiri DD, Other Frigate, Perry FF, Patrol, Other Landing, YuTing LL, YuDeng LL, YuDao LL, YuZhao LL, Austin LL, Osumi LL, Wasp LL, LSD 41 LL, LHA LL, Commander, Other Auxiliary Ship, Medical Ship, Test Ship, Training Ship, AOE, Masyuu AS, Sanantonio AS, EPF, Other Merchant, Container Ship, RoRo, Cargo, Barge, Tugboat, Ferry, Yacht, Sailboat, Fishing Vessel, Oil Tanker, Hovercraft, Motorboat, and Dock.
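For experiments at a coarser level (e.g., the Level 0 or Level 1 configs used below), the Level 3 types have to be collapsed up the tree. The snippet below is only an illustrative sketch: the keys are class names listed above, but the grouping into Level 1 categories is an assumption and should be checked against the classification-tree figure.

    # Illustrative (partial) mapping from Level 3 types to a coarser Level 1 category.
    # The exact grouping must be taken from the classification tree above.
    LEVEL3_TO_LEVEL1 = {
        "Nimitz": "Aircraft Carrier",
        "Arleigh Burke DD": "Warship",
        "Perry FF": "Warship",
        "Container Ship": "Merchant",
        "Oil Tanker": "Merchant",
        "Dock": "Dock",
    }

    def to_level1(level3_name):
        """Collapse a fine-grained Level 3 label to its Level 1 category."""
        return LEVEL3_TO_LEVEL1.get(level3_name, "Other Ship")

    print(to_level1("Perry FF"))  # -> "Warship"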

Dataset Download

Benchmark Code Installation

This benchmark is based on MMDetection (v2.11.0), an open-source object detection toolbox based on PyTorch and part of the OpenMMLab project developed by the Multimedia Laboratory, CUHK. We keep all experiment settings and hyper-parameters the same as in the MMDetection (v2.11.0) config files, except for the number of categories and the related parameters.

Prerequisites

  • Linux or macOS (Windows is in experimental support)
  • Python 3.6+
  • PyTorch 1.3+
  • CUDA 9.2+ (If you build PyTorch from source, CUDA 9.0 is also compatible)
  • GCC 5+
  • MMCV

Installation

  • Install MMDetection following the instructions below. Note that our code is tested with MMDetection v2.11.0 and PyTorch v1.7.1.

    • Create a conda virtual environment and activate it.

      conda create -n open-mmlab python=3.7 -y
      conda activate open-mmlab
    • Install PyTorch and torchvision following the official instructions, e.g.,

      conda install pytorch torchvision -c pytorch

      Note: Make sure that your compilation CUDA version and runtime CUDA version match. You can check the supported CUDA version for precompiled packages on the PyTorch website.

    • Install mmcv-full. We recommend installing the pre-built package as below.

      pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/{cu_version}/{torch_version}/index.html

      Please replace {cu_version} and {torch_version} in the URL with the versions you need. For example, to install the latest mmcv-full with CUDA 11.0 and PyTorch 1.7.1, use the following command:

      pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu110/torch1.7.1/index.html
    • Download this benchmark code.

      git clone https://github.com/open-mmlab/mmdetection.git
      cd mmdetection2.11-ShipRSImageNet
    • Install build requirements and then install MMDetection.

      pip install -r requirements/build.txt
      pip install -v -e .  # or "python setup.py develop"
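
      After installation, a quick sanity check confirms that PyTorch, MMCV, and MMDetection import correctly and that CUDA is visible (the versions printed depend on your environment):

      python -c "import torch, mmcv, mmdet; print(torch.__version__, torch.cuda.is_available(), mmcv.__version__, mmdet.__version__)"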

Train with ShipRSImageNet

  • Download the ShipRSImageNet dataset. It is recommended to symlink the ShipRSImageNet dataset root to $mmdetection2.11-ShipRSImageNet/data:

    ln -s $dataset/ShipRSImageNet/ $mmdetection2.11-ShipRSImageNet/data/
  • If your folder structure is different, you may need to change the corresponding paths in config files.

  • mmdetection2.11-ShipRSImageNet
    ├── mmdet
    ├── tools
    ├── configs
    ├── data
    │   ├── ShipRSImageNet
    │   │   ├── COCO_Format
    │   │   ├── masks
    │   │   ├── VOC_Format
    │   │   │   ├── annotations
    │   │   │   ├── ImageSets
    │   │   │   ├── JPEGImages
  • Prepare a config file:

    • The benchmark config files for ShipRSImageNet are already provided in:

      • $mmdetection2.11-ShipRSImageNet/configs/ShipRSImageNet/
  • Example of training a model on ShipRSImageNet (an evaluation command follows this list):

    • python tools/train.py configs/ShipRSImageNet/faster_rcnn/faster_rcnn_r50_fpn_100e_ShipRSImageNet_Level0.py
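
After training, the checkpoint can be evaluated with MMDetection's standard test script. The config path below reuses the training example; the checkpoint path is illustrative and depends on your work_dir setting.

    python tools/test.py configs/ShipRSImageNet/faster_rcnn/faster_rcnn_r50_fpn_100e_ShipRSImageNet_Level0.py \
        work_dirs/faster_rcnn_r50_fpn_100e_ShipRSImageNet_Level0/latest.pth --eval bbox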

Models trained on ShipRSImageNet

We introduce two tasks: detection with horizontal bounding boxes (HBB for short) and segmentation with oriented bounding boxes (SBB for short). HBB aims at extracting bounding boxes aligned with the image axes; it is an Object Detection task. SBB aims at semantically segmenting the image; it is a Semantic Segmentation task.

The evaluation protocol follows the same mAP (@IoU=0.50:0.95) and small/medium/large-area mAP and mAR calculations used by MS-COCO.
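Since the protocol is COCO-style, results exported in COCO format can also be scored directly with pycocotools. The snippet below is a minimal sketch; the annotation and result file names are hypothetical and should be replaced with your actual COCO_Format ground truth and exported detections.

    from pycocotools.coco import COCO
    from pycocotools.cocoeval import COCOeval

    # Hypothetical file names: point these at your COCO_Format ground truth
    # and at the detection results exported by the test script.
    coco_gt = COCO("data/ShipRSImageNet/COCO_Format/val_level0.json")
    coco_dt = coco_gt.loadRes("results.bbox.json")

    evaluator = COCOeval(coco_gt, coco_dt, iouType="bbox")  # use "segm" for masks
    evaluator.evaluate()
    evaluator.accumulate()
    evaluator.summarize()  # prints mAP@[.50:.95] and small/medium/large AP/AR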

Level 0

| Model | Backbone | Style | HBB mAP | SBB mAP | Extraction code | Download |
|---|---|---|---|---|---|---|
| Faster RCNN with FPN | R-50 | Pytorch | 0.550 | - | 2vrm | model |
| Faster RCNN with FPN | R-101 | Pytorch | 0.546 | - | f362 | model |
| Mask RCNN with FPN | R-50 | Pytorch | 0.566 | 0.440 | 24eq | model |
| Mask RCNN with FPN | R-101 | Pytorch | 0.557 | 0.436 | lbcb | model |
| Cascade Mask RCNN with FPN | R-50 | Pytorch | 0.568 | 0.430 | et6m | model |
| SSD | VGG16 | Pytorch | 0.464 | - | qabf | model |
| Retinanet with FPN | R-50 | Pytorch | 0.418 | - | 7qdw | model |
| Retinanet with FPN | R-101 | Pytorch | 0.419 | - | vdiq | model |
| FoveaBox | R-101 | Pytorch | 0.453 | - | urbf | model |
| FCOS with FPN | R-101 | Pytorch | 0.333 | - | 94ub | model |

Level 1

| Model | Backbone | Style | HBB mAP | SBB mAP | Extraction code | Download |
|---|---|---|---|---|---|---|
| Faster RCNN with FPN | R-50 | Pytorch | 0.366 | - | 5i5a | model |
| Faster RCNN with FPN | R-101 | Pytorch | 0.461 | - | 6ts7 | model |
| Mask RCNN with FPN | R-50 | Pytorch | 0.456 | 0.347 | 9gnt | model |
| Mask RCNN with FPN | R-101 | Pytorch | 0.472 | 0.371 | wc62 | model |
| Cascade Mask RCNN with FPN | R-50 | Pytorch | 0.485 | 0.365 | a8bl | model |
| SSD | VGG16 | Pytorch | 0.397 | - | uffe | model |
| Retinanet with FPN | R-50 | Pytorch | 0.368 | - | lfio | model |
| Retinanet with FPN | R-101 | Pytorch | 0.359 | - | p1rd | model |
| FoveaBox | R-101 | Pytorch | 0.389 | - | kwiq | model |
| FCOS with FPN | R-101 | Pytorch | 0.351 | - | 1djo | model |

Level 2

| Model | Backbone | Style | HBB mAP | SBB mAP | Extraction code | Download |
|---|---|---|---|---|---|---|
| Faster RCNN with FPN | R-50 | Pytorch | 0.345 | - | 924l | model |
| Faster RCNN with FPN | R-101 | Pytorch | 0.479 | - | fb1b | model |
| Mask RCNN with FPN | R-50 | Pytorch | 0.468 | 0.377 | so8j | model |
| Mask RCNN with FPN | R-101 | Pytorch | 0.488 | 0.398 | 7q1g | model |
| Cascade Mask RCNN with FPN | R-50 | Pytorch | 0.492 | 0.389 | t9gr | model |
| SSD | VGG16 | Pytorch | 0.423 | - | t1ma | model |
| Retinanet with FPN | R-50 | Pytorch | 0.369 | - | 4h0o | model |
| Retinanet with FPN | R-101 | Pytorch | 0.411 | - | g9ca | model |
| FoveaBox | R-101 | Pytorch | 0.427 | - | 8e12 | model |
| FCOS with FPN | R-101 | Pytorch | 0.431 | - | 0hl0 | model |

Level 3

| Model | Backbone | Style | HBB mAP | SBB mAP | Extraction code | Download |
|---|---|---|---|---|---|---|
| Faster RCNN with FPN | R-50 | Pytorch | 0.375 | - | 7qmo | model |
| Faster RCNN with FPN | R-101 | Pytorch | 0.543 | - | bmla | model |
| Mask RCNN with FPN | R-50 | Pytorch | 0.545 | 0.450 | a73h | model |
| Mask RCNN with FPN | R-101 | Pytorch | 0.564 | 0.472 | 7k9i | model |
| Cascade Mask RCNN with FPN | R-50 | Pytorch | 0.593 | 0.483 | ebga | model |
| SSD | VGG16 | Pytorch | 0.483 | - | otu5 | model |
| Retinanet with FPN | R-50 | Pytorch | 0.326 | - | tu5a | model |
| Retinanet with FPN | R-101 | Pytorch | 0.483 | - | ptv0 | model |
| FoveaBox | R-101 | Pytorch | 0.459 | - | 1acn | model |
| FCOS with FPN | R-101 | Pytorch | 0.498 | - | 40a8 | model |

Development kit

The ShipRSImageNet Development kit is based on the DOTA Development kit and provides the following functions:

  • Load an image and show the bounding boxes on it (see the sketch after this list).

  • Convert VOC-format labels to COCO-format labels.
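
As a rough illustration of the first function, the sketch below draws horizontal boxes from a VOC-style XML file with OpenCV. It assumes the standard Pascal VOC fields (object/name, bndbox/xmin…ymax) inside VOC_Format; the paths are hypothetical and the devkit's own implementation may differ.

    import xml.etree.ElementTree as ET
    import cv2

    def show_hbb(image_path, xml_path):
        """Draw the horizontal bounding boxes of a VOC-style annotation file."""
        image = cv2.imread(image_path)
        for obj in ET.parse(xml_path).getroot().iter("object"):
            name = obj.findtext("name")
            box = obj.find("bndbox")
            x1, y1, x2, y2 = (int(float(box.findtext(k))) for k in ("xmin", "ymin", "xmax", "ymax"))
            cv2.rectangle(image, (x1, y1), (x2, y2), (0, 255, 0), 2)
            cv2.putText(image, name, (x1, max(y1 - 4, 10)), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
        cv2.imshow("ShipRSImageNet HBB", image)
        cv2.waitKey(0)

    # Hypothetical paths following the folder layout shown above.
    show_hbb("data/ShipRSImageNet/VOC_Format/JPEGImages/000001.jpg",
             "data/ShipRSImageNet/VOC_Format/annotations/000001.xml")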

Citation

If you make use of the ShipRSImageNet dataset, please cite the following paper:

Z. Zhang, L. Zhang, Y. Wang, P. Feng and R. He, "ShipRSImageNet: A Large-Scale Fine-Grained Dataset for Ship Detection in High-Resolution Optical Remote Sensing Images," in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, vol. 14, pp. 8458-8472, 2021, doi: 10.1109/JSTARS.2021.3104230.

Contact

If you have any problems or feedback when using ShipRSImageNet, please contact:

License

ShipRSImageNet is released under the Apache 2.0 license. Please see the LICENSE file for more information.

About

ShipRSImageNet is the largest ship detection dataset in the Computer Vision and Earth Vision communities.
