
Object Detector

The detector is built based on the tutorials Training Custom Object Detector and How to train your own Object Detector with TensorFlow’s Object Detector API. Both tutorials are awesome, but configuring the TensorFlow Object Detector is just too painful. So I wrote this object detector for everyone who enjoys seeing results rather than fighting with tools. It's still recommended to follow the tutorials to learn the process, but all you really need to run this detector is a bunch of images and Docker.

Note: the detector is mainly designed for people who are starting with nothing but images. If you already have a dataset, you may need to change some scripts (but that process should be pretty easy).

Workflow

This is the workflow for starting with nothing and ending with a trained model. Please refer to the tutorials if any terminology looks weird to you.

  1. Install Detector To install the detector, you will need Docker installed on your system. Once you have done that, simply run ./build.sh; it will create a Docker image tagged object-detector.
  2. Label Images You will need a bunch of images for training, and for each image you need to add labels indicating the objects in it (i.e. an XML file associated with the JPG file). I used labelImg for image labeling, and it's good enough for me, just remember to save it!!! A quick sanity-check sketch for the resulting annotations follows.
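
     Since every image needs a matching XML file, it can be worth sanity-checking the annotations before building the dataset. This is a minimal sketch, not part of this repo, assuming labelImg's default Pascal VOC XML output; the folder path is a placeholder for your LABELED_IMAGE_PATH.

          import os
          import xml.etree.ElementTree as ET

          LABELED_IMAGE_PATH = "path/to/labeled/images"  # assumption: your labeled-image folder

          for name in sorted(os.listdir(LABELED_IMAGE_PATH)):
              if not name.endswith(".xml"):
                  continue
              root = ET.parse(os.path.join(LABELED_IMAGE_PATH, name)).getroot()
              # labelImg writes one <object> element per labeled bounding box
              labels = [obj.findtext("name") for obj in root.findall("object")]
              if labels:
                  print(name, labels)
              else:
                  print("WARNING:", name, "has no labeled objects")
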
  3. Create Dataset As mentioned in the tutorials, you will need the TFRecord format for your data.
    1. After you have labeled the images, open the configuration file and change the LABELED_IMAGE_PATH variable to the path of the labeled images.
    2. The CLASSES variable in the configuration file specifies the classes (labels) you want to detect; separate classes with commas.
    3. You will also need a path to hold the dataset (consider creating a new folder for it). You can specify that path by setting the DATA_PATH variable in the configuration file.
    4. (Optional) If only part of an image contains the object you want (useful if the object is too small or you want multi-stage detection), you can add an <image_name>.crop file for each image to indicate the bounding box of the region you are interested in. Note: this crop file should contain only a single detection box in the format generated by detector/detect.py.
    5. Now you can run ./create-dataset.sh; it will create three folders under the <DATA_PATH> you specified (a sketch of what goes into the TFRecord files follows this step).
      • <DATA_PATH>/images for your labeled images
      • <DATA_PATH>/training for the TFRecord file, label map, and training images
      • <DATA_PATH>/testing for the TFRecord file, label map, and testing images
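
     For reference, the records created in this step follow the TensorFlow Object Detection API's standard tf.train.Example layout. Below is a minimal sketch of that conversion under the API's usual conventions; it is not a copy of detector/prepare_data.py, and the helper name and output file name are illustrative.

          import tensorflow as tf

          def make_example(jpg_bytes, width, height, filename, boxes, texts, ids):
              """Build one tf.train.Example in the Object Detection API layout.

              boxes are (xmin, ymin, xmax, ymax) in pixels; stored normalized to [0, 1].
              """
              def _bytes(values):
                  return tf.train.Feature(bytes_list=tf.train.BytesList(value=values))

              def _floats(values):
                  return tf.train.Feature(float_list=tf.train.FloatList(value=values))

              def _ints(values):
                  return tf.train.Feature(int64_list=tf.train.Int64List(value=values))

              feature = {
                  "image/encoded": _bytes([jpg_bytes]),
                  "image/format": _bytes([b"jpeg"]),
                  "image/filename": _bytes([filename.encode()]),
                  "image/width": _ints([width]),
                  "image/height": _ints([height]),
                  "image/object/bbox/xmin": _floats([b[0] / width for b in boxes]),
                  "image/object/bbox/ymin": _floats([b[1] / height for b in boxes]),
                  "image/object/bbox/xmax": _floats([b[2] / width for b in boxes]),
                  "image/object/bbox/ymax": _floats([b[3] / height for b in boxes]),
                  "image/object/class/text": _bytes([t.encode() for t in texts]),
                  "image/object/class/label": _ints(ids),
              }
              return tf.train.Example(features=tf.train.Features(feature=feature))

          # Each serialized example goes into the dataset's tf.record file.
          with tf.io.TFRecordWriter("tf.record") as writer:
              pass  # writer.write(make_example(...).SerializeToString()) per image
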
  4. Prepare Pre-trained Model As mentioned in the tutorials, you will need a pre-trained model to start training.
    1. Before you start, change the MODEL_PATH variable in the configuration file to a folder that will hold your models. Consider creating a new folder for it.

    2. Download a pre-trained model from TensorFlow’s detection model zoo, then unzip it. For this project, you must rename the model folder so that it is saved as <MODEL_PATH>/model.

    3. Because the pre-trained model contains a checkpoint file whose recorded step count exceeds max_step (meaning training will not run), simply remove the checkpoint file inside the model folder.

    4. Configure a pipeline.config file to match the model path and data path, and save the config file to <MODEL_PATH>/pipeline.config. For this detector, DATA_PATH is mounted as /data/ and MODEL_PATH is mounted as /model/, so whenever you refer to something inside these paths, use their mounted paths. For example, you should configure it as follows:

          fine_tune_checkpoint: "/model/model/model.ckpt"
          ...
          tf_record_input_reader { # in both the train and eval input readers
              input_path: "/data/<training or testing>/tf.record"
          }
          label_map_path: "/data/<training or testing>/label_map.pbtxt"

      Note that the tf.record and label_map.pbtxt names are defined in detector/prepare_data.py. Please refer to configuring-a-training-pipeline for more details.
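
      If you want to double-check those paths before training, the Object Detection API ships a config utility that can parse the file. This is an optional sketch, not one of this repo's scripts, and it assumes you run it inside the container where /model is mounted:

          # Optional sanity check: parse pipeline.config with the Object Detection API.
          from object_detection.utils import config_util

          configs = config_util.get_configs_from_pipeline_file("/model/pipeline.config")
          print(configs["train_config"].fine_tune_checkpoint)  # expect /model/model/model.ckpt
          print(configs["train_input_config"].tf_record_input_reader.input_path)
          print(configs["train_input_config"].label_map_path)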

  5. Start Training Now we are ready! Run ./train.sh to enter the training shell, type in train, press Enter, and the training starts! You can also open http://localhost:6006/ in your browser to observe the training status!
  6. Save Your Model Once you are happy with the detection results (by looking at the visualizations), you should stop training and save the model. Use Ctrl+C to quit the training process, then type in save; the newest model will be saved to <MODEL_PATH>/saved_model. Note: please remove that folder if you want to retrain your model.
  7. Use Your Model To use your model, put all the images you want to detect under <DATA_PATH>/to_detect (this is defined in detector/detect.py). After that, run ./detect.sh, then type in detect to start running detection on your images. If you would rather call the saved model from your own Python code, see the sketch below.
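
     The following is a minimal inference sketch, assuming the export at <MODEL_PATH>/saved_model loads as a standard TF2-style SavedModel and that you run it inside the container where /model and /data are mounted; detector/detect.py may do this differently, and the image name is hypothetical.

          import numpy as np
          import tensorflow as tf
          from PIL import Image

          # Load the exported model from the mounted MODEL_PATH (assumption).
          detect_fn = tf.saved_model.load("/model/saved_model")

          image = np.array(Image.open("/data/to_detect/example.jpg"))  # hypothetical image
          input_tensor = tf.convert_to_tensor(image[np.newaxis, ...], dtype=tf.uint8)
          outputs = detect_fn(input_tensor)

          # Object Detection API models typically return normalized boxes plus scores.
          print(outputs["detection_boxes"][0][:5])
          print(outputs["detection_scores"][0][:5])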
