
IA BEE, an object recognition model made to recognize queen bees inside their hives.


finaldzn/BHCORP


BEEIA Model

*(project poster)*


A general explanation of the project, including specifics on performance and the choices that led to this result, can be found here:

Project Closure Report

Summary:

  • Getting all the files ready
  • Give data to the database
  • Start training
  • Testing
  • Export the model to TFLite and Android
  • Future evolutions

This work has been done by:

This work has been done for the company BEEIA and for ESILV in the context of Pi² projects.

This file goes into the specifics of the object recognition model; for more information about the app or the connected hive, please visit their associated pages.


Setting up everything

To continue, we are going to need a few files:

  • the data, which is pictures of queen bees labelled and organised in the Pascal VOC format.

    • Google Drive: the data can only be uploaded to Google Drive, as GitHub has a 100 MB limit on files.
  • the last trained model, so you can resume from a checkpoint.

Those files are available on this shared drive; I advise you to make a local copy of it so you can work on your own.

  • Copy the Google Colab file and create your own so you can add your modifications. The file is also present in the notebook folder.

Please refer to the comments in the Colab file to get the hang of it.


Give data to the database

  1. Find a video (YouTube or elsewhere) featuring queen bees.
  2. Open the video in the labelling software VoTT.
  3. Label, frame by frame, the queen bees and everything else you want the model to be able to recognize.
  4. Export your data in the "pascal_voc" format.
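Once exported, each image gets an XML annotation file in the Pascal VOC layout. As a sketch of what the export contains, the snippet below reads one such file with the standard library and extracts the labelled bounding boxes (the function name and dictionary layout are illustrative choices, not part of the repository):

```python
import xml.etree.ElementTree as ET

def parse_voc_annotation(xml_path):
    """Return the labelled bounding boxes from one Pascal VOC XML file."""
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.iter("object"):
        bb = obj.find("bndbox")
        boxes.append({
            "label": obj.find("name").text,
            # VOC stores pixel coordinates; VoTT may export them as floats
            "xmin": int(float(bb.find("xmin").text)),
            "ymin": int(float(bb.find("ymin").text)),
            "xmax": int(float(bb.find("xmax").text)),
            "ymax": int(float(bb.find("ymax").text)),
        })
    return boxes
```

This is handy for sanity-checking an export (e.g. counting how many frames actually contain a queen bee) before feeding it to the training pipeline.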

Create the inference graph and model

You can use the notebook for everything regarding the model.

There is also the all_in_one notebook, which details each step if you want everything in one file.

Create the records

To create the records (you can skip this part, as the results are already in the data folder), the output should be a pascalvoc_training.record file:
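As a rough sketch of this step, the usual pattern is a conversion script that walks the Pascal VOC export and writes a single TFRecord. The script name, paths, and label map below are placeholders; match them to what the Colab notebook actually uses:

```shell
# Sketch only: "generate_tfrecord.py" is a placeholder for the conversion
# script used in the notebook, not a confirmed file in this repository.
python generate_tfrecord.py \
    --xml_dir=data/annotations \
    --image_dir=data/images \
    --labels_path=data/label_map.pbtxt \
    --output_path=data/pascalvoc_training.record
```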

Train the model

To start training the model, the output should be a folder called inference graph:
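For reference, training with the TF1 Object Detection API typically looks like the commands below; the config path, checkpoint number, and directory names are assumptions to adapt to the notebook's setup:

```shell
# Launch training (TF1 Object Detection API).
python object_detection/model_main.py \
    --pipeline_config_path=training/pipeline.config \
    --model_dir=training/ \
    --alsologtostderr

# Export the frozen inference graph once training is done;
# replace XXXX with the step number of your last checkpoint.
python object_detection/export_inference_graph.py \
    --input_type=image_tensor \
    --pipeline_config_path=training/pipeline.config \
    --trained_checkpoint_prefix=training/model.ckpt-XXXX \
    --output_directory=inference_graph
```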

Convert into Tflite

To convert the obtained model (this works only with SSD models), the output should be a model.tflite file:
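The SSD-only restriction comes from the TF1 Object Detection API's dedicated TFLite export path, which sketches out as follows (paths and the 300×300 input shape are assumptions; the Colab file has the exact values):

```shell
# Export a TFLite-compatible frozen graph for an SSD model.
python object_detection/export_tflite_ssd_graph.py \
    --pipeline_config_path=training/pipeline.config \
    --trained_checkpoint_prefix=training/model.ckpt-XXXX \
    --output_directory=tflite_export \
    --add_postprocessing_op=true

# Convert the frozen graph to model.tflite.
tflite_convert \
    --graph_def_file=tflite_export/tflite_graph.pb \
    --output_file=tflite_export/model.tflite \
    --input_shapes=1,300,300,3 \
    --input_arrays=normalized_input_image_tensor \
    --output_arrays='TFLite_Detection_PostProcess','TFLite_Detection_PostProcess:1','TFLite_Detection_PostProcess:2','TFLite_Detection_PostProcess:3' \
    --allow_custom_ops
```

The custom `TFLite_Detection_PostProcess` op only exists for SSD graphs, which is why other architectures (like the RNN model) cannot take this route.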


Testing

There are a few files you can use to test your model and run predictions. These files are known to work with the RNN model; the SSD model has issues with them.

py predictions/object_dection_image.py

For example, this file lets you run a prediction on an image; you need to specify the paths to the inference graph and to the picture. Some test pictures are available in the ./testfiles folder.

RNN model prediction


Export the model to TFLite and Android

For the export to TFLite, please follow the instructions in the Colab file.

Once you have your model.tflite file, you can upload it to the BEEIA Android app to replace the current model. On the next launch, the app should use the updated model.

More instructions are available there.

SSD model prediction



Future evolutions

The model at the time of writing had a few issues, chiefly low recognition accuracy in the app (80% at best).

A few evolutions I think would make the model better:

  • Add some regular bee pictures to the dataset and train the model on them, so it can differentiate more easily between regular bees and queen bees
  • Figure out how to quantize the TFLite model; today this is not possible (refer to the notebook for more information)
  • Make the model work on TF 2.x