Training a CNN to recognize the current Go position with photorealistic renders


VisualGo


VisualGo is a toolset of machine learning models that extract the current Go board state from an image. It features two models: the first finds the Go board in the given image, and the second predicts the current state. The basic pipeline of the whole model looks like the following:
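The two-stage pipeline can be sketched as a simple composition. The function names and toy stand-ins below are illustrative assumptions, not the repository's actual API:

```python
import numpy as np

# Hypothetical stand-ins for the two trained models and the transformer;
# the real repo uses a UNet, a handcrafted warp/threshold step, and a
# residual tower.
def segmentation_model(image):
    # pretend the whole image is board (real model predicts a binary mask)
    return np.ones(image.shape[:2], dtype=np.uint8)

def transformer(image, mask):
    # crop to the mask's bounding box (real version does a perspective warp)
    ys, xs = np.nonzero(mask)
    return image[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

def position_model(board):
    # dummy prediction: every intersection empty (0=empty, 1=black, 2=white)
    return np.zeros((19, 19), dtype=np.int64)

def predict_board_state(image):
    mask = segmentation_model(image)       # where is the board?
    board = transformer(image, mask)       # rectified, cleaned-up board crop
    return position_model(board)           # 19x19 grid of intersection states

state = predict_board_state(np.zeros((128, 128, 3), dtype=np.uint8))
print(state.shape)  # (19, 19)
```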

Models

As seen in the figure above, the model is divided into two sub-models and a handcrafted transformer, which performs a perspective warp and thresholding on the images using the predicted masks.
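The transformer step can be approximated in plain NumPy for a grayscale image: estimate a homography from the board's four corners (which would be derived from the predicted mask) and binarize the rectified crop. The corner input, output size, and threshold value here are illustrative assumptions, not the repository's exact parameters:

```python
import numpy as np

def homography(src, dst):
    """Solve the 3x3 homography mapping 4 src points onto 4 dst points (DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # the null vector of A (last right-singular vector) is the flattened H
    _, _, vt = np.linalg.svd(np.array(A, dtype=float))
    return vt[-1].reshape(3, 3)

def warp_and_threshold(image, corners, size=128, thresh=128):
    """Rectify the board given its 4 corners (tl, tr, br, bl), then binarize."""
    dst = [(0, 0), (size - 1, 0), (size - 1, size - 1), (0, size - 1)]
    H = np.linalg.inv(homography(corners, dst))  # maps output pixels -> source
    u, v = np.meshgrid(np.arange(size), np.arange(size))
    pts = np.stack([u.ravel(), v.ravel(), np.ones(size * size)])
    s = H @ pts
    x = (s[0] / s[2]).round().astype(int).clip(0, image.shape[1] - 1)
    y = (s[1] / s[2]).round().astype(int).clip(0, image.shape[0] - 1)
    warped = image[y, x].reshape(size, size)   # inverse-mapped (nearest neighbor)
    return (warped > thresh).astype(np.uint8)

# demo: a horizontal gradient, "board" corners at the image corners
img = np.tile(np.arange(200, dtype=np.uint8), (200, 1))
out = warp_and_threshold(img, [(0, 0), (199, 0), (199, 199), (0, 199)])
print(out.shape)  # (128, 128)
```

A production version would use `cv2.getPerspectiveTransform` and `cv2.warpPerspective` instead of the hand-rolled DLT above.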

The Segmentation Model is a basic UNet architecture [1] trained on ~800 rendered images of Go boards. The images are resized to 128x128 pixels and fed into the network in batches of 16. Since this is a basic binary segmentation problem, the model performs quite well.
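A minimal UNet sketch in PyTorch gives a feel for the architecture: an encoder, a bottleneck, and a decoder with a skip connection, ending in a one-channel board-vs-background logit map. This toy two-level version (channel counts included) is an illustrative assumption, not the repository's exact network:

```python
import torch
import torch.nn as nn

def block(c_in, c_out):
    # the standard UNet double-conv unit
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class TinyUNet(nn.Module):
    """Two-level UNet: encoder, bottleneck, decoder with one skip connection."""
    def __init__(self):
        super().__init__()
        self.enc = block(3, 16)
        self.down = nn.MaxPool2d(2)
        self.mid = block(16, 32)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = block(32, 16)               # 32 = upsampled 16 + skip 16
        self.out = nn.Conv2d(16, 1, 1)         # 1 channel: board-vs-background

    def forward(self, x):
        e = self.enc(x)
        m = self.mid(self.down(e))
        d = self.dec(torch.cat([self.up(m), e], dim=1))
        return self.out(d)

x = torch.randn(16, 3, 128, 128)   # a batch of 16 resized images
logits = TinyUNet()(x)
print(logits.shape)  # torch.Size([16, 1, 128, 128])
```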

The State Prediction Model is a residual tower trained on the transformed images produced by the Segmentation Model and the mask transformer.
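A residual tower is a stack of identical residual blocks. The sketch below shows the typical block structure; the channel count and depth are illustrative assumptions, and the classification head that maps the features to the 19x19 board state is omitted:

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """One block of a residual tower: conv-BN-ReLU, conv-BN, identity skip."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch))
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        # the skip connection lets gradients bypass the conv body
        return self.act(self.body(x) + x)

tower = nn.Sequential(*[ResBlock(32) for _ in range(4)])
y = tower(torch.randn(2, 32, 19, 19))
print(y.shape)  # torch.Size([2, 32, 19, 19])
```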

Files

This repository contains a set of notebooks explaining every model in depth and analysing their performance using Captum.

Here is a basic table of contents:

  • EDA: Exploratory Data Analysis of the VisualGo Dataset
  • Segmentation: Exploring the Segmentation model architecture and investigating the model quality using Captum
  • MaskTransformer: Explaining the Transformer
  • Position: Exploring the Position model architecture and investigating the model quality using Captum

I highly recommend checking them out via the Binder link I set up.

Dataset

As already mentioned, the images are actually photorealistic renders of random Go boards with randomized materials, camera positions and lighting (a deeper look at how the data was generated is given in the EDA notebook).

You can find the dataset in its final form on Kaggle.

It was rendered using Blender and the following script.

Inspiration

References

  • [1] U-Net: Convolutional Networks for Biomedical Image Segmentation, Olaf Ronneberger, Philipp Fischer, Thomas Brox (arXiv)
  • [2] Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization, Ramprasaath R. Selvaraju, Michael Cogswell, et al. (arXiv)

Techstack

Made with Jupyter
