LukasMosser/neural_rock_typing

Neural Rock

Do Machines See Rocks Like Geologists Do?

Authors

Gregor Baechle, George Ghon, Lukas Mosser
Carbonate Complexities Group, 2020

Introduction

This project aims to investigate the ability of neural networks to classify carbonate rocks from thin-sections for various carbonate classification schemes. More importantly, we seek to understand whether neural networks use similar visual and textural features to classify each image.

To investigate this, we use Gradient-weighted Class Activation Maps (GradCAM, see the Distill.Pub article) to highlight where a neural network is "looking" in an image when deciding which carbonate class to predict.
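As a rough illustration of the GradCAM mechanics, the sketch below computes a class activation map for a tiny stand-in CNN. The model, layer choice, and class count are hypothetical placeholders; the repository itself works with ResNet18/VGG11 backbones via pytorch-grad-cam.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Tiny stand-in CNN (hypothetical; the project uses ResNet18/VGG11 backbones).
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(4), nn.Flatten(),
    nn.Linear(8 * 4 * 4, 5),            # pretend there are 5 carbonate classes
)
model.eval()

acts, grads = {}, {}
target_layer = model[0]                  # the layer whose CAM we visualize
target_layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))

x = torch.randn(1, 3, 224, 224)          # one image patch
logits = model(x)
logits[0, logits.argmax()].backward()    # backprop the score of the top class

# GradCAM: weight each activation channel by its average gradient, sum, ReLU.
weights = grads["g"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * acts["a"]).sum(dim=1)).squeeze(0)
print(cam.shape)                         # torch.Size([224, 224])
```

Because the map is built from backpropagated gradients, producing it requires a full backward pass, which is also why the deployed application's RAM requirements are high.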

These class activation maps depend on the architecture and weights of a model. We therefore provide pre-trained models, code infrastructure to train various convolutional networks, and a viewer application to visualize the CAM maps and the predictions of each network.

Due to the extremely small dataset of ~80 images, a transfer learning approach was used: we fine-tune ImageNet-pretrained models. Because each image is larger than 3000 x 2000 pixels, we randomly extract 224 x 224 pixel patches and apply feature-preserving data augmentation to regularize model training and to (hopefully) prevent overfitting. Regardless, results should be evaluated on the test splits of the datasets, indicated in the viewer app for each model.
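The patch extraction step can be sketched in a few lines of NumPy. The image dimensions and helper name below are illustrative, not the repository's actual code:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_patch(image, size=224):
    """Extract one random size x size patch from a large thin-section image.

    `image` is an (H, W, 3) array with H, W >= size; since the source images
    exceed 3000 x 2000 pixels, many distinct patches can be drawn per image.
    """
    h, w = image.shape[:2]
    top = rng.integers(0, h - size + 1)
    left = rng.integers(0, w - size + 1)
    return image[top:top + size, left:left + size]

thin_section = rng.random((2000, 3000, 3))   # stand-in for a scanned slide
patch = random_patch(thin_section)
print(patch.shape)  # (224, 224, 3)
```

Random patching effectively multiplies the tiny ~80-image dataset into many distinct training samples.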

Network types

We provide pretrained ResNet18 and VGG11 models that either keep the ImageNet-pretrained feature extractor frozen or fine-tune it with a very small learning rate.
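The two training regimes can be sketched with PyTorch parameter groups. The backbone and head below are hypothetical stand-ins, and the learning rates are illustrative only:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for an ImageNet-pretrained backbone plus a new head.
backbone = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten())
head = nn.Linear(8, 5)

# "Frozen" variant: the pretrained features stay fixed, only the head trains.
for p in backbone.parameters():
    p.requires_grad = False
frozen_opt = torch.optim.Adam(head.parameters(), lr=1e-3)

# "Fine-tuned" variant: the backbone also trains, with a much smaller LR.
for p in backbone.parameters():
    p.requires_grad = True
finetune_opt = torch.optim.Adam([
    {"params": backbone.parameters(), "lr": 1e-5},
    {"params": head.parameters(), "lr": 1e-3},
])
print([g["lr"] for g in finetune_opt.param_groups])  # [1e-05, 0.001]
```

A very small backbone learning rate preserves the ImageNet features while still letting them adapt slightly to the thin-section domain.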

Neural Rock Application

We provide a viewer application that allows inspection and visualization of the results. To run the application, first install Docker and Docker Compose.

Once finished, start the application by calling:

docker-compose up -d 

and navigating to the viewer at localhost/viewer.

You should be greeted by the following interface:

Viewer

Here you can switch between different carbonate classification schemes (Labelset Name), different CNN architectures (Model Selector), a frozen or fine-tuned feature extractor (Frozen Selector), and the network layer to visualize for CAM maps (Network Layer Number). You can also select the class for which to activate the network (Class Name). Finally, you can choose among all images in the dataset, each marked with whether it was used in the training set and with its ground-truth label, as identified by a carbonate geologist.

A histogram of the network's predictions is shown below the viewer.

Once you have finished working with the application, you can shut down the Docker containers:

docker-compose down

If you wish to inspect the logs while the application is running, run the following in a terminal:

docker-compose logs -t -f

which will show you a running log of the application status while you work with it.

The viewer builds on Panel, Holoviews, and Bokeh.

Docker images are hosted on Docker Hub.

API Specification

Interested users can also access the API that serves model predictions behind the scenes. Navigate your browser to http://localhost:8000/docs once the app is running to see the OpenAPI specification. The API is built on FastAPI.

Deploying on AWS

For sharing, we have used HashiCorp Terraform to provide an Infrastructure-as-Code setup that deploys a worker on AWS EC2. This reduces the manual work of spinning up a machine that serves the model.
Integrating an Ansible Playbook could be considered future work.

Model Training

To load a notebook for training a model in Google Colab, follow this link:
Open In Colab

Update May 2021: There seems to be an issue with Colab crashing due to an incompatibility with PyTorch Lightning. Training of individual models can also be performed via train/train.py.

We have made use of Weights And Biases to organize all our ML experiments. The dashboard for all model training runs executed as a sweep can be found here.

Weights&Biases Dashboard

To make training on Google Colab efficient, we preload the entire dataset onto the GPU so as to keep hard-disk and cloud-storage latency to a minimum.
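Because the whole dataset is only ~80 images, it fits in GPU memory as a single tensor. A minimal sketch of this preloading pattern (tensor sizes and batch logic are illustrative, and the code falls back to CPU when no GPU is present):

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Hypothetical sizes: the entire ~80-image dataset lives on the device as one
# tensor, so every epoch reads from device memory, never from disk or cloud.
images = torch.rand(80, 3, 224, 224, device=device)
labels = torch.randint(0, 5, (80,), device=device)

def batches(batch_size=16):
    perm = torch.randperm(len(images), device=device)  # reshuffle each epoch
    for i in range(0, len(images), batch_size):
        idx = perm[i:i + batch_size]
        yield images[idx], labels[idx]  # indexing stays on-device

n_batches = sum(1 for _ in batches())
print(n_batches)  # 5
```

With the data resident on the device, each epoch involves no host-to-device transfers at all, which matters when Colab's storage I/O is the bottleneck.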

Dataset and Weights

The dataset and model weights will be released in the coming weeks, but they are included in the Docker images, so you are ready to run if you wish to play with the application locally.

Dataset Augmentation

We make use of heavy dataset augmentation to ensure the network focuses on textural features. In particular, we perform colorspace jittering in HSV space as a data augmentation. Here is a batch of images as the network sees them at training time: Training Images

At prediction time we only crop and resize the images, without any color jittering, so as to preserve the color distribution of the image dataset. Here is a batch of images as seen during the validation step: Validation Images
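A minimal sketch of training-time HSV jitter, using only the standard library; the jitter ranges and function name are assumptions, not the repository's actual augmentation pipeline:

```python
import colorsys
import random

random.seed(0)

def hsv_jitter(pixels, max_hue_shift=0.05, max_sat_scale=0.2):
    """Jitter a list of (r, g, b) floats in HSV space (training-time only).

    Hue shifts and saturation scaling change color but leave texture intact,
    which pushes the network toward textural rather than color cues.
    """
    dh = random.uniform(-max_hue_shift, max_hue_shift)
    ds = 1.0 + random.uniform(-max_sat_scale, max_sat_scale)
    out = []
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        h = (h + dh) % 1.0                  # shift hue, wrapping around
        s = min(max(s * ds, 0.0), 1.0)      # scale saturation, clamp to [0, 1]
        out.append(colorsys.hsv_to_rgb(h, s, v))
    return out

jittered = hsv_jitter([(0.8, 0.4, 0.2), (0.1, 0.5, 0.9)])
print(len(jittered))  # 2
```

In practice this kind of jitter is applied per image (e.g. via torchvision's ColorJitter) rather than per pixel list, but the HSV mechanics are the same.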

Future Work

Some initial testing has been done to incorporate Captum to provide further model-interpretability methods for CNNs, but there was no time left in the project to complete this.

In terms of deployment, there is much room for improvement, as the application currently does not "scale".
Building a well-scaling machine-learning application beyond the "How to deploy your sklearn model on AWS Lambda" tutorial is a non-trivial task, especially if you cannot make use of good inference libraries that take care of much of that work for you. In our case, RAM requirements are quite high because backpropagation is needed to obtain the CAM maps, which places a special burden on the deployment infrastructure.
Nevertheless, one could design a better system that scales out the API using AWS ECS or similar approaches, maybe even Lambda-type functions.
Definitely something to learn for the future :)

Unit and Integration Tests: The cake was a lie...

Data Acknowledgment

This research used samples and data provided by the following dissertation:

Baechle, Gregor (2009): "Effects of pore structure on velocity and permeability in carbonate rocks" dissertation at Mathematisch-Naturwissenschaftliche Fakultät of Eberhard Karls University Tuebingen. http://hdl.handle.net/10900/49698

The image data for the PhD has been acquired while conducting research at the University of Miami, Comparative Sedimentology Laboratory.

Credit and Thanks

If you find this useful feel free to credit where appropriate.
A detailed publication on the outcomes of our findings is in the works.

We also wish to thank the organizers of the Full Stack Deep Learning Course for an excellent programme and for providing an incentive to create and share this work.

Libraries and Articles that have contributed to this repository:

Full Stack Deep Learning, pytorch-grad-cam by JacobGil, Terraform with Nana, Distill.Pub, PyTorch, Google Colab, PyTorch Lightning, Weights And Biases, Captum, Panel, Holoviews, Bokeh
