ZeroCostDL4Mic_VirtualMultiplexing

Overview

This repository contains the code as well as the tool generated for the project 'User-friendly deep learning-based morphometric unmixing of multiplex 3D imaging data'.

It contains:

  • ZeroCostDL4Mic-VirtualMultiplexing.
  • Code to generate training and testing data from CZI-format image datasets.
  • Deep learning training and test examples.

ZeroCostDL4Mic-VirtualMultiplexing is a tool for generating and using deep learning models for signal unmixing in fluorescence microscopy imaging. It is implemented in Google Colab and wrapped in a user-friendly interface so that anyone can use it.

In the tool, DataGenerator.py is used to generate data from a CZI file, and the deep learning approach used for virtual multiplexing is based on pix2pix.
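As a rough illustration of this step, the minimal sketch below shows how paired source/target patches could be cut from a CZI file for pix2pix-style training. It is not the repository's DataGenerator.py: the `czifile` and `tifffile` packages, the axis order after squeezing, and the channel indices are all assumptions and should be checked against the actual acquisition.

```python
# Minimal sketch (not the repository's DataGenerator.py): read channels from a CZI
# file and cut matching patches from a mixed-signal channel and a target channel.
# Assumes the `czifile` and `tifffile` packages and a (C, Z, Y, X) layout after
# squeezing; adjust the indexing to the real acquisition.
import numpy as np
import czifile
import tifffile

def extract_pairs(czi_path, mixed_channel, target_channel, patch_size=256):
    """Return a list of (mixed, target) patch pairs of size patch_size x patch_size."""
    img = np.squeeze(czifile.imread(czi_path))   # assumed shape: (C, Z, Y, X)
    mixed = img[mixed_channel]
    target = img[target_channel]
    pairs = []
    for z in range(mixed.shape[0]):
        for y in range(0, mixed.shape[1] - patch_size + 1, patch_size):
            for x in range(0, mixed.shape[2] - patch_size + 1, patch_size):
                pairs.append((mixed[z, y:y + patch_size, x:x + patch_size],
                              target[z, y:y + patch_size, x:x + patch_size]))
    return pairs

# Hypothetical usage: channel 2 (open detector, mixed signal) as source,
# channel 0 (CDH1) as target, saved as TIFF pairs for the notebook.
# pairs = extract_pairs("dataset.czi", mixed_channel=2, target_channel=0)
# for i, (src, tgt) in enumerate(pairs):
#     tifffile.imwrite(f"source/patch_{i:04d}.tif", src)
#     tifffile.imwrite(f"target/patch_{i:04d}.tif", tgt)
```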

How to use the ZeroCostDL4Mic notebook?

The ZeroCostDL4Mic-VirtualMultiplexing notebook can be opened directly from GitHub into Colab by clicking on the link. To use it, you must first create a local copy in your Google Drive. Once you have your copy, follow the instructions in the notebook. A Google Drive account is required to store the copy of the notebook as well as the files that you are going to use.

Access the tool here: ZeroCostDL4Mic-VirtualMultiplexing

User general workflow

Once you have copied the notebook to your Google Drive account, you can follow the general pipeline shown below:

What steps do you need to follow in the notebook?

If you are not sure which steps to follow in the notebook, answer the questions in the decision tree below to see which steps you need to follow.

Want to see a short demonstration?

Click here to access a short video demonstration of the tool.

Models

Three different pix2pix models were trained for virtual multiplexing with the same hyperparameter configuration and the same CZI file. They differ in the approach followed to generate the mixed-signal data from the CZI file.

Model 1

In this model the mixed-signal data comes from the third channel of the CZI file, an open-detector channel that captures the signals of channel 1 (CDH1) and channel 2 (KI67).

Model 2

In this model the mixed signal comes from a synthetic approach in which the mixed signal is generated computationally as an equal proportional combination of the signal of channel 1 (CDH1) and channel 2 (KI67).

Model 3

In this model the mixed signal comes from a weighted approach in which the mixed signal is generated computationally by combining the signal of channel 1 (CDH1), with a contribution of 40%, and channel 2 (KI67), with a contribution of 60%.
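A minimal sketch of the synthetic mixing used for Models 2 and 3, assuming it is a pixel-wise weighted sum of the two channels (the exact formulation in the repository may differ). The function name and weights default are illustrative.

```python
# Minimal sketch of synthetic signal mixing: a pixel-wise weighted sum of the
# CDH1 and KI67 channels. Assumes integer-typed microscopy data (e.g. uint16).
import numpy as np

def mix_channels(cdh1, ki67, w_cdh1=0.5, w_ki67=0.5):
    """Combine two single-channel images into one synthetic mixed-signal image."""
    mixed = w_cdh1 * cdh1.astype(np.float32) + w_ki67 * ki67.astype(np.float32)
    # Clip back to the valid range of the original integer dtype.
    return np.clip(mixed, 0, np.iinfo(cdh1.dtype).max).astype(cdh1.dtype)

# Model 2: equal proportions
# mixed = mix_channels(cdh1, ki67)
# Model 3: weighted combination (40% CDH1, 60% KI67)
# mixed = mix_channels(cdh1, ki67, w_cdh1=0.4, w_ki67=0.6)
```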

Predictions on real data

The trained models have been used for virtual multiplexing on a real dataset that contains the mixed signal of the CDH1 and KI67 markers.

Remarks

The datasets, models, and generated output are not included in this repository since they are too large for GitHub; instead, they are provided as links to Google Drive.