
Deep Diagnosis

This is a research-oriented project in which we use machine learning models for image classification tasks. We use a convolutional neural network (CNN) for image analysis and classification, built on a standard architecture widely used in the industry: the Inception model.

The model architecture can be found here => Inception Model
We've trained the model on the respective datasets and pickled the trained networks for easy use. Information regarding the datasets can be found in the References section.
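For context, a minimal sketch of loading and querying a retrained Inception graph with TensorFlow 1.x is shown below. The file names and tensor names ("retrained_graph.pb", "final_result:0", etc.) are assumptions borrowed from the TensorFlow image-retraining tutorial, not the repo's actual code.

# minimal sketch, assuming a frozen Inception graph retrained with the
# TensorFlow image-retraining tutorial; names are assumptions, not this repo's
import tensorflow as tf

def load_graph(graph_path):
    # read the serialized GraphDef and import it into a fresh graph
    with tf.gfile.GFile(graph_path, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    graph = tf.Graph()
    with graph.as_default():
        tf.import_graph_def(graph_def, name="")
    return graph

def classify(image_path, graph, labels):
    # feed raw JPEG bytes to the graph and read the softmax output
    image_data = tf.gfile.GFile(image_path, "rb").read()
    with tf.Session(graph=graph) as sess:
        softmax = sess.graph.get_tensor_by_name("final_result:0")
        preds = sess.run(softmax, {"DecodeJpeg/contents:0": image_data})[0]
    best = preds.argmax()
    return labels[best], preds[best]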

Contents

Requirements

  • Python 3.X.X
  • Flask
  • TensorFlow

How to get it working?

The Core

  1. The core part of the project is simply a program that loads the trained model and passes an image through it for classification.
  2. To run the core, simply clone the repo and copy the "predict.py" file from the root into your project folder.
  3. Download the trained model pickles from the following link => Trained Models
  4. Place the "trained_nets_Mumbai_hackathon" folder into your project directory.
  5. Now simply fire up a Python shell or write the following Python program.
  • Import the module
import predict as pred
  • Now we can simply call the function below to classify a skin lesion image as either benign or malignant.
# method for classifying skin lesions
# prints a string stating whether the lesion is malignant (cancerous) or benign (non-cancerous) with a percent confidence
pred.classifySkinLesion(image)
  • Or we can classify the severity of retinopathy using a clinical image of the patient's retina.
# method for classifying diabetic retinopathy
# prints a string stating the severity of diabetic retinopathy (Normal, Moderate, Severe) with a percent confidence
pred.classifyDiabeticRetinopathy(image)
  6. The images should be in the same folder as "predict.py", and their file names should be passed as an argument to the functions, as in the sketch below.
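Putting the steps above together, a complete usage script might look like the following sketch; the file names "lesion.jpg" and "retina.jpg" are placeholders for your own images placed next to predict.py.

# minimal usage sketch; "lesion.jpg" and "retina.jpg" are placeholder file names
import predict as pred

# prints whether the lesion is benign or malignant with a percent confidence
pred.classifySkinLesion("lesion.jpg")

# prints the retinopathy grade (Normal, Moderate, Severe) with a percent confidence
pred.classifyDiabeticRetinopathy("retina.jpg")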

The Web App

  1. Running the web app is even simpler. Just take the trained_nets folder that we downloaded in the section above and place it in the directory "Web/Backend/".
  2. In that directory you'll find a main.py file; this is the file that launches our Flask app (a simplified sketch of such an entry point is shown after this list).
  3. Simply open a command line and type in:
python main.py
  4. The app will start locally and the command line will show the URL to access the app.
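For illustration only, a heavily simplified Flask entry point in the spirit of main.py is sketched below; the actual routes, templates, and upload handling in Web/Backend/main.py will differ.

# simplified illustration of a Flask entry point; the route name and upload
# handling are assumptions, not the repo's actual main.py
from flask import Flask, request
import predict as pred

app = Flask(__name__)

@app.route("/classify/skin", methods=["POST"])
def classify_skin():
    # save the uploaded image locally, then classify it by file name
    image = request.files["image"]
    image.save(image.filename)
    pred.classifySkinLesion(image.filename)  # prints the result to the server console
    return "classification printed to the server console"

if __name__ == "__main__":
    # start the development server locally; Flask prints the URL to the console
    app.run(debug=True)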

Integration

  • We've integrated the trained models with the web app explored in the previous section and with an Android application.
  • Screenshots of the Android app are given below.
  • The source of the app can be found in the repo under the "Android" folder.

Screenshots (Android)


Screenshots (Web App)

References

  • Download the trained model pickles from the following link => Trained Models
  • Want to train your own Inception model? Check out this link : Training Inception!
  • We used two datasets for training our models.
    • We used the ISIC dataset for the skin lesion classification task. It is a set of 5 different datasets, of which we used 4 and skipped the one named "ISIC_SONIC-1". Even then the class proportions were very skewed (around 1500 images in the benign category and only 600 in the malignant category), so we wrote a Python script to flip all the images in the malignant folder, doubling its size and adding variation (a sketch of such a flip script is shown after this list).

      We achieved 79% accuracy with this model.

    • The second dataset that we used was from this Kaggle competition. We used only 6000 images from this due to time constraints.

      This model gave us an accuracy of 68%.

  • Paper that inspired this project:
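The flip-based augmentation mentioned in the ISIC note above only takes a few lines; a sketch using Pillow is given below. The folder name "malignant" and the "_flipped" output suffix are assumptions, and the repo's own script may differ.

# sketch of the horizontal-flip augmentation described above;
# the folder name "malignant" and the "_flipped" suffix are assumptions
import os
from PIL import Image

src_dir = "malignant"
for name in os.listdir(src_dir):
    base, ext = os.path.splitext(name)
    img = Image.open(os.path.join(src_dir, name))
    flipped = img.transpose(Image.FLIP_LEFT_RIGHT)
    # write the mirrored copy next to the original, doubling the folder size
    flipped.save(os.path.join(src_dir, base + "_flipped" + ext))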

Authors

License

This project is licensed under the MIT License.
