NIvsCG

Distinguishing Between Natural and Computer-Generated Images Using Convolutional Neural Networks



Preparing the Workspace

Make sure you have the project structured as follows:

├── checkpoints
├── logs
├── models
├── results
├── src
│   ├── model.py
│   ├── voting.py
│   └── patchesTestAcc.py
├── utils
│   ├── mps
│   ├── imageMpsCrop.m
│   ├── makePatches.m
│   └── imageNamesFileMaker.py
└── datasets
    ├── full
    │   ├── personal
    │   │   ├── 000001.jpg
    │   │   └── ...
    │   └── prcg
    │       ├── 000001.jpg
    │       └── ...
    └── patches
        ├── train
        │   ├── personal
        │   │   ├── patch-001.bmp
        │   │   └── ...
        │   └── prcg
        │       ├── patch-001.bmp
        │       └── ...
        ├── valid
        │   ├── personal
        │   │   ├── patch-001.bmp
        │   │   └── ...
        │   └── prcg
        │       ├── patch-001.bmp
        │       └── ...
        ├── test
        │   ├── personal
        │   │   ├── patch-001.bmp
        │   │   └── ...
        │   └── prcg
        │       ├── patch-001.bmp
        │       └── ...
        └── test-majority-voting
            ├── all
            │   ├── patch-001.bmp
            │   └── ...
            └── filenames.txt
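To set this structure up quickly, a small helper along these lines can create the empty skeleton. The directory names come from the tree above; the `scaffold` helper itself is hypothetical and not part of the repo:

```python
from pathlib import Path

# Leaf directories of the project layout described above.
DIRS = [
    "checkpoints", "logs", "models", "results", "src", "utils/mps",
    "datasets/full/personal", "datasets/full/prcg",
    "datasets/patches/train/personal", "datasets/patches/train/prcg",
    "datasets/patches/valid/personal", "datasets/patches/valid/prcg",
    "datasets/patches/test/personal", "datasets/patches/test/prcg",
    "datasets/patches/test-majority-voting/all",
]

def scaffold(root="."):
    """Create the empty directory skeleton under `root`."""
    for d in DIRS:
        Path(root, d).mkdir(parents=True, exist_ok=True)
```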

Dataset

We used the Personal and PRCG datasets for our two classes. The images of each class were split into a 3:1:1 ratio (train:valid:test), taking each image's category into account: PRCG images fall into multiple categories (architecture, nature, object, etc.), so we split the images such that each set (train, valid, test) contains images from every category. We then made 200 crops of each image using the MPS algorithm to obtain the patches. All of this is done by the makePatches.m script.

  • Google and PRCG datasets can be downloaded here.
  • Personal dataset can be downloaded here.
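The category-aware 3:1:1 split could be sketched roughly as follows. This is illustrative only: in this repo the splitting and cropping are done inside makePatches.m, and the `split_by_category` name is hypothetical:

```python
import random
from collections import defaultdict

def split_by_category(images, seed=0):
    """Split (filename, category) pairs 3:1:1 into train/valid/test,
    keeping every category represented in each split."""
    rng = random.Random(seed)
    by_cat = defaultdict(list)
    for name, cat in images:
        by_cat[cat].append(name)
    splits = {"train": [], "valid": [], "test": []}
    for cat, names in by_cat.items():
        rng.shuffle(names)
        n = len(names)
        n_train = round(n * 3 / 5)   # 3 parts of 5
        n_valid = round(n / 5)       # 1 part of 5
        splits["train"] += names[:n_train]
        splits["valid"] += names[n_train:n_train + n_valid]
        splits["test"]  += names[n_train + n_valid:]
    return splits
```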

Training the Model

The code for the CNN design described by the paper can be found in model.py. Image patches used as training and validation data have to be cropped using the MPS algorithm implemented here.
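For orientation, a minimal Keras sketch of a two-class patch classifier is shown below. The layer sizes, the 96×96 patch size, and the `build_patch_cnn` name are placeholder assumptions, not the architecture from the paper; see model.py for the design actually used:

```python
from tensorflow.keras import layers, models

def build_patch_cnn(input_shape=(96, 96, 3)):
    """Toy patch classifier: conv/pool stacks into a softmax over
    the two classes (personal vs. prcg). Hypothetical sizes."""
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(2, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```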

Majority Voting

The code for the majority voting algorithm is in voting.py. A trained .h5 model from model.py is needed in order to run the majority voting algorithm and get the test accuracy.
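The voting step itself amounts to picking the most common per-patch prediction for an image. A minimal sketch (the `majority_vote` helper is a hypothetical name; see voting.py for the repo's implementation):

```python
from collections import Counter

def majority_vote(patch_labels):
    """Return the most frequent predicted class among an image's
    patch-level predictions."""
    return Counter(patch_labels).most_common(1)[0][0]
```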

Results

So far we're able to achieve 97.18% accuracy on image classification (up from earlier results of 95% and 96.88%). We take 200 patches covering the whole of each test image using the MPS algorithm, and take the majority vote of these 200 patches to decide that test image's class.

You can download the trained model here.

Contributors

Abdulellah Abualshour

Abdulmajeed Aljaloud
