Colorising black and white photos using deep learning in PyTorch. The method follows the approach described in the paper by Baldassarre et al.
The model consists of an encoder and a decoder. Additionally, the image is fed into Inception-ResNet-v2, which provides information about what can be found inside the image (e.g. the sea, a cat, grass). The output of the Inception model is fused with that of the encoder, giving the decoder more information for colorising appropriately.
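The fusion step described above can be sketched as follows. This is a minimal illustration, not the repo's actual code: the layer sizes (256 encoder channels, a 1000-dimensional embedding) are assumptions loosely following the paper. The embedding vector is tiled across every spatial position of the encoder output, concatenated along the channel axis, and mixed with a 1x1 convolution:

```python
import torch
import torch.nn as nn

class FusionLayer(nn.Module):
    """Fuse the encoder feature map with a global image embedding.

    Hypothetical sketch: channel counts are assumptions, not the
    repository's actual configuration.
    """
    def __init__(self, enc_channels=256, embed_dim=1000, out_channels=256):
        super().__init__()
        # 1x1 convolution mixes encoder features with the tiled embedding
        self.mix = nn.Conv2d(enc_channels + embed_dim, out_channels, kernel_size=1)

    def forward(self, enc_features, embedding):
        b, _, h, w = enc_features.shape
        # Tile the (B, embed_dim) vector to (B, embed_dim, H, W) so each
        # spatial location sees the same global semantic information
        tiled = embedding[:, :, None, None].expand(b, embedding.shape[1], h, w)
        fused = torch.cat([enc_features, tiled], dim=1)
        return torch.relu(self.mix(fused))

# Example: fuse 28x28 encoder features with an Inception-style embedding
enc = torch.randn(2, 256, 28, 28)
emb = torch.randn(2, 1000)
out = FusionLayer()(enc, emb)
print(out.shape)  # torch.Size([2, 256, 28, 28])
```

The fused feature map has the same spatial resolution as the encoder output, so the decoder can consume it directly.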
In the images folder, you will find some images of cats and dogs that the model can be trained with. Unfortunately I have had varying success with this model: the results are often brown, though there are occasional goodies. Here are a few of the better ones:
Perhaps a larger dataset with more varied images would work better. If you get any better results with the model, I would be eager to hear how!