Neural Image Caption Generators
Updated Aug 4, 2019
Image captioning model, PyTorch implementation
Show, Attend, and Tell, modified for use on the UIT-ViIC dataset.
Encoder-Decoder CNN-LSTM Model with an attention mechanism for image captioning. Trained using the Microsoft COCO Dataset.
A Keras implementation of the "Show, Attend and Tell" paper
Mixed vision-language Attention Model that gets better by making mistakes
Persian implementation of "Show, Attend and Tell: Neural Image Caption Generation with Visual Attention"
TensorFlow Implementation of paper: "Show, Attend and Tell"
PyTorch re-implementation of several papers on image captioning
Model based on Kelvin Xu et al., Show, Attend and Tell: Neural Image Caption Generation with Visual Attention (https://arxiv.org/pdf/1502.03044.pdf)
An image captioning model that is inspired by the Show, Attend and Tell paper (https://arxiv.org/abs/1502.03044) and the Sequence Generative Adversarial Network paper (https://arxiv.org/abs/1609.05473)
Caption generator for live camera feed
An implementation of the Show, Attend and Tell paper in TensorFlow, for the OpenAI Im2LaTeX suggested problem
Implemented the image caption generation method proposed in the Show, Attend, and Tell paper using the Fastai framework to describe the content of images. Achieved a BLEU score of 24 with a beam search size of 5. Designed a web application for model deployment using the Flask framework.
CaptionBot: sequence-to-sequence modelling where the encoder is a CNN (ResNet-50) and the decoder is an LSTMCell with a soft attention mechanism
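Several of the repositories above combine a CNN encoder with an LSTM decoder via soft attention, as in Show, Attend and Tell. The sketch below illustrates that soft attention step in PyTorch; it is a minimal, illustrative module (the class name, dimensions, and additive scoring form are assumptions for demonstration, not taken from any specific repository listed here).

```python
import torch
import torch.nn as nn

class SoftAttention(nn.Module):
    """Additive soft attention over CNN feature maps, in the spirit of
    Show, Attend and Tell. Hypothetical sketch: names and dims are
    illustrative, not from any repo above."""

    def __init__(self, feat_dim, hidden_dim, attn_dim):
        super().__init__()
        self.feat_proj = nn.Linear(feat_dim, attn_dim)      # project encoder features
        self.hidden_proj = nn.Linear(hidden_dim, attn_dim)  # project decoder state
        self.score = nn.Linear(attn_dim, 1)                 # scalar score per region

    def forward(self, features, hidden):
        # features: (batch, num_regions, feat_dim) from the CNN encoder
        # hidden:   (batch, hidden_dim) from the LSTM decoder
        scores = self.score(torch.tanh(
            self.feat_proj(features) + self.hidden_proj(hidden).unsqueeze(1)
        )).squeeze(-1)                                      # (batch, num_regions)
        alpha = torch.softmax(scores, dim=1)                # attention weights
        context = (alpha.unsqueeze(-1) * features).sum(1)   # weighted context vector
        return context, alpha
```

At each decoding step, the context vector is concatenated with the word embedding and fed to the LSTMCell, and the weights `alpha` indicate which image regions the model attends to while generating the next word.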
Keras implementation of the "Show, Attend and Tell" paper
A PyTorch implementation of the paper Show, Attend and Tell: Neural Image Caption Generation with Visual Attention