image-captioning
Here are 775 public repositories matching this topic...
PyTorch implementation of NeuralTwinsTalk, presented at IEEE HCCAI 2020.
Updated Sep 26, 2022 - Python
We implement an encoder-decoder architecture using CNNs and sequential models to generate image captions.
Updated May 12, 2021 - Jupyter Notebook
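The encoder-decoder captioning loop that several of these repositories implement can be sketched in a few lines of NumPy. This is a minimal, untrained illustration: a random projection stands in for the CNN encoder, a vanilla RNN cell stands in for the LSTM decoder, and the vocabulary, weight names, and dimensions are all illustrative, not taken from any repo above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary; <start>/<end> tokens frame every caption.
vocab = ["<start>", "<end>", "a", "dog", "runs"]
V, D, H = len(vocab), 8, 16  # vocab size, image-feature dim, hidden dim

# "Encoder": stand-in for a CNN -- a fixed random projection of a
# flattened 8x8 image to a D-dimensional feature vector.
W_enc = rng.normal(size=(D, 64))

def encode(image):
    return np.tanh(W_enc @ image.ravel())

# "Decoder": a single vanilla RNN cell plus an output projection.
W_xh = rng.normal(size=(H, V)) * 0.1   # one-hot token -> hidden
W_hh = rng.normal(size=(H, H)) * 0.1   # hidden -> hidden
W_fh = rng.normal(size=(H, D)) * 0.1   # image feature conditions the state
W_hy = rng.normal(size=(V, H)) * 0.1   # hidden -> vocabulary logits

def greedy_caption(image, max_len=10):
    feat = encode(image)
    h = np.zeros(H)
    token = vocab.index("<start>")
    caption = []
    for _ in range(max_len):
        x = np.eye(V)[token]                 # one-hot previous token
        h = np.tanh(W_xh @ x + W_hh @ h + W_fh @ feat)
        token = int(np.argmax(W_hy @ h))     # greedy: most likely next word
        if vocab[token] == "<end>":
            break
        caption.append(vocab[token])
    return caption

print(greedy_caption(rng.normal(size=(8, 8))))
```

With random weights the output words are arbitrary; training replaces the random projections with a real CNN (e.g. a pretrained backbone) and an LSTM fitted on paired image-caption data, but the decode loop keeps this shape.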
Image Captioning using Recurrent Neural Networks
Updated Apr 5, 2024 - Jupyter Notebook
A deep learning model that generates captions for an image using a CNN and an RNN.
Updated Mar 26, 2021 - Jupyter Notebook
Project 2 of Udacity's Computer Vision Nanodegree
Updated Jan 9, 2021 - Jupyter Notebook
Generate a caption for any uploaded image using machine learning.
Updated Sep 2, 2021 - Jupyter Notebook
Official PyTorch implementation of the paper "LG_MLFormer: Local and Global MLP for Image Captioning"
Updated May 5, 2022
Image-to-sequence model that generates a caption for a given image, built from a pretrained InceptionNet encoder combined with an LSTM decoder.
Updated Jul 15, 2022 - Python
NTUA ECE Neural Networks Source Codes
Updated Jul 8, 2022 - Jupyter Notebook
Automatic image captioning with PyTorch
Updated Nov 23, 2022 - Jupyter Notebook
Final Project, Spring 2023 Big Data Technologies (CSP-554-03): Neural Image Caption Generation with Visual Attention
Updated May 1, 2023 - Jupyter Notebook
2023 GSA Informatics, Image Captioning: 박진재, 강현아, 서진현
Updated Oct 16, 2023 - Jupyter Notebook
Koc University ELEC 491: Electrical and Electronic Engineering Design
Updated Jan 13, 2024 - Jupyter Notebook
Image classification with kNN, SVM, MLP, and CNN; captioning with RNN and Transformer; a GAN on MNIST; self-supervised learning on unlabeled data.
Updated Jan 13, 2024 - Jupyter Notebook
An unofficial Torch implementation of J. Lu, C. Xiong, et al., "Knowing When to Look: Adaptive Attention via a Visual Sentinel for Image Captioning" (2017), with deformable adaptive attention.
Updated Jul 24, 2023 - Jupyter Notebook
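For context, the visual-sentinel mechanism of that paper can be sketched in NumPy: attention is computed over K spatial CNN features plus one extra "sentinel" slot, and the sentinel's weight gates how much the decoder falls back on language-only context instead of looking at the image. This is a simplified single-step sketch; the weight names and dimensions are illustrative, not from the repo.

```python
import numpy as np

rng = np.random.default_rng(1)
K, D, H = 49, 8, 16  # spatial regions, feature dim, decoder hidden dim

feats = rng.normal(size=(K, D))  # spatial CNN features v_1..v_K
h = rng.normal(size=(H,))        # current decoder hidden state
s = rng.normal(size=(D,))        # visual sentinel (derived from LSTM memory)

W_v = rng.normal(size=(H, D)) * 0.1  # projects region features
W_g = rng.normal(size=(H, H)) * 0.1  # projects the hidden state
W_s = rng.normal(size=(H, D)) * 0.1  # projects the sentinel
w = rng.normal(size=(H,)) * 0.1      # scores each candidate

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# One attention score per region, plus one for the sentinel.
z = np.array([w @ np.tanh(W_v @ feats[k] + W_g @ h) for k in range(K)])
z_s = w @ np.tanh(W_s @ s + W_g @ h)
alpha = softmax(np.append(z, z_s))   # K+1 weights summing to 1

beta = alpha[-1]                       # sentinel gate in [0, 1]
c = (alpha[:-1, None] * feats).sum(0)  # visual context vector
c_hat = beta * s + (1 - beta) * c      # adaptive context fed to the word predictor

print(beta, c_hat.shape)
```

When beta is near 1 the model relies on the sentinel (useful for non-visual words like "of" or "the"); when near 0 it attends to image regions, which is the "knowing when to look" behavior the paper's title refers to.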
Image Captioning project for CSCI 585 (Computer Vision)
Updated Dec 5, 2023 - Jupyter Notebook
[INLG2023] The High-Level (HL) dataset is a Vision and Language (V&L) resource aligning object-centric descriptions from COCO with high-level descriptions crowdsourced along 3 axes: scene, action, rationale.
Updated Nov 13, 2023
My solutions to the class assignments
Updated Sep 19, 2017 - Jupyter Notebook