PyTorch implementation for Self-supervised Modal and View Invariant Feature Learning
Updated Jul 5, 2020
Lab course (Fachpraktikum) project for a Human-Computer Interaction course
ViTAA: Visual-Textual Attributes Alignment in Person Search by Natural Language
VNEL (Visual Named Entity Linking) is a brand-new task that takes a pure image as input and performs entity linking on it, focusing on CBIR, cross-modal retrieval, and multimodal fusion.
Flask Web App for ES-654 Machine Learning course project
Image-Text Matching Model Zoo
The Unified Code of Image-Text Retrieval for Further Exploration.
An attempt to transfer sentences into image style.
Course project for 198:536 at Rutgers University: cross-modal retrieval of food recipes given recipe images, ingredients, and instructions, using the Recipe1M dataset.
The code for the paper "GMMFormer: Gaussian-Mixture-Model Based Transformer for Efficient Partially Relevant Video Retrieval" (AAAI'24)
PyTorch code for cross-modal-retrieval on Flickr8k/30k using Bert and EfficientNet
An intentionally simple Image to Food cross-modal search. Created by Prithiviraj Damodaran.
Tensorflow implementation of UDIH
Code for the paper "Sentiment-Oriented Metric Learning for Text-to-Image Retrieval", ECIR'21
This repository contains the code for the paper "Extending CLIP for Category-to-image Retrieval in E-commerce" published at ECIR 2022.
Implementation of "VSE++: Improving Visual-Semantic Embeddings with Hard Negatives" in Tensorflow.
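Several of the repos above build on the VSE++ objective: a triplet ranking loss over a shared image-text embedding space that, unlike the standard sum-over-negatives formulation, penalizes only the hardest negative in the batch. A minimal PyTorch sketch of that max-of-hinges loss (function name and margin value are illustrative, not taken from any listed repo):

```python
import torch

def vse_pp_loss(img_emb, txt_emb, margin=0.2):
    """Max-of-hinges (hardest-negative) triplet loss in the style of VSE++.

    img_emb, txt_emb: L2-normalized embeddings of shape (batch, dim),
    where row i of each tensor forms a matching image-text pair.
    """
    scores = img_emb @ txt_emb.t()        # pairwise cosine similarities
    diag = scores.diag().view(-1, 1)      # positive-pair scores

    # hinge cost of every negative caption per image, and vice versa
    cost_txt = (margin + scores - diag).clamp(min=0)      # image -> captions
    cost_img = (margin + scores - diag.t()).clamp(min=0)  # caption -> images

    # mask out the positive pairs on the diagonal
    mask = torch.eye(scores.size(0), dtype=torch.bool)
    cost_txt = cost_txt.masked_fill(mask, 0)
    cost_img = cost_img.masked_fill(mask, 0)

    # VSE++: keep only the hardest negative in each direction
    return cost_txt.max(dim=1)[0].mean() + cost_img.max(dim=0)[0].mean()
```

With perfectly separated embeddings the loss is zero; when all embeddings collapse to the same point, each direction contributes the full margin.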
This repository contains the code for the paper "Object-centric vs. Scene-centric Image-Text Cross-modal Retrieval: A Reproducibility Study" published at ECIR 2023.
[TIP2024] The code of “Deep Boosting Learning: A Brand-new Cooperative Approach for Image-Text Matching”
PyTorch code for the paper "Complementarity is the king: A multi-modal and multi-grained hierarchical semantic enhancement network for cross-modal retrieval"