🏄 Scalable embedding, reasoning, ranking for images and sentences with CLIP
Updated Jan 23, 2024 · Python
X-modaler is a versatile and high-performance codebase for cross-modal analytics (e.g., image captioning, video captioning, vision-language pre-training, visual question answering, visual commonsense reasoning, and cross-modal retrieval).
TOMM2020 Dual-Path Convolutional Image-Text Embedding 🐾 https://arxiv.org/abs/1711.05535
The Paper List of Large Multi-Modality Model, Parameter-Efficient Finetuning, Vision-Language Pretraining, Conventional Image-Text Matching for Preliminary Insight.
[AAAI2021] The code of “Similarity Reasoning and Filtration for Image-Text Matching”
PyTorch code for BagFormer: Better Cross-Modal Retrieval via bag-wise interaction
Deep Supervised Cross-modal Retrieval (CVPR 2019, PyTorch Code)
Polysemous Visual-Semantic Embedding for Cross-Modal Retrieval (CVPR 2019)
Code for "Learning the Best Pooling Strategy for Visual Semantic Embedding", CVPR 2021
Official Pytorch implementation of "Probabilistic Cross-Modal Embedding" (CVPR 2021)
[CVPR 2020, Oral] "Sketch Less for More: On-the-Fly Fine-Grained Sketch Based Image Retrieval"
Scalable deep multimodal learning for cross-modal retrieval (SIGIR 2019, PyTorch Code)
Knowledge Graphs Meet Multi-Modal Learning: A Comprehensive Survey
Offline semantic Text-to-Image and Image-to-Image search on Android powered by quantized state-of-the-art vision-language pretrained CLIP model and ONNX Runtime inference engine
Source code for paper "Adversary Guided Asymmetric Hashing for Cross-Modal Retrieval".
Learning Cross-Modal Retrieval with Noisy Labels (CVPR 2021, PyTorch Code)
Unsupervised Contrastive Cross-modal Hashing (IEEE TPAMI 2023, PyTorch Code)
Official implementation of "Contrastive Audio-Language Learning for Music" (ISMIR 2022)
[NeurIPS 2022 Spotlight] Expectation-Maximization Contrastive Learning for Compact Video-and-Language Representations
Preserving Semantic Neighborhoods for Robust Cross-modal Retrieval [ECCV 2020]
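Most of the projects listed above reduce cross-modal retrieval to the same core operation: embed images and text into a shared vector space (e.g., with CLIP) and rank gallery items by cosine similarity to the query. A minimal sketch of that ranking step, with random vectors standing in for real CLIP embeddings (the function `rank_by_similarity` is illustrative, not taken from any repository above):

```python
import numpy as np

def rank_by_similarity(query_emb: np.ndarray, gallery_embs: np.ndarray) -> np.ndarray:
    """Return gallery indices sorted by cosine similarity to the query, best first."""
    q = query_emb / np.linalg.norm(query_emb)
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    sims = g @ q                 # cosine similarity of each gallery item to the query
    return np.argsort(-sims)     # indices in descending similarity order

# Toy stand-ins for CLIP embeddings: one text query, three image vectors.
rng = np.random.default_rng(0)
text_emb = rng.standard_normal(512)
image_embs = rng.standard_normal((3, 512))
ranking = rank_by_similarity(text_emb, image_embs)
print(ranking)  # image indices, most similar to the text query first
```

In a real pipeline the random vectors would be replaced by the outputs of a vision-language encoder's image and text towers; the ranking logic itself is unchanged.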