🏄 Scalable embedding, reasoning, ranking for images and sentences with CLIP
Updated Jan 23, 2024 · Python
Run OpenAI's CLIP model on iOS to search photos.
Simple implementation of OpenAI CLIP model in PyTorch.
Visual UI analysis tool
A tool for searching local images by text description, powered by Rust + candle + CLIP
[NeurIPS 2023 Oral] Quilt-1M: One Million Image-Text Pairs for Histopathology.
[ICLR2023] PLOT: Prompt Learning with Optimal Transport for Vision-Language Models
Semantic Search demo featuring UForm, USearch, UCall, and StreamLit, to visualize and retrieve from image datasets, similar to "CLIP Retrieval"
[ICCV2023] Tem-adapter: Adapting Image-Text Pretraining for Video Question Answer
The most impactful papers related to contrastive pretraining for multimodal models!
Semantic Emoji Search Plugin for FiftyOne
OpenAI's CLIP neural network
A lightweight deep learning model with a web application to answer image-based questions with a non-generative approach for the VizWiz Grand Challenge 2023, by carefully curating the answer vocabulary and adding a linear layer on top of OpenAI's CLIP model as image and text encoder
[ NeurIPS 2023 R0-FoMo Workshop ] Official Codebase for "Estimating Uncertainty in Multimodal Foundation Models using Public Internet Data"
WORK IN PROGRESS | Text to image search & Image Similarity Search using @typesense
Traverse the space of concepts with a multi-modal similarity index in FiftyOne
An implementation of a system for retrieving images from text descriptions and finding similar photos
This repository contains research work on Adversarial Robustness Analysis for Deep Models.
Flask app to perform image search using semantic matching of input text and images
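Most of the search tools listed above follow the same CLIP-style retrieval recipe: embed the query text and all candidate images into a shared vector space, L2-normalize, and rank images by cosine similarity. A minimal sketch of that ranking step, using randomly generated stand-in vectors in place of real CLIP embeddings (the function name and shapes here are illustrative assumptions, not any specific repo's API):

```python
import numpy as np

def rank_images_by_text(text_emb, image_embs):
    """Rank images by cosine similarity to a text embedding, CLIP-style."""
    t = text_emb / np.linalg.norm(text_emb)
    imgs = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    sims = imgs @ t              # cosine similarity per image, shape (n_images,)
    order = np.argsort(-sims)    # indices of best matches first
    return order, sims

rng = np.random.default_rng(0)
image_embs = rng.normal(size=(5, 512))                  # stand-ins for CLIP image embeddings
text_emb = image_embs[2] + 0.1 * rng.normal(size=512)   # a query "near" image 2
order, sims = rank_images_by_text(text_emb, image_embs)
print(order[0])  # image 2 should rank first
```

In a real pipeline the embeddings would come from a CLIP image/text encoder, and the brute-force `argsort` would typically be replaced by an approximate nearest-neighbor index (as USearch or Typesense do in the projects above).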