# Minimal user-friendly demo of OpenAI's CLIP

Screenshot

OpenAI's CLIP is a deep learning model that can estimate the "similarity" between an image and a text. This makes it possible to search for images matching a natural-language query even if your image corpus has no titles, descriptions, or keywords.

This repository includes a simple demo built on 25,000 Unsplash images and 7,685 movie images.
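The retrieval step behind such a demo can be sketched in a few lines: the image embeddings are precomputed once with CLIP, and at query time the text embedding is compared against them by cosine similarity. The snippet below illustrates only that ranking step, using tiny hand-made vectors in place of real CLIP embeddings (the filenames, dimensions, and `search` helper are illustrative, not part of this repository).

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def search(query_embedding, image_embeddings, top_k=3):
    # Rank all images by similarity to the query embedding.
    scored = [(name, cosine_similarity(query_embedding, emb))
              for name, emb in image_embeddings.items()]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]

# Toy 3-dimensional "embeddings" standing in for real CLIP vectors
# (CLIP's actual embeddings are 512-dimensional or larger).
image_embeddings = {
    "beach.jpg": [0.9, 0.1, 0.0],
    "forest.jpg": [0.1, 0.9, 0.1],
    "city.jpg": [0.0, 0.2, 0.9],
}
query = [0.8, 0.2, 0.1]  # would come from CLIP's text encoder
print(search(query, image_embeddings, top_k=1)[0][0])  # → beach.jpg
```

In the real demo, precomputing the image embeddings is what makes the search fast: only one text embedding and a batch of dot products are needed per query.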

## Usage

Click here to open the included notebook in Google Colab, sign in with your Google account, and run the first (and only) code cell.

Alternatively, an equivalent demo is available as a Hugging Face Space, which works without an account.

## Acknowledgements

This project was inspired by Unsplash Image Search by Vladimir Haltakov and Alph, The Sacred River by Travis Hoppe.