# label-studio-clip-ml-backend

This project provides a simple ML backend for Label Studio that assists you in annotating a new dataset using CLIP, specifically OWL-ViT. Because the model is trained on images paired with text captions, there is no need to train a detector such as YOLO; you only have to declare text classes, which is very convenient. For example: `["a photo of a cow", "a photo of a chicken"]`.

Demo video: demo.mp4
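
For reference, here is a minimal sketch of the kind of zero-shot detection OWL-ViT performs via the transformers library. The checkpoint name, image path, and score threshold are illustrative assumptions, not values taken from this repository, and the post-processing call may differ slightly depending on your transformers version:

```python
import torch
from PIL import Image
from transformers import OwlViTProcessor, OwlViTForObjectDetection

# Illustrative checkpoint; this repository may pin a different one.
processor = OwlViTProcessor.from_pretrained("google/owlvit-base-patch32")
model = OwlViTForObjectDetection.from_pretrained("google/owlvit-base-patch32")

image = Image.open("farm.jpg")  # hypothetical input image
texts = [["a photo of a cow", "a photo of a chicken"]]  # text classes, no training needed

inputs = processor(text=texts, images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw logits into boxes in (x0, y0, x1, y1) pixel coordinates.
target_sizes = torch.tensor([image.size[::-1]])  # (height, width)
results = processor.post_process_object_detection(
    outputs, threshold=0.1, target_sizes=target_sizes
)
for score, label, box in zip(
    results[0]["scores"], results[0]["labels"], results[0]["boxes"]
):
    print(texts[0][label], round(score.item(), 3), box.tolist())
```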

## Clone repository with submodules

git clone git@github.com:pavtiger/label-studio-clip-ml-backend.git --recursive

or just clone as usual and pull the submodules afterwards with this command:

git submodule update --init --recursive

## Installation

It is suggested to use a Python venv for installing the libraries.

Create and activate the venv:

python -m venv ./venv
source venv/bin/activate

Install the requirements:

pip install transformers  # CLIP
pip install -U -e label-studio-ml-backend  # install label studio backend
pip install redis rq  # additional libraries for the backend

## Running the backend

label-studio-ml init ml_backend --script ./main.py --force
label-studio-ml start ml_backend

The ML backend server becomes available at http://localhost:9090.

You can also specify a port for the web server:

label-studio-ml start ml_backend --port 8080 
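
For context, the script passed to `label-studio-ml init` is expected to define a model class. Below is a heavily simplified sketch of such a class, assuming the standard label-studio-ml-backend interface; the class name and result payload are illustrative, not this repository's exact code:

```python
from label_studio_ml.model import LabelStudioMLBase


class OwlViTBackend(LabelStudioMLBase):
    """Hypothetical backend skeleton; main.py in this repo holds the real logic."""

    def predict(self, tasks, **kwargs):
        predictions = []
        for task in tasks:
            # The data key ("image" here) must match your labeling config.
            image_url = task["data"]["image"]
            # Run OWL-ViT on the image and convert each detected box into a
            # Label Studio "rectanglelabels" result entry before appending.
            predictions.append({"result": [], "score": 0.0})
        return predictions
```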

## Connecting to the ML backend

Add an ML backend using the Label Studio UI:

  • In the Label Studio UI, open the project that you want to use with your ML backend.
  • Click Settings > Machine Learning.
  • Click Add Model.
  • Type a Title for the model and provide the URL for the ML backend. For example, http://localhost:9090.
  • (Optional) Type a description.
  • (Optional) Select Use for interactive preannotation. See Get interactive pre-annotations for more.
  • Click Validate and Save.

These connection instructions are taken from the Label Studio website.