Japanese Stable CLIP ViT-L/16

Input

(Image from https://images.pexels.com/photos/2253275/pexels-photo-2253275.jpeg)

Output

class_count=3
+ idx=0
  category=0[犬 ]
  prob=1.0
+ idx=1
  category=2[象 ]
  prob=0.0
+ idx=2
  category=1[猫 ]
  prob=0.0
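
The probabilities above come from a standard CLIP-style zero-shot comparison: the image embedding is matched against one text embedding per label, and the scaled cosine similarities are passed through a softmax. Below is a minimal NumPy sketch of that scoring step only; the embedding dimension, the logit scale of 100, and the random placeholder embeddings are assumptions, not values taken from this model.

import numpy as np

def zero_shot_probs(image_emb, text_embs, logit_scale=100.0):
    # L2-normalize the image embedding and one text embedding per label.
    image_emb = image_emb / np.linalg.norm(image_emb)
    text_embs = text_embs / np.linalg.norm(text_embs, axis=-1, keepdims=True)
    # Scaled cosine similarities, then a numerically stable softmax over labels.
    logits = logit_scale * (text_embs @ image_emb)
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

# Random placeholder embeddings standing in for the encoder outputs.
rng = np.random.default_rng(0)
probs = zero_shot_probs(rng.normal(size=768), rng.normal(size=(3, 768)))
for rank, idx in enumerate(np.argsort(-probs)):
    print(f"+ idx={rank}  category={idx}  prob={probs[idx]:.4f}")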

Requirements

This model requires an additional module.

pip3 install transformers
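
The transformers package is used to tokenize the Japanese text labels before they are fed to the text encoder. The sketch below shows what that step looks like, assuming the tokenizer is loaded from the stabilityai/japanese-stable-clip-vit-l-16 checkpoint on Hugging Face; the sample script may instead bundle its own tokenizer configuration, and the context length of 77 is an assumption.

from transformers import AutoTokenizer

# Assumption: tokenizer comes from the Hugging Face checkpoint; the script
# may ship its own tokenizer files alongside the ONNX models.
tokenizer = AutoTokenizer.from_pretrained("stabilityai/japanese-stable-clip-vit-l-16")

labels = ["犬", "猫", "象"]
encoded = tokenizer(labels, padding="max_length", max_length=77,
                    truncation=True, return_tensors="np")
print(encoded["input_ids"].shape)  # token ids fed to the text encoder, one row per label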

Usage

The onnx and prototxt files are downloaded automatically on the first run, so an Internet connection is required during that download.

For the sample image,

$ python3 japanese-stable-clip-vit-l-16.py

If you want to specify the input image, put the image path after the --input option.

$ python3 japanese-stable-clip-vit-l-16.py --input IMAGE_PATH

You can use the --text option (it can be given multiple times) to specify the text labels to classify the image against.
The default labels are "犬" (dog), "猫" (cat), and "象" (elephant).

$ python3 japanese-stable-clip-vit-l-16.py --text "犬" --text "猫" --text "象"

Reference

Framework

PyTorch

Model Format

ONNX opset=11

Netron

CLIP-ViT-L16-image.onnx.prototxt
CLIP-ViT-L16-text.onnx.prototxt
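
As a complement to viewing the graphs in Netron, the downloaded encoders can be inspected directly with onnxruntime. This is a sketch only: the .onnx file names are inferred from the prototxt names above, and the printed tensor names and shapes are whatever the exported models declare.

import onnxruntime as ort

# File names inferred from the prototxt listing above (an assumption;
# adjust them to whatever the script actually downloads).
for path in ("CLIP-ViT-L16-image.onnx", "CLIP-ViT-L16-text.onnx"):
    session = ort.InferenceSession(path)
    print(path)
    for tensor in session.get_inputs():
        print("  input :", tensor.name, tensor.shape, tensor.type)
    for tensor in session.get_outputs():
        print("  output:", tensor.name, tensor.shape, tensor.type)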