Thank you for bringing this issue to our attention. To help us understand the problem, could you let us know which device you are using to run the model? Our PyTorch model uses mixed precision, i.e. both float32 and float16; however, float16 is only used on the GPU, which may result in some precision loss.
Sorry, something in my previous response was unclear. What I meant is that we use fp16 with ONNX on GPU (where you may observe a difference) and fp32 with ONNX on CPU.
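To gauge how much precision fp16 alone costs, you can round-trip an embedding through float16 and measure the cosine similarity against the fp32 original. A minimal numpy sketch, with a random vector standing in for a real model embedding (hypothetical data, not the actual model):

```python
import numpy as np

# Random unit-scale vector standing in for a 512-d CLIP embedding (hypothetical data).
rng = np.random.default_rng(0)
v32 = rng.standard_normal(512).astype(np.float32)

# Round-trip through float16, mimicking the precision of the GPU fp16 path.
v16 = v32.astype(np.float16).astype(np.float32)

# Cosine similarity between the fp32 vector and its fp16-rounded version.
sim = float(np.dot(v32, v16) / (np.linalg.norm(v32) * np.linalg.norm(v16)))
print(sim)
```

On a vector like this, fp16 rounding alone keeps the cosine similarity extremely close to 1, so it is worth checking whether the observed gap has another source as well.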
Are you running in CPU mode or GPU mode? Could you please share the scripts/data with which you found the discrepancy?
We found a discrepancy between the ONNX model and the PyTorch (.pth) model, with a cosine distance of approximately 5%:
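The ~5% figure can be computed with a plain cosine-distance helper applied to the two models' outputs. A sketch with placeholder arrays; in the real comparison, the inputs would be the PyTorch embedding and the `visual.onnx` embedding for the same image:

```python
import numpy as np

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    """1 minus the cosine similarity of two embedding vectors."""
    a = a.astype(np.float64).ravel()
    b = b.astype(np.float64).ravel()
    return 1.0 - float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Placeholder embeddings (hypothetical values); the real ones come from the
# PyTorch model and the exported ONNX model run on identical input.
emb_pth = np.array([1.0, 0.0, 0.0])
emb_onnx = np.array([1.0, 0.1, 0.0])
print(cosine_distance(emb_pth, emb_onnx))
```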
```python
open_clip.create_model_and_transforms('ViT-B-32', pretrained='laion2b_e16', cache_dir=cache_dir)
```
References:
https://github.com/LAION-AI/CLIP_benchmark/blob/main/clip_benchmark/models/open_clip.py
https://clip-as-service.s3.us-east-2.amazonaws.com/models-436c69702d61732d53657276696365/onnx/ViT-B-32-laion2b_e16/visual.onnx