I hit a bug when exporting a TrOCR model to ONNX with dtype bf16 (supported since optimum 1.17.0):

INVALID_GRAPH : Load model from output/encoder_model.onnx failed:This is an invalid model. Type Error: Type 'tensor(bfloat16)' of input parameter (pixel_values) of operator (Conv) in node (/embeddings/patch_embeddings/projection/Conv) is invalid.

@michaelbenayoun I hope you can take a look at this.

Besides this, I would like to ask whether there are plans to support ONNX export in float8 in the future. I see that quanto supports quantizing models to float8.
Information

- The official example scripts
- My own modified scripts

Tasks

- An officially supported task in the examples folder (such as GLUE/SQuAD, ...)
System Info
Who can help?
Reproduction (minimal, reproducible, runnable)
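The report does not include the exact command used. A minimal command that should reproduce the failure, assuming the optimum-cli ONNX export interface with its `--dtype` option and using a public TrOCR checkpoint as an illustrative model name:

```shell
# Export a TrOCR model to ONNX in bfloat16
# (bf16 export is reported as supported since optimum 1.17.0).
optimum-cli export onnx \
  --model microsoft/trocr-base-handwritten \
  --dtype bf16 \
  output/

# Loading output/encoder_model.onnx with ONNX Runtime then fails with:
# INVALID_GRAPH : ... Type 'tensor(bfloat16)' of input parameter (pixel_values)
# of operator (Conv) ... is invalid.
```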
Expected behavior
A successful BF16 ONNX export, and information about any plans to support an FP8 dtype.