
[onert] Quantization type kernel for transformer #12942

Open · 9 of 50 tasks
hseok-oh opened this issue Apr 30, 2024 · 1 comment
Labels
area/onert ONE runtime

Comments

hseok-oh (Contributor) commented Apr 30, 2024

Below are the kernels required to support I/O quantization types (uint8/int16) for a quantized transformer model:

  • MUL
    • UINT8
    • INT16
  • ADD
    • UINT8
    • INT16
  • RSQRT
    • UINT8
    • INT16
  • DIV
    • UINT8
    • INT16
  • RESHAPE (same I/O quant param)
  • TRANSPOSE (same I/O quant param)
    • UINT8
    • INT16
  • STRIDED_SLICE (same I/O quant param)
    • UINT8
    • INT16
  • NEG
    • UINT8
    • INT16
  • CONCATENATION
    • UINT8
    • INT16
  • BATCH_MATMUL
    • UINT8
    • INT16
  • SOFTMAX
    • UINT8
    • INT16
  • LOGISTIC
    • UINT8
    • INT16
  • GATHER (indices: int32/int64)
    • UINT8
    • INT16
  • MEAN
    • UINT8
    • INT16
  • SQRT
    • UINT8
    • INT16
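Most of the elementwise kernels above (MUL, ADD, DIV, etc.) follow the standard affine quantization scheme, where a quantized value `q` with scale `s` and zero-point `z` represents the real value `s * (q - z)`. As a rough illustration (not onert's actual kernel — the helper name is made up, and real kernels use fixed-point multipliers rather than float rescaling), a quantized MUL can be sketched like this:

```python
def quantized_mul(q1, s1, z1, q2, s2, z2, out_s, out_z, qmin, qmax):
    """Reference-style quantized elementwise multiply.

    Each quantized input represents the real value s * (q - z).
    The real-valued product is requantized with the output scale and
    zero-point, then clamped to the output type's range
    (e.g. 0..255 for uint8, -32768..32767 for int16).
    """
    real = (s1 * (q1 - z1)) * (s2 * (q2 - z2))
    q_out = round(real / out_s) + out_z
    return max(qmin, min(qmax, q_out))

# uint8 example: 2.0 * 3.0 = 6.0, output scale 1.0, zero-point 0
print(quantized_mul(4, 0.5, 0, 6, 0.5, 0, 1.0, 0, 0, 255))  # 6
```

Production kernels typically fold `s1 * s2 / out_s` into a precomputed integer multiplier and shift so that no floating-point arithmetic is needed at inference time.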

Quantization type change

  • QUANTIZE
    • UINT8 -> INT16
    • INT16 -> UINT8
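The QUANTIZE type change amounts to a requantization: dequantize with the input quant params, then quantize with the output params. A minimal Python sketch, assuming float-based rescaling (onert's actual kernel may use fixed-point arithmetic, and int16 quantization is typically symmetric with zero-point 0):

```python
def requantize(q, in_scale, in_zp, out_scale, out_zp, qmin, qmax):
    """Convert a quantized value between quantization types.

    Dequantize to a real value with the input parameters, then
    quantize with the output parameters, clamping to the output
    type's range.
    """
    real = in_scale * (q - in_zp)
    q_out = round(real / out_scale) + out_zp
    return max(qmin, min(qmax, q_out))

# uint8 -> int16: real value 1.28 re-expressed at scale 0.01, zero-point 0
print(requantize(192, 0.02, 128, 0.01, 0, -32768, 32767))  # 128
```

The reverse direction (int16 -> uint8) uses the same routine with the uint8 range 0..255; values outside the representable range saturate at the clamp.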

I/O and weight quantization type for transformer model

@hseok-oh hseok-oh added the area/onert ONE runtime label Apr 30, 2024
hseok-oh (Contributor, Author) commented May 7, 2024

Updated: QUANTIZE operator
