Pytorch quantization bias is not quantised on aarch64 #1864

Closed
renato-arantes opened this issue Apr 11, 2024 · 4 comments

Comments

@renato-arantes
Contributor

renato-arantes commented Apr 11, 2024

Hi all,

I created a simple PyTorch image classification example that correctly classifies a sample image with 76% accuracy. I then applied static quantization to the model; it continued to classify the sample correctly with 73% accuracy, and the model size dropped, as expected, from 102.5 MB to 25.6 MB. However, when analyzing the DNNL_VERBOSE output from the model run, I could see that the bias is f32, so it is NOT being quantized or converted to s32. Is there a special reason not to quantize the bias?

By inspecting the code here I can see that the bias is f32 by default, but in the documentation here the quantized bias is s32.

Here is a DNNL_VERBOSE output sample:

onednn_verbose,primitive,create:cache_hit,cpu,convolution,gemm_s8u8s32:ref,forward_inference,src_u8::blocked:acdb::f0 wei_s8:a:blocked:cdba::f0 bia_f32:a:blocked:a::f0 dst_u8::blocked:acdb::f0,attr-scratchpad:user attr-scales:src0:0+dst:0+wei:0 attr-zero-points:dst:0 ,alg:convolution_direct,mb1_ic512oc2048_ih7oh7kh1sh1dh0ph0_iw7ow7kw1sw1dw0pw0,0.00512695
onednn_verbose,primitive,create:cache_hit,cpu,reorder,jit:uni,undef,src_s8::blocked:cdba::f8:zpm1 dst_s8::blocked:cdba::f0,attr-scratchpad:user ,,2048x512x1x1,0.00683594
onednn_verbose,primitive,exec,cpu,reorder,jit:uni,undef,src_s8::blocked:cdba::f8:zpm1 dst_s8::blocked:cdba::f0,attr-scratchpad:user ,,2048x512x1x1,0.0759277
onednn_verbose,primitive,exec,cpu,convolution,gemm_s8u8s32:ref,forward_inference,src_u8::blocked:acdb::f0 wei_s8:a:blocked:cdba::f0 bia_f32:a:blocked:a::f0 dst_u8::blocked:acdb::f0,attr-scratchpad:user attr-scales:src0:0+dst:0+wei:0 attr-zero-points:dst:0 ,alg:convolution_direct,mb1_ic512oc2048_ih7oh7kh1sh1dh0ph0_iw7ow7kw1sw1dw0pw0,12.405
onednn_verbose,primitive,create:cache_miss,cpu,matmul,gemm:jit,undef,src_u8:a:blocked:ab::f0 wei_s8::blocked:ab::f0 bia_f32::blocked:ab::f0_mask2 dst_u8::blocked:ab::f0,attr-scratchpad:user attr-scales:src0:0+dst:0+wei:0 attr-zero-points:dst:0 ,,1x2048:2048x1000,0.0119629
onednn_verbose,primitive,exec,cpu,matmul,gemm:jit,undef,src_u8:a:blocked:ab::f0 wei_s8::blocked:ab::f0 bia_f32::blocked:ab::f0_mask2 dst_u8::blocked:ab::f0,attr-scratchpad:user attr-scales:src0:0+dst:0+wei:0 attr-zero-points:dst:0 ,,1x2048:2048x1000,8.24585
@shu1chen
Contributor

shu1chen commented Apr 15, 2024

Hi @renato-arantes, do you mean quantized to s8 or s32? The accumulation data type used during op computation is governed by the accumulation_mode attribute of the primitive. By default, f32 is used for floating-point primitives (or f64 for f64 primitives) and s32 is used for integral primitives. You can change the default behavior by setting dnnl::accumulation_mode to s32. More details are in the Data Types section of the oneDNN developer guide.
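For reference, here is a minimal sketch of how the accumulation mode attribute can be requested through the oneDNN C++ API (assuming oneDNN v3.3+ where primitive_attr::set_accumulation_mode is available); the convolution shape is copied from the verbose log above and is only illustrative, not code from this thread:

```cpp
// Minimal sketch, assuming the oneDNN v3.x C++ API: request s32 accumulation
// for an int8 convolution via the accumulation_mode primitive attribute.
#include "oneapi/dnnl/dnnl.hpp"

int main() {
    using namespace dnnl;

    engine eng(engine::kind::cpu, 0);

    // Ask for s32 accumulators instead of the implementation default.
    primitive_attr attr;
    attr.set_accumulation_mode(accumulation_mode::s32);

    // int8 convolution shaped like the mb1_ic512oc2048_ih7oh7 layer from the
    // verbose log above (shapes chosen purely for illustration).
    memory::desc src_md({1, 512, 7, 7}, memory::data_type::u8, memory::format_tag::any);
    memory::desc wei_md({2048, 512, 1, 1}, memory::data_type::s8, memory::format_tag::any);
    memory::desc bia_md({2048}, memory::data_type::s32, memory::format_tag::a);
    memory::desc dst_md({1, 2048, 7, 7}, memory::data_type::u8, memory::format_tag::any);

    // The attribute is supplied when the primitive descriptor is created.
    auto conv_pd = convolution_forward::primitive_desc(eng,
            prop_kind::forward_inference, algorithm::convolution_direct,
            src_md, wei_md, bia_md, dst_md,
            /*strides=*/{1, 1}, /*padding_l=*/{0, 0}, /*padding_r=*/{0, 0}, attr);
    auto conv = convolution_forward(conv_pd);
    (void)conv;
    return 0;
}
```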

The example cnn_inference_int8.cpp you mentioned shows the quantized bias as s8 because the bias data type is set to s8 on line 114: auto conv_bias_md = memory::desc({conv_bias_tz}, dt::s8, tag::any);
After modifying the example to auto conv_bias_md = memory::desc({conv_bias_tz}, dt::s32, tag::any);, it then uses s32 for the bias, which can also be seen in the ONEDNN_VERBOSE log:

onednn_verbose,primitive,exec,cpu,reorder,jit:uni,undef,src_f32::blocked:abcd::f0 dst_u8::blocked:acdb::f0,attr-scales:dst:0:f32 ,,8x256x13x13,10.259
onednn_verbose,primitive,exec,cpu,reorder,jit:uni,undef,src_f32::blocked:abcd::f0 dst_s8::blocked:AcdB64a4b::f0,attr-scales:dst:0:f32 ,,384x256x3x3,1.24707
onednn_verbose,primitive,exec,cpu,convolution,brgconv:avx512_core_vnni,forward_training,src_u8:a:blocked:acdb::f0 wei_s8:a:blocked:AcdB64a4b::f0 bia_s32:a:blocked:a::f0 dst_u8:a:blocked:acdb::f0,attr-scales:src0:0:f32+dst:0:f32+wei:0:f32 attr-post-ops:eltwise_relu ,alg:convolution_direct,mb8_ic256oc384_ih13oh13kh3sh1dh0ph1_iw13ow13kw3sw1dw0pw1,0.651855
onednn_verbose,primitive,exec,cpu,reorder,jit:uni,undef,src_u8::blocked:acdb::f0 dst_f32::blocked:abcd::f0,attr-scales:src0:0:f32 ,,8x384x13x13,0.10498

@renato-arantes
Contributor Author

renato-arantes commented Apr 15, 2024

Hi @shu1chen,

Your answer is not related to my question, which is about PyTorch, not about an example; I did not mention any example. Maybe you are answering another question here by mistake?

Cheers,
Renato

@shu1chen
Contributor

shu1chen commented Apr 15, 2024

By inspecting the code here I can see that the bias is f32 by default, but in the documentation here the quantized bias is s32.

Hi @renato-arantes, the second "here" in your sentence above links to the same example in the source code that I referred to.
I meant that you may need to set the accumulation_mode attribute of the primitive in PyTorch to change the default behavior.

@mgouicem
Contributor

By inspecting the code here I can see that the bias is f32 by default, but in the documentation here the quantized bias is s32.

@renato-arantes From oneDNN's perspective, the data type of the bias is user-defined. Internally, it can be up-converted (first link).
Regarding the documentation you linked, it is just a tutorial showcasing how the quantization workflow can be customized by a oneDNN user (and that example uses the signed int8 data type for the bias).

Now, why would PyTorch not quantize the bias? That is a question for the PyTorch maintainers, but in general there is little reason to quantize the bias tensor, as it is small compared to the layer weights and activations. Adding @milpuz01, @snadampal, @malfet for more comments.
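To illustrate the point that the bias data type is user-defined, here is a small sketch (oneDNN v3.x C++ API assumed, not code taken from this issue) that builds the 1x2048 by 2048x1000 matmul from the verbose log with an f32 bias descriptor, matching the bia_f32 entry in the PyTorch-generated log; passing an s32 descriptor instead would be equally valid:

```cpp
// Illustrative sketch: the bias data type is simply whatever the user puts in
// the bias memory descriptor passed to the primitive descriptor.
#include "oneapi/dnnl/dnnl.hpp"

int main() {
    using namespace dnnl;

    engine eng(engine::kind::cpu, 0);

    memory::desc src_md({1, 2048}, memory::data_type::u8, memory::format_tag::ab);
    memory::desc wei_md({2048, 1000}, memory::data_type::s8, memory::format_tag::ab);
    memory::desc dst_md({1, 1000}, memory::data_type::u8, memory::format_tag::ab);

    // User-defined bias data type: f32 here, as in the log above; swap in
    // memory::data_type::s32 for an integer bias instead.
    memory::desc bia_md({1, 1000}, memory::data_type::f32, memory::format_tag::ab);

    auto mm_pd = matmul::primitive_desc(eng, src_md, wei_md, bia_md, dst_md);
    auto mm = matmul(mm_pd);
    (void)mm;
    return 0;
}
```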
