-- >>>>>>>>>>>>>
-- MNN BUILD INFO:
-- System: Linux
-- Processor: aarch64
-- Version: 2.8.4
-- Metal: OFF
-- OpenCL: OFF
-- OpenGL: OFF
-- Vulkan: OFF
-- ARM82: ON
-- oneDNN: OFF
-- TensorRT: OFF
-- CoreML: OFF
-- NNAPI: OFF
-- CUDA: OFF
-- OpenMP: ON
-- BF16: OFF
-- ThreadPool: OFF
-- Hidden: TRUE
-- Build Path: /home/zhenjing/MNN-2.8.4/build
-- CUDA PROFILE: OFF
-- WIN_USE_ASM:
-- Enabling AArch64 Assemblies
-- Enable INT8 SDOT
-- [*] Checking OpenMP
-- Found OpenMP_C: -fopenmp
-- Found OpenMP_CXX: -fopenmp
-- Found OpenMP: TRUE
-- Configuring done
-- Generating done
GitHub version: 2.8.4
./benchmark.out models/ 50 0 0 1 2 0 1 1
MNN benchmark
Forward type: CPU thread=1 precision=2 sparsity=0 sparseBlockOC=1 testQuantizedModel=1
--------> Benchmarking... loop = 50, warmup = 0
[-INFO-]: precision=2, use fp16 inference if your device supports and open MNN_ARM82=ON.
[-INFO-]: Auto set sparsity=0 when test quantized model in benchmark...
Auto set sparsity=0 when test quantized model in benchmark...
The device support i8sdot:1, support fp16:1, support i8mm: 1
[ - ] MobileNetV2_224.mnn max = 14.354 ms min = 13.765 ms avg = 13.997 ms
Illegal instruction (core dumped)
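For reference, the positional arguments appear to map onto the fields echoed in the benchmark header (loop, warmup, forward type, thread count, precision, sparsity, sparseBlockOC, testQuantizedModel). This mapping is inferred from the output above, not from benchmark.out's documentation, so treat it as an assumption:

```shell
# Inferred argument order (assumption, based on the echoed "Forward type" line):
#   ./benchmark.out <model_dir> <loop> <warmup> <forwardType> <thread> \
#                   <precision> <sparsity> <sparseBlockOC> <testQuantizedModel>
./benchmark.out models/ 50 0 0 1 2 0 1 1
# model_dir=models/  loop=50  warmup=0  forwardType=0 (CPU)  thread=1
# precision=2 (fp16 if supported)  sparsity=0  sparseBlockOC=1  testQuantizedModel=1
```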
./benchmark.out models/ 50 0 0 1 0 0 1 1
MNN benchmark
Forward type: CPU thread=1 precision=0 sparsity=0 sparseBlockOC=1 testQuantizedModel=1
--------> Benchmarking... loop = 50, warmup = 0
[-INFO-]: precision!=2, use fp32 inference.
[-INFO-]: Auto set sparsity=0 when test quantized model in benchmark...
Auto set sparsity=0 when test quantized model in benchmark...
The device support i8sdot:1, support fp16:1, support i8mm: 1
[ - ] MobileNetV2_224.mnn max = 29.129 ms min = 28.343 ms avg = 28.640 ms
Illegal instruction (core dumped)