Describe the bug
Trying to run a custom object classification model trained at https://maixhub.com/ModelTraining.
To Reproduce
When running the provided boot.py, the following code produces incorrect output:

img = sensor.snapshot()
fmap = kpu.forward(task, img)
plist = fmap[:]
print(plist)
Expected behavior
plist should be 2 floats.
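For context, the two floats are the 2-class softmax probabilities the model emits. A host-side sketch (with hypothetical values and placeholder label names) of how a correct plist would normally be consumed:

```python
# Hypothetical 2-class softmax output, as kpu.forward(task, img)[:]
# should return on a working build (values made up for illustration).
plist = [0.91, 0.09]
labels = ["class_0", "class_1"]  # placeholder class names

# Pick the class with the highest probability.
best = max(range(len(plist)), key=lambda i: plist[i])
print(labels[best], plist[best])
```

The garbled tuple reported below suggests the floats are corrupted before they reach Python, not a problem with this consuming logic.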
Actual behavior
plist is ('//./////, //./////')
Screenshots
MicroPython v0.5.0-98-g7ec09ea22-dirty on 2020-07-21; Sipeed_M1 with kendryte-k210
Type "help()" for more information.
init i2c2
[MAIXPY]: find ov7740
###free gc heap memory : 87 KB
###free sys heap memory: 280 KB
layer[0]: KL_K210_CONV, 1364 bytes
layer[1]: KL_K210_CONV, 1024 bytes
layer[2]: KL_K210_CONV, 2048 bytes
layer[3]: KL_K210_CONV, 1280 bytes
layer[4]: KL_K210_CONV, 5888 bytes
layer[5]: KL_K210_CONV, 2048 bytes
layer[6]: KL_K210_CONV, 10496 bytes
layer[7]: KL_K210_CONV, 2048 bytes
layer[8]: KL_K210_CONV, 20480 bytes
layer[9]: KL_K210_CONV, 3840 bytes
layer[10]: KL_K210_CONV, 38912 bytes
layer[11]: KL_K210_CONV, 3840 bytes
layer[12]: KL_K210_CONV, 77312 bytes
layer[13]: KL_K210_CONV, 6912 bytes
layer[14]: KL_K210_CONV, 151040 bytes
layer[15]: KL_K210_CONV, 6912 bytes
layer[16]: KL_K210_CONV, 151040 bytes
layer[17]: KL_K210_CONV, 6912 bytes
layer[18]: KL_K210_CONV, 151040 bytes
layer[19]: KL_K210_CONV, 6912 bytes
layer[20]: KL_K210_CONV, 151040 bytes
layer[21]: KL_K210_CONV, 6912 bytes
layer[22]: KL_K210_CONV, 151040 bytes
layer[23]: KL_K210_CONV, 6912 bytes
layer[24]: KL_K210_CONV, 301568 bytes
layer[25]: KL_K210_CONV, 13568 bytes
layer[26]: KL_K210_CONV, 596480 bytes
layer[27]: KL_DEQUANTIZE, 24 bytes
layer[28]: KL_GLOBAL_AVERAGE_POOL2D, 24 bytes
layer[29]: KL_QUANTIZE, 24 bytes
layer[30]: KL_K210_ADD_PADDING, 16 bytes
layer[31]: KL_K210_CONV, 1960 bytes
layer[32]: KL_K210_REMOVE_PADDING, 16 bytes
layer[33]: KL_DEQUANTIZE, 24 bytes
layer[34]: KL_SOFTMAX, 16 bytes
None
[{"index":0, "type":KL_K210_CONV, "wi":224, "hi":224, "wo":112, "ho":112, "chi":3, "cho":24, "dw":0, "kernel_type":1, "pool_type":5, "para_size":648},
{"index":1, "type":KL_K210_CONV, "wi":112, "hi":112, "wo":112, "ho":112, "chi":24, "cho":24, "dw":1, "kernel_type":1, "pool_type":0, "para_size":216},
{"index":2, "type":KL_K210_CONV, "wi":112, "hi":112, "wo":112, "ho":112, "chi":24, "cho":48, "dw":0, "kernel_type":0, "pool_type":0, "para_size":1152},
{"index":3, "type":KL_K210_CONV, "wi":112, "hi":112, "wo":112, "ho":112, "chi":48, "cho":48, "dw":1, "kernel_type":1, "pool_type":0, "para_size":432},
{"index":4, "type"(//./////, //./////) (//./////, //./////)
Please complete the following information