Describe the bug
The trained model sketch (.ino) for Arduino, which is exported after training an Embedded image model, cannot be uploaded to the Arduino Nano 33 BLE Sense with the current Arduino_TensorFlowLite library, as recommended in the GettingStarted guide.
To Reproduce
1. Select Embedded Image Model and train on some pictures.
2. Select Export Model. At this point, the preview works.
3. Select Tensorflow lite and check TensorFlow Lite for Microcontrollers.
4. Select Download my models and unzip it.
5. Fix version.h, error_reporter, and const int capDataLen = kCaptureWidth * kCaptureHeight * 2; (without these fixes, some build errors occur).
6. Select Upload in the Arduino IDE.
7. See the error below. The build appears to exceed the board's memory capacity.
WARNING: library Arduino_OV767X claims to run on mbed architecture(s) and may be incompatible with your current board which runs on mbed_nano architecture(s).
Library Arduino_TensorFlowLite has been declared precompiled:
Precompiled library in "c:\Users\<my user id>\OneDrive\ドキュメント\Arduino\libraries\Arduino_TensorFlowLite\src\cortex-m4\fpv4-sp-d16-softfp" not found
Precompiled library in "c:\Users\<my user id>\OneDrive\ドキュメント\Arduino\libraries\Arduino_TensorFlowLite\src\cortex-m4" not found
C:\Users\<my user id>\AppData\Local\Temp\arduino-sketch-D67097C3451979EF653B299D62F9FCA5/linker_script.ld:138 cannot move location counter backwards (from 20053278 to 2003fc00)
collect2.exe: error: ld returned 1 exit status
exit status 1
Compilation error: exit status 1
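As far as I can tell, this linker error means the sketch's static data no longer fits in SRAM: on the nRF52840 the 256 KB RAM region ends near 0x20040000, and the link tries to place data up to 0x20053278. A rough budget of the two largest static buffers in the generated code (my own arithmetic; kCaptureBytes is a name I made up for the captured_data buffer):

// Rough SRAM budget (nRF52840: 256 KB = 262,144 bytes of RAM).
constexpr int kTensorArenaSize = 136 * 1024;   // tensor arena: 139,264 bytes
constexpr int kCaptureBytes = 320 * 240 * 2;   // QVGA RGB565 frame buffer: 153,600 bytes
// Total: 292,864 bytes, already over 262,144 before counting stack, BSS,
// or library buffers.

So these two buffers alone are about 30 KB over the available RAM.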
Screenshots
Desktop (please complete the following information):
Arduino Library
Device: https://store-usa.arduino.cc/products/arduino-nano-33-ble-sense
Relevant link: https://github.com/googlecreativelab/teachablemachine-community/blob/master/snippets/markdown/tiny_image/GettingStarted.md
Additional context
I would appreciate any information on the version of the Arduino environment with which this export was known to work.
Relevant Code
tm_template_script.ino
/* Copyright 2019 The TensorFlow Authors. All Rights Reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==============================================================================*/
#include <TensorFlowLite.h>

#include "main_functions.h"
#include "image_provider.h"
#include "model_settings.h"
#include "person_detect_model_data.h"
#include "tensorflow/lite/micro/tflite_bridge/micro_error_reporter.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

// Globals, used for compatibility with Arduino-style sketches.
namespace {
tflite::ErrorReporter* error_reporter = nullptr;
const tflite::Model* model = nullptr;
tflite::MicroInterpreter* interpreter = nullptr;
TfLiteTensor* input = nullptr;
// In order to use optimized tensorflow lite kernels, a signed int8_t quantized
// model is preferred over the legacy unsigned model format. This means that
// throughout this project, input images must be converted from unsigned to
// signed format. The easiest and quickest way to convert from unsigned to
// signed 8-bit integers is to subtract 128 from the unsigned value to get a
// signed value.

// An area of memory to use for input, output, and intermediate arrays.
constexpr int kTensorArenaSize = 136 * 1024;
static uint8_t tensor_arena[kTensorArenaSize];
}  // namespace

// The name of this function is important for Arduino compatibility.
void setup() {
  // Set up logging. Google style is to avoid globals or statics because of
  // lifetime uncertainty, but since this has a trivial destructor it's okay.
  // NOLINTNEXTLINE(runtime-global-variables)
  static tflite::MicroErrorReporter micro_error_reporter;
  error_reporter = &micro_error_reporter;

  // Map the model into a usable data structure. This doesn't involve any
  // copying or parsing, it's a very lightweight operation.
  model = tflite::GetModel(g_person_detect_model_data);
  if (model->version() != TFLITE_SCHEMA_VERSION) {
    TF_LITE_REPORT_ERROR(error_reporter,
                         "Model provided is schema version %d not equal "
                         "to supported version %d.",
                         model->version(), TFLITE_SCHEMA_VERSION);
    return;
  }

  // Pull in only the operation implementations we need.
  // This relies on a complete list of all the ops needed by this graph.
  // An easier approach is to just use the AllOpsResolver, but this will
  // incur some penalty in code space for op implementations that are not
  // needed by this graph.
  //
  // tflite::AllOpsResolver resolver;
  // NOLINTNEXTLINE(runtime-global-variables)
  static tflite::MicroMutableOpResolver<6> micro_op_resolver;
  micro_op_resolver.AddAveragePool2D();
  micro_op_resolver.AddConv2D();
  micro_op_resolver.AddDepthwiseConv2D();
  micro_op_resolver.AddReshape();
  micro_op_resolver.AddSoftmax();
  micro_op_resolver.AddFullyConnected();
  // Build an interpreter to run the model with.
  // NOLINTNEXTLINE(runtime-global-variables)
  static tflite::MicroInterpreter static_interpreter(
      model, micro_op_resolver, tensor_arena, kTensorArenaSize);
  interpreter = &static_interpreter;

  // Allocate memory from the tensor_arena for the model's tensors.
  TfLiteStatus allocate_status = interpreter->AllocateTensors();
  if (allocate_status != kTfLiteOk) {
    TF_LITE_REPORT_ERROR(error_reporter, "AllocateTensors() failed");
    return;
  }

  // Get information about the memory area to use for the model's input.
  input = interpreter->input(0);
}
void loop() {
  // Get image from provider.
  if (kTfLiteOk != GetImage(error_reporter, kNumCols, kNumRows, kNumChannels,
                            input->data.int8)) {
    TF_LITE_REPORT_ERROR(error_reporter, "Image capture failed.");
  }

  // Run the model on this input and make sure it succeeds.
  if (kTfLiteOk != interpreter->Invoke()) {
    TF_LITE_REPORT_ERROR(error_reporter, "Invoke failed.");
  }

  TfLiteTensor* output = interpreter->output(0);

  // Process the inference results. The model is int8-quantized, so the scores
  // are read from data.int8 here (the exported template read data.uint8).
  int8_t person_score = output->data.int8[kPersonIndex];
  int8_t no_person_score = output->data.int8[kNotAPersonIndex];
  for (int i = 0; i < kCategoryCount; i++) {
    int8_t curr_category_score = output->data.int8[i];
    const char* currCategory = kCategoryLabels[i];
    TF_LITE_REPORT_ERROR(error_reporter, "%s : %d", currCategory,
                         curr_category_score);
  }
  // Serial.write(input->data.int8, bytesPerFrame);
}
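As a side note, if this sketch can be linked on a board with more RAM, TFLM's MicroInterpreter::arena_used_bytes() reports how much of the arena the model actually needs after AllocateTensors(), which might allow shrinking kTensorArenaSize. A minimal sketch of the idea (my addition, not part of the exported code), placed after AllocateTensors() in setup():

  // Report actual arena usage so kTensorArenaSize can be trimmed.
  TF_LITE_REPORT_ERROR(error_reporter, "Arena used: %d of %d bytes",
                       static_cast<int>(interpreter->arena_used_bytes()),
                       kTensorArenaSize);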
arduino_image_provider.cpp
/* Copyright 2019 The TensorFlow Authors. All Rights Reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==============================================================================*/
#include"image_provider.h"/* The sample requires the following third-party libraries to be installed and configured: Arducam ------- 1. Download https://github.com/ArduCAM/Arduino and copy its `ArduCAM` subdirectory into `Arduino/libraries`. Commit #e216049 has been tested with this code. 2. Edit `Arduino/libraries/ArduCAM/memorysaver.h` and ensure that "#define OV2640_MINI_2MP_PLUS" is not commented out. Ensure all other defines in the same section are commented out. JPEGDecoder ----------- 1. Install "JPEGDecoder" 1.8.0 from the Arduino library manager. 2. Edit "Arduino/Libraries/JPEGDecoder/src/User_Config.h" and comment out "#define LOAD_SD_LIBRARY" and "#define LOAD_SDFAT_LIBRARY".*/
#if defined(ARDUINO) && !defined(ARDUINO_ARDUINO_NANO33BLE)
#define ARDUINO_EXCLUDE_CODE
#endif  // defined(ARDUINO) && !defined(ARDUINO_ARDUINO_NANO33BLE)

#ifndef ARDUINO_EXCLUDE_CODE

#include <Arduino.h>
#include <Arduino_OV767X.h>

const int kCaptureWidth = 320;
const int kCaptureHeight = 240;
const int capDataLen = kCaptureWidth * kCaptureHeight * 2;
// QVGA: 320x240, 2 bytes per pixel (RGB565)
byte captured_data[kCaptureWidth * kCaptureHeight * 2];

// Crop image and convert it to grayscale
TfLiteStatus ProcessImage(
    tflite::ErrorReporter* error_reporter,
    int image_width, int image_height,
    int8_t* image_data) {
  // Serial.println("beginning image process");
  // const int skip_start_x = ceil((kCaptureWidth - image_width) / 2); // 40.5
  // const int skip_start_y = ceil((kCaptureHeight - image_height) / 2); // 24
  // const int skip_end_x_index = (kCaptureWidth - skip_start_x); // (176 - 40) = 135
  // const int skip_end_y_index = (kCaptureHeight - skip_start_y); // 144 - 24 = 120
  const int imgSize = 96;
  // Color of the current pixel
  uint16_t color;
  for (int y = 0; y < imgSize; y++) {
    for (int x = 0; x < imgSize; x++) {
      int currentCapX = floor(map(x, 0, imgSize, 40, kCaptureWidth - 80));
      int currentCapY = floor(map(y, 0, imgSize, 0, kCaptureHeight));
      // Read the color of the pixel as 16-bit integer
      int read_index = (currentCapY * kCaptureWidth + currentCapX) * 2;  // (y * kCaptureWidth + x) * 2;
      int i2 = (currentCapY * kCaptureWidth + currentCapX + 1) * 2;
      int i3 = ((currentCapY + 1) * kCaptureWidth + currentCapX) * 2;
      int i4 = ((currentCapY + 1) * kCaptureWidth + currentCapX + 1) * 2;
      uint8_t high_byte = captured_data[read_index];
      uint8_t low_byte = captured_data[read_index + 1];
      color = ((uint16_t)high_byte << 8) | low_byte;
      // Extract the color values (5 red bits, 6 green, 5 blue)
      uint8_t r, g, b;
      r = ((color & 0xF800) >> 11) * 8;
      g = ((color & 0x07E0) >> 5) * 4;
      b = ((color & 0x001F) >> 0) * 8;
      // Convert to grayscale by calculating luminance
      // See https://en.wikipedia.org/wiki/Grayscale for magic numbers
      float gray_value = (0.2126 * r) + (0.7152 * g) + (0.0722 * b);
      if (i2 > 0 && i2 < capDataLen - 1) {
        high_byte = captured_data[i2];
        low_byte = captured_data[i2 + 1];
        color = ((uint16_t)high_byte << 8) | low_byte;
        r = ((color & 0xF800) >> 11) * 8;
        g = ((color & 0x07E0) >> 5) * 4;
        b = ((color & 0x001F) >> 0) * 8;
        gray_value += (0.2126 * r) + (0.7152 * g) + (0.0722 * b);
      }
      if (i3 > 0 && i3 < capDataLen - 1) {
        high_byte = captured_data[i3];
        low_byte = captured_data[i3 + 1];
        color = ((uint16_t)high_byte << 8) | low_byte;
        r = ((color & 0xF800) >> 11) * 8;
        g = ((color & 0x07E0) >> 5) * 4;
        b = ((color & 0x001F) >> 0) * 8;
        gray_value += (0.2126 * r) + (0.7152 * g) + (0.0722 * b);
      }
      if (i4 > 0 && i4 < capDataLen - 1) {
        high_byte = captured_data[i4];
        low_byte = captured_data[i4 + 1];
        color = ((uint16_t)high_byte << 8) | low_byte;
        r = ((color & 0xF800) >> 11) * 8;
        g = ((color & 0x07E0) >> 5) * 4;
        b = ((color & 0x001F) >> 0) * 8;
        gray_value += (0.2126 * r) + (0.7152 * g) + (0.0722 * b);
      }
      // Average the 2x2 block of pixels sampled above.
      gray_value = gray_value / 4;
      // Convert to signed 8-bit integer by subtracting 128.
      gray_value -= 128;
      // The index of this pixel in our flat output buffer
      int index = y * image_width + x;
      image_data[index] = static_cast<int8_t>(gray_value);
      // delayMicroseconds(10);
    }
  }
  // flushCap();
  // Serial.println("processed image");
  return kTfLiteOk;
}
// Get an image from the camera module
TfLiteStatus GetImage(tflite::ErrorReporter* error_reporter, int image_width,
                      int image_height, int channels, int8_t* image_data) {
  static bool g_is_camera_initialized = false;
  if (!g_is_camera_initialized) {
    if (!Camera.begin(QVGA, RGB565, 1)) {
      return kTfLiteError;
    }
    g_is_camera_initialized = true;
  }
  Camera.readFrame(captured_data);
  TfLiteStatus decode_status = ProcessImage(
      error_reporter, image_width, image_height, image_data);
  if (decode_status != kTfLiteOk) {
    TF_LITE_REPORT_ERROR(error_reporter, "DecodeAndProcessImage failed");
    return decode_status;
  }
  return kTfLiteOk;
}

#endif  // ARDUINO_EXCLUDE_CODE
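A possible workaround I have not verified yet: Arduino_OV767X also supports QCIF capture, and the commented-out crop constants in ProcessImage (176, 144) suggest an earlier version of this template captured at that size. Capturing at QCIF would shrink captured_data from 153,600 to 50,688 bytes, which by the budget above would bring the two large buffers back under 256 KB. A sketch of the change (my guess, untested):

// Hypothetical change: capture at QCIF (176x144) instead of QVGA.
// 176 * 144 * 2 = 50,688 bytes instead of 153,600.
const int kCaptureWidth = 176;
const int kCaptureHeight = 144;
// ...and in GetImage():
if (!Camera.begin(QCIF, RGB565, 1)) {
  return kTfLiteError;
}
// The hard-coded crop window in ProcessImage, e.g.
// map(x, 0, imgSize, 40, kCaptureWidth - 80), would need to be re-derived
// for the smaller frame.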
image_provider.h
/* Copyright 2019 The TensorFlow Authors. All Rights Reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==============================================================================*/
#ifndef TENSORFLOW_LITE_MICRO_EXAMPLES_PERSON_DETECTION_IMAGE_PROVIDER_H_
#define TENSORFLOW_LITE_MICRO_EXAMPLES_PERSON_DETECTION_IMAGE_PROVIDER_H_

#include "tensorflow/lite/c/common.h"
#include "tensorflow/lite/micro/tflite_bridge/micro_error_reporter.h"

// This is an abstraction around an image source like a camera, and is
// expected to return 8-bit sample data. The assumption is that this will be
// called in a low duty-cycle fashion in a low-power application. In these
// cases, the imaging sensor need not be run in a streaming mode, but rather
// can be idled in a relatively low-power mode between calls to GetImage().
// The assumption is that the overhead and time of bringing the low-power
// sensor out of this standby mode is commensurate with the expected duty
// cycle of the application. The underlying sensor may actually be put into a
// streaming configuration, but the image buffer provided to GetImage should
// not be overwritten by the driver code until the next call to GetImage();
//
// The reference implementation can have no platform-specific dependencies,
// so it just returns a static image. For real applications, you should
// ensure there's a specialized implementation that accesses hardware APIs.
TfLiteStatus GetImage(tflite::ErrorReporter* error_reporter, int image_width,
                      int image_height, int channels, int8_t* image_data);

#endif  // TENSORFLOW_LITE_MICRO_EXAMPLES_PERSON_DETECTION_IMAGE_PROVIDER_H_