
Comments (15)

eloquentarduino avatar eloquentarduino commented on September 17, 2024

Please send the structure of your CNN (result of .summary() or instantiation code).
Also, add a print after tf.begin() to check if the board can get past this function.
Also, try to increase the TENSOR_ARENA_SIZE as much as possible, just to be sure you have enough memory for the model.

from eloquenttinyml.

eppane avatar eppane commented on September 17, 2024

Many thanks for the quick reply!

Model structure is as follows:

_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 input_2 (InputLayer)        [(None, 1522, 1, 1)]      0         
                                                                 
 conv2d_7 (Conv2D)           (None, 1522, 1, 4)        20        
                                                                 
 conv2d_8 (Conv2D)           (None, 1522, 1, 8)        136       
                                                                 
 max_pooling2d_2 (MaxPooling  (None, 380, 1, 8)        0         
 2D)                                                             
                                                                 
 conv2d_9 (Conv2D)           (None, 380, 1, 8)         264       
                                                                 
 conv2d_10 (Conv2D)          (None, 380, 1, 12)        396       
                                                                 
 max_pooling2d_3 (MaxPooling  (None, 190, 1, 12)       0         
 2D)                                                             
                                                                 
 conv2d_11 (Conv2D)          (None, 190, 1, 12)        588       
                                                                 
 conv2d_12 (Conv2D)          (None, 190, 1, 18)        882       
                                                                 
 conv2d_13 (Conv2D)          (None, 190, 1, 24)        1752      
                                                                 
 global_max_pooling2d_1 (Glo  (None, 24)               0         
 balMaxPooling2D)                                                
                                                                 
 dense_1 (Dense)             (None, 10)                250       
                                                                 
=================================================================
Total params: 4,288
Trainable params: 4,288
Non-trainable params: 0
_________________________________________________________________

I increased the TENSOR_ARENA_SIZE to 150*1024 and I got the following error:

17:36:35.029 -> C:\Users\xxx\Documents\Arduino\libraries\EloquentTinyML\src\eloquent_tinyml\tensorflow\arm\tensorflow\lite\micro\kernels\quant:59 input->type == kTfLiteFloat32 || input->type == kTfLiteInt16 | was not true.
17:36:35.029 -> Node QUANTIZE (number 0f) failed to prepare with status 1
17:36:35.029 -> Past begin!ERROR: Cannot allocate tensors

It seems that I need to address the quantization in code as the library expects float inputs but the model requires integers. I suppose this case needs to be separately implemented in the library.

Do you have an idea how to proceed, or should I try to mimic TFLite's hello_world example, where the quantization is dealt with? I can get the quantization parameters with tflite.Interpreter; here are the input and output details:

[{'name': 'serving_default_input_2:0', 'index': 0, 'shape': array([   1, 1522,    1,    1], dtype=int32), 'shape_signature': array([  -1, 1522,    1,    1], dtype=int32), 'dtype': <class 'numpy.uint8'>, 'quantization': (1.0, 0), 'quantization_parameters': {'scales': array([1.], dtype=float32), 'zero_points': array([0], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}]

[{'name': 'StatefulPartitionedCall:0', 'index': 31, 'shape': array([ 1, 10], dtype=int32), 'shape_signature': array([-1, 10], dtype=int32), 'dtype': <class 'numpy.uint8'>, 'quantization': (0.00390625, 0), 'quantization_parameters': {'scales': array([0.00390625], dtype=float32), 'zero_points': array([0], dtype=int32), 'quantized_dimension': 0}, 'sparsity_parameters': {}}]

The quantized model size is 12544 bytes.
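For context, the .tflite bytes get embedded in the sketch as a C array header. A plain-Python equivalent of `xxd -i` can generate it; the function and array names below are placeholders, not part of the library:

```python
def tflite_to_header(data: bytes, name: str) -> str:
    """Render raw .tflite bytes as a C array (equivalent to `xxd -i`)."""
    body = ", ".join(f"0x{b:02x}" for b in data)
    return (
        f"const unsigned char {name}[] = {{ {body} }};\n"
        f"const unsigned int {name}_len = {len(data)};\n"
    )

# Tiny stand-in payload; the real model would be the 12544-byte file
header = tflite_to_header(b"TFL3", "model_tflite")
print(header)
```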


eppane avatar eppane commented on September 17, 2024

I decided to try a model with simple post-training quantization, where only model weights are quantized. In this case, the code runs without problems.


eloquentarduino avatar eloquentarduino commented on September 17, 2024


eppane avatar eppane commented on September 17, 2024

Post-training quantization, i.e., just before converting the model to .tflite. See here: https://www.tensorflow.org/lite/performance/post_training_quantization#integer_only
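For reference, dynamic-range (weights-only) post-training quantization needs nothing beyond the default optimization flag. A minimal sketch, assuming a Keras model object named `model`:

```python
import tensorflow as tf

# Dynamic-range quantization: only the weights are stored as int8,
# activations stay float32 at runtime.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```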

My board has 256 kB of RAM available.

Here is code that currently works with a weights-only post-training quantized model (https://www.tensorflow.org/lite/performance/post_training_quantization#dynamic_range_quantization):

#include <EloquentTinyML.h>
#include <eloquent_tinyml/tensorflow.h>

#include "CNN_11_gen_clf_conv2d_v2.h"

#define N_INPUTS 1522
#define N_OUTPUTS 10
// in future projects you may need to tweak this value: it's a trial and error process
#define TENSOR_ARENA_SIZE 120*1024

Eloquent::TinyML::TensorFlow::TensorFlow<N_INPUTS, N_OUTPUTS, TENSOR_ARENA_SIZE> tf;

float X_test[N_INPUTS] = {
  0.0,3.0,0.0,1.0,0.0,6.0,212.0,59.0,4.0,49.0,57.0,84.0,88.0,43.0,8.0,0.0,69.0,0.0,0.0,40.0,0.0,1.0,0.0,0.0,234.0,6.0,44.0,62.0,192.0,168.0,1.0,39.0,176.0,28.0,50.0,165.0,18.0,222.0,18.0,222.0,0.0,0.0,0.0,0.0,0.0,0.0,6.0,105.0,80.0,2.0,6.0,105.0,216.0,195.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.
0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.
0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.
0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0
};

int counter = 0;
int inferences = 0;

void setup() {
    Serial.begin(115200);
    
    delay(4000);
    tf.begin(CNN_11_gen_clf_conv2d_v2_tflite);
    Serial.print("Past begin!");
    Serial.println();
  
    // check if model loaded fine
    if (!tf.isOk()) {
        Serial.print("ERROR: ");
        Serial.println(tf.getErrorMessage());
        
        while (true) delay(1000);
    }
}

void loop() {
    int predicted = tf.probaToClass(X_test);
    Serial.print("\t predicted: ");
    Serial.println(predicted);
    inferences += 1;
    delay(100);

    if (inferences > 8) {
      Serial.print("8 inferences done, exiting...");
      exit(0);
    }
}

The TENSOR_ARENA_SIZE must be over 73056 bytes (reducing the size below that triggers the allocation error).


eppane avatar eppane commented on September 17, 2024

I think I made the mistake of using the tf.probaToClass() function instead of tf.predictClass().

I have two problems now. When I use the simple quantization, I get this error from tf.predictClass():

10:43:24.035 -> C:\Users\xxx\Documents\Arduino\libraries\EloquentTinyML\src\eloquent_tinyml\tensorflow\arm\tensorflow\lite\micro\kernels\cmsis-nn\conv.cpp Hybrid models are not supported on TFLite Micro.
10:43:24.035 -> Node CONV_2D (number 8) failed to invoke with status 1

And, when I use the full-integer quantized model, I get this error from tf.begin() and tf.isOk():

10:48:54.134 -> C:\Users\xxx\Documents\Arduino\libraries\EloquentTinyML\src\eloquent_tinyml\tensorflow\arm\tensorflow\lite\micro\kernels\quant:59 input->type == kTfLiteFloat32 || input->type == kTfLiteInt16 | was not true.
10:48:54.134 -> Node QUANTIZE (number 0f) failed to prepare with status 1
10:48:54.134 -> Past begin!
10:48:54.134 -> ERROR: Cannot allocate tensors

Since weights-only quantization leaves some operations in floating point, the result is a mixed (hybrid) model, which does not seem to be supported by TFLite Micro.

On the other hand, there does not seem to be support for full-integer quantized models.
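For reference, the full-integer conversion I attempted looks roughly like this (a sketch: `model` and the calibration samples come from my training script, and the names here are placeholders):

```python
import tensorflow as tf

def representative_dataset():
    # Placeholder: yield a few real input samples, shaped like the model input
    for sample in calibration_samples:
        yield [sample.reshape(1, 1522, 1, 1).astype("float32")]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8
converter.inference_output_type = tf.uint8
tflite_model = converter.convert()
```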

I have also tried to convert the model with tinymlgen, but it leads to the hybrid model error.

Interestingly, the wine_model example runs without problems. I am starting to think that there is something wrong with my model structure itself.

Confusing enough? 😄


eloquentarduino avatar eloquentarduino commented on September 17, 2024

I think the problem lies in TfLite itself.
Consider that EloquentTinyML is just a wrapper around TensorFlow Lite, so anything not supported by Tf is not supported by EloquentTinyML.
Your best bet is to hope that only a single layer type in your network does not support quantization.
For example, try removing GlobalMaxPooling2D or MaxPooling2D, leaving only Conv2D and Dense.


eppane avatar eppane commented on September 17, 2024

Thanks for your advice!

I was now able to run my model. The problem was with my full-integer quantization; I had the following parameters:

converter.inference_input_type = tensorflow.uint8
converter.inference_output_type = tensorflow.uint8

It was suggested in tensorflow/tflite-micro#280 that uint8 quantization is deprecated, so I changed to:

converter.inference_input_type = tensorflow.int8
converter.inference_output_type = tensorflow.int8

and I don't receive any errors. However, I get zero probabilities for my classes now, as follows:

13:45:01.944 -> PROBABILITIES:
13:45:02.080 -> LABEL 0 = -0.00
13:45:02.125 -> LABEL 1 = -0.00
13:45:02.125 -> LABEL 2 = -0.00
13:45:02.125 -> LABEL 3 = -0.00
13:45:02.125 -> LABEL 4 = 0.00
13:45:02.125 -> LABEL 5 = 0.00
13:45:02.125 -> LABEL 6 = -0.00
13:45:02.125 -> LABEL 7 = -0.00
13:45:02.125 -> LABEL 8 = -0.00
13:45:02.125 -> LABEL 9 = -0.00
13:45:02.125 -> 
13:45:02.125 -> PREDICTED CLASS: 4

My current code (note that I have added the option to enter numbers from the Serial monitor):

#include <EloquentTinyML.h>
#include <eloquent_tinyml/tensorflow.h>

#include "CNN_11_gen_clf_conv2d_v5.h"

#define N_INPUTS 1522
#define N_OUTPUTS 10
// in future projects you may need to tweak this value: it's a trial and error process
#define TENSOR_ARENA_SIZE 120*1024

Eloquent::TinyML::TensorFlow::TensorFlow<N_INPUTS, N_OUTPUTS, TENSOR_ARENA_SIZE> tf;

float X_test[N_INPUTS] = {
  0.0,3.0,0.0,1.0,0.0,6.0,212.0,59.0,4.0,49.0,57.0,84.0,88.0,43.0,8.0,0.0,69.0,0.0,0.0,40.0,0.0,1.0,0.0,0.0,234.0,6.0,44.0,62.0,192.0,168.0,1.0,39.0,176.0,28.0,50.0,165.0,18.0,222.0,18.0,222.0,0.0,0.0,0.0,0.0,0.0,0.0,6.0,105.0,80.0,2.0,6.0,105.0,216.0,195.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.
0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.
0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.
0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0
};

float output[N_OUTPUTS] = { 0 };

boolean newData = false;
int counter = 0;
int inferences = 0;

float receivedFloat;

void setup() {
    Serial.begin(115200);
    
    delay(4000);
    tf.begin(CNN_11_gen_clf_conv2d_v5_tflite);
    Serial.print("PAST BEGIN!\n");
  
    // check if model loaded fine
    if (!tf.isOk()) {
        Serial.print("ERROR: ");
        Serial.println(tf.getErrorMessage());
        
        while (true) delay(1000);
    }
}

void loop() {
    recvOneFloat();
    showNewData();
    doInference();
}

void recvOneFloat() {
  if (Serial.available() > 0) {
    receivedFloat = Serial.parseFloat();
    X_test[counter] = receivedFloat;
    
    newData = true;
    counter += 1;
  }
}

void showNewData() {
  if (newData == true) {
    Serial.print("RECEIVED: ");
    Serial.println(X_test[counter-1]);
    newData = false;
  }
}

void showResults() {
  for (int i = 0; i < N_OUTPUTS; i++) {
      Serial.print("LABEL ");
      Serial.print(i);
      Serial.print(" = ");
      Serial.println(output[i]);
    }
  }

void doInference() {
  if (inferences == 0) {
      Serial.println("========== DO INFERENCE ==========");
      Serial.print("\nPROBABILITIES:\n");
      tf.predict(X_test, output);
      showResults();
      Serial.print("\nPREDICTED CLASS: ");
      Serial.print(tf.predictClass(X_test));
      
      inferences += 1;
      delay(100);  
    }
    
  if (counter == N_INPUTS) {
    Serial.println("========== DO INFERENCE ==========");
    Serial.print("\nPROBABILITIES:\n");
    tf.predict(X_test, output);
    showResults();
    Serial.print("\nPREDICTED CLASS: ");
    Serial.print(tf.predictClass(X_test));
    
    counter = 0;
    inferences += 1;
    delay(100);

    Serial.print("\nINFERENCES DONE: ");
    Serial.println(inferences);
    Serial.println();
    //exit(0);
  }
}

In Python, I get sensible results when I quantize the inputs properly using the quantization parameters (scales, zero_points). Is the scaling accounted for in AbstractTensorFlow.h, though?
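To illustrate what I mean by scaling the inputs: with a full-integer model, each float input must be mapped into the int8 domain before inference. A plain-Python sketch (the scale and zero point here are hypothetical; the real values come from `interpreter.get_input_details()[0]["quantization"]`):

```python
# Hypothetical int8 input quantization parameters
scale, zero_point = 1.0, -128

def scale_input(x):
    """float -> int8, clamped to the int8 range, as TFLite expects."""
    return max(-128, min(127, round(x / scale) + zero_point))

# Raw byte values 0..255 map onto the full int8 range
print(scale_input(0.0))    # -128
print(scale_input(255.0))  # 127
```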


eloquentarduino avatar eloquentarduino commented on September 17, 2024

To turn input scaling on you must call tf.turnInputScalingOn() after begin().
If you can share your Python script, I can try to replicate.


eloquentarduino avatar eloquentarduino commented on September 17, 2024

I think I had a bug in feature scaling (the input and output formulas were reversed). Be sure to update the library to the latest version (2.4.2, pushed online just now).


eppane avatar eppane commented on September 17, 2024

Thanks for the quick reply and for the tips!

I updated the library, commented out the #define __SXTB16_RORn(ARG1, ARG2) __SXTB16(__ROR(ARG1, ARG2)) from arm_math.h, and turned the input scaling on, but the outputs still remain zero.

I tried also turning the output scaling on, but no luck there either.

Part of the code for evaluating the TFLite model is as follows:

interpreter = tflite.Interpreter(model_path=f"{models_path_leq}/CNN_11_gen_clf_conv2d_v5.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()[0]
output_details = interpreter.get_output_details()[0]

input_index = input_details["index"]
output_index = output_details["index"]

labels, times = [], [] 

# Test the TFLite-converted model

for i, ID in enumerate(val_paths_leq):
    X = np.load(ID)
    X = pad_sequences([X], maxlen=1522, dtype='float32', padding='post', value=0.0)
    
    start = time.perf_counter()
    
    # Input scaling
    # See here https://www.tensorflow.org/lite/performance/post_training_integer_quant#run_the_tensorflow_lite_models
    if input_details['dtype'] == np.int8:
        input_scale, input_zero_point = input_details["quantization"]
        X = X / input_scale + input_zero_point
    
    X = X.reshape((1, 1522, 1, 1)).astype(input_details["dtype"])
    
    interpreter.set_tensor(input_index, X)
    interpreter.invoke()
    pred = interpreter.get_tensor(output_index)
    
    # Output scaling
    # See here https://colab.research.google.com/gist/ymodak/0dfeb28255e189c5c48d9093f296e9a8/tensorflow-lite-debugger-colab.ipynb#scrollTo=D7XL1JBR6iyP
    output_scale, output_zero_point = output_details['quantization']
    if output_details['dtype'] == np.int8:
        pred = pred.astype(np.float32)
        pred = (pred - output_zero_point) * output_scale
    
    end = time.perf_counter()
    times.append(end-start)
    
    label = np.argmax(pred, axis=1)
    labels.append(label)

pad_sequences() comes from tensorflow.keras.preprocessing.sequence.


eppane avatar eppane commented on September 17, 2024

Here are the scalings from TFLite Micro's hello_world example:

// Quantize the input from floating-point to integer
int8_t x_quantized = x / input->params.scale + input->params.zero_point;

and

// Dequantize the output from integer to floating-point
float y = (y_quantized - output->params.zero_point) * output->params.scale;
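Those two formulas can be sanity-checked as a round trip in plain Python (the scale value is borrowed from my output details above; the zero points are assumed to be 0):

```python
scale, zero_point = 0.00390625, 0  # 1/256, from the output details

def quantize(x):
    """Quantize the input from floating point to integer."""
    return round(x / scale) + zero_point

def dequantize(q):
    """Dequantize the output from integer back to floating point."""
    return (q - zero_point) * scale

q = quantize(0.25)
print(q, dequantize(q))  # 64 0.25
```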

So I tried to modify AbstractTensorFlow.h as follows:

template<typename T>
T scaleInput(T x) {
    // return (x - this->input->params.zero_point) * this->input->params.scale;
    return (x / this->input->params.scale) + this->input->params.zero_point;
}

template<typename T>
T scaleOutput(T y) {
    // return (y / this->output->params.zero_point) + this->output->params.scale;
    return (y - this->output->params.zero_point) * this->output->params.scale;
}

This resulted in:

18:18:11.217 -> PAST BEGIN!
18:18:11.217 -> ========== DO INFERENCE ==========
18:18:11.217 -> 
18:18:11.217 -> PROBABILITIES:
18:18:11.397 -> LABEL 0 = 0.50
18:18:11.397 -> LABEL 1 = 0.50
18:18:11.397 -> LABEL 2 = 0.50
18:18:11.397 -> LABEL 3 = 0.50
18:18:11.397 -> LABEL 4 = 0.50
18:18:11.397 -> LABEL 5 = ovf
18:18:11.397 -> LABEL 6 = 0.50
18:18:11.397 -> LABEL 7 = 0.50
18:18:11.397 -> LABEL 8 = 0.50
18:18:11.397 -> LABEL 9 = 0.50
18:18:11.397 -> 
18:18:11.397 -> PREDICTED CLASS: 4

I have tried with some other inputs as well, but got 0.50 probabilities for all.


eppane avatar eppane commented on September 17, 2024

I think that TFLite Micro does not support GlobalMaxPooling, at least there is no equivalent in tensorflow\lite\micro\kernels\micro_ops.h or in https://www.tensorflow.org/lite/guide/op_select_allowlist.

The model does run on the device, though.


eppane avatar eppane commented on September 17, 2024

It seems that tensorflow/lite/micro/kernels/reduce.cc implements Global versions of poolings (tensorflow/tensorflow#43332).

So I decided to try directly with TFLite Micro, modifying the hello_world example, as follows:

#include <TensorFlowLite.h>

#include "tensorflow/lite/micro/all_ops_resolver.h"
#include "tensorflow/lite/micro/micro_error_reporter.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/schema/schema_generated.h"
#include "tensorflow/lite/version.h"

#include "CNN_11_gen_clf_conv2d_v5.h"

// Globals, used for compatibility with Arduino-style sketches.
namespace {
tflite::ErrorReporter* error_reporter = nullptr;
const tflite::Model* model = nullptr;
tflite::MicroInterpreter* interpreter = nullptr;
TfLiteTensor* input = nullptr;
TfLiteTensor* output = nullptr;
int inference_count = 0;

constexpr int kTensorArenaSize = 120*1024;
uint8_t tensor_arena[kTensorArenaSize];
}  // namespace

float X_test[1522] = {
  0.0,3.0,0.0,1.0,0.0,6.0,0.0,12.0,41.0,210.0,176.0,2.0,0.0,0.0,8.0,0.0,69.0,0.0,3.0,230.0,46.0,11.0,64.0,0.0,64.0,6.0,132.0,94.0,192.0,168.0,1.0,152.0,192.0,168.0,1.0,192.0,7.0,88.0,158.0,123.0,183.0,233.0,168.0,76.0,22.0,51.0,241.0,86.0,128.0,24.0,0.0,243.0,161.0,122.0,0.0,0.0,1.0,1.0,8.0,10.0,156.0,88.0,6.0,198.0,0.0,85.0,131.0,97.0,129.0,126.0,3.0,174.0,91.0,123.0,34.0,116.0,111.0,112.0,105.0,99.0,34.0,58.0,34.0,115.0,116.0,97.0,116.0,117.0,115.0,47.0,53.0,51.0,54.0,99.0,55.0,102.0,49.0,56.0,46.0,48.0,57.0,56.0,52.0,50.0,56.0,34.0,44.0,34.0,100.0,97.0,116.0,97.0,34.0,58.0,123.0,34.0,116.0,101.0,120.0,116.0,34.0,58.0,34.0,104.0,116.0,116.0,112.0,105.0,110.0,46.0,115.0,116.0,97.0,116.0,117.0,115.0,46.0,114.0,101.0,113.0,117.0,101.0,115.0,116.0,105.0,110.0,103.0,34.0,44.0,34.0,102.0,105.0,108.0,108.0,34.0,58.0,34.0,98.0,108.0,117.0,101.0,34.0,44.0,34.0,115.0,104.0,97.0,112.0,101.0,34.0,58.0,34.0,100.0,111.0,116.0,34.0,125.0,125.0,44.0,123.0,34.0,116.0,111.0,112.0,105.0,99.0,34.0,58.0,34.0,115.0,116.0,97.0,116.0,117.0,115.0,47.0,53.0,51.0,54.0,99.0,55.0,102.0,49.0,56.0,46.0,48.0,57.0,56.0,52.0,50.0,56.0,34.0,44.0,34.0,100.0,97.0,116.0,97.0,34.0,58.0,123.0,125.0,125.0,44.0,123.0,34.0,116.0,111.0,112.0,105.0,99.0,34.0,58.0,34.0,115.0,116.0,97.0,116.0,117.0,115.0,47.0,101.0,100.0,51.0,100.0,48.0,101.0,57.0,51.0,46.0,97.0,54.0,57.0,56.0,54.0,34.0,44.0,34.0,100.0,97.0,116.0,97.0,34.0,58.0,123.0,34.0,116.0,101.0,120.0,116.0,34.0,58.0,34.0,32.0,34.0,44.0,34.0,102.0,105.0,108.0,108.0,34.0,58.0,34.0,98.0,108.0,117.0,101.0,34.0,44.0,34.0,115.0,104.0,97.0,112.0,101.0,34.0,58.0,34.0,100.0,111.0,116.0,34.0,125.0,125.0,44.0,123.0,34.0,116.0,111.0,112.0,105.0,99.0,34.0,58.0,34.0,100.0,101.0,98.0,117.0,103.0,34.0,44.0,34.0,100.0,97.0,116.0,97.0,34.0,58.0,123.0,34.0,105.0,100.0,34.0,58.0,34.0,99.0,55.0,49.0,56.0,53.0,53.0,51.0,48.0,46.0,53.0,49.0,99.0,99.0,102.0,56.0,34.0,44.0,34.0,122.0,34.0,58.0,34.0,50.0,55.0,50.0,51.0,99.0,54.0,56.0,100.0,46.0,97.0,55.0,100.0,99.0,52.0,97.0,34.0
,44.0,34.0,110.0,97.0,109.0,101.0,34.0,58.0,34.0,34.0,44.0,34.0,116.0,111.0,112.0,105.0,99.0,34.0,58.0,34.0,34.0,44.0,34.0,112.0,114.0,111.0,112.0,101.0,114.0,116.0,121.0,34.0,58.0,34.0,112.0,97.0,121.0,108.0,111.0,97.0,100.0,34.0,44.0,34.0,109.0,115.0,103.0,34.0,58.0,34.0,60.0,104.0,116.0,109.0,108.0,62.0,92.0,110.0,32.0,32.0,32.0,32.0,60.0,104.0,101.0,97.0,100.0,62.0,60.0,116.0,105.0,116.0,108.0,101.0,62.0,92.0,110.0,32.0,32.0,32.0,32.0,32.0,32.0,32.0,32.0,84.0,101.0,115.0,116.0,32.0,112.0,97.0,103.0,101.0,32.0,53.0,92.0,110.0,32.0,32.0,32.0,32.0,60.0,47.0,116.0,105.0,116.0,108.0,101.0,62.0,60.0,47.0,104.0,101.0,97.0,100.0,62.0,92.0,110.0,32.0,32.0,32.0,32.0,60.0,98.0,111.0,100.0,121.0,62.0,92.0,110.0,32.0,32.0,32.0,32.0,32.0,32.0,32.0,32.0,87.0,101.0,108.0,99.0,111.0,109.0,101.0,32.0,78.0,105.0,99.0,107.0,44.0,32.0,116.0,111.0,32.0,112.0,97.0,103.0,101.0,32.0,53.0,33.0,92.0,110.0,32.0,32.0,32.0,32.0,60.0,47.0,98.0,111.0,100.0,121.0,62.0,92.0,110.0,60.0,47.0,104.0,116.0,109.0,108.0,62.0,34.0,44.0,34.0,102.0,111.0,114.0,109.0,97.0,116.0,34.0,58.0,34.0,115.0,116.0,114.0,105.0,110.0,103.0,91.0,49.0,50.0,56.0,93.0,34.0,125.0,125.0,44.0,123.0,34.0,116.0,111.0,112.0,105.0,99.0,34.0,58.0,34.0,115.0,116.0,97.0,116.0,117.0,115.0,47.0,97.0,102.0,99.0,50.0,97.0,48.0,99.0,57.0,46.0,99.0,52.0,99.0,52.0,99.0,56.0,34.0,44.0,34.0,100.0,97.0,116.0,97.0,34.0,58.0,123.0,34.0,116.0,101.0,120.0,116.0,34.0,58.0,34.0,32.0,34.0,44.0,34.0,102.0,105.0,108.0,108.0,34.0,58.0,34.0,98.0,108.0,117.0,101.0,34.0,44.0,34.0,115.0,104.0,97.0,112.0,101.0,34.0,58.0,34.0,100.0,111.0,116.0,34.0,125.0,125.0,44.0,123.0,34.0,116.0,111.0,112.0,105.0,99.0,34.0,58.0,34.0,100.0,101.0,98.0,117.0,103.0,34.0,44.0,34.0,100.0,97.0,116.0,97.0,34.0,58.0,123.0,34.0,105.0,100.0,34.0,58.0,34.0,102.0,98.0,55.0,53.0,51.0,54.0,55.0,53.0,46.0,56.0,102.0,49.0,51.0,50.0,56.0,34.0,44.0,34.0,122.0,34.0,58.0,34.0,50.0,55.0,50.0,51.0,99.0,54.0,56.0,100.0,46.0,97.0,55.0,100.0,99.0,52.0,97.0,34.0,44.0,34.0,110.0,97.0,109.0,101.0,34
.0,58.0,34.0,34.0,44.0,34.0,116.0,111.0,112.0,105.0,99.0,34.0,58.0,34.0,34.0,44.0,34.0,112.0,114.0,111.0,112.0,101.0,114.0,116.0,121.0,34.0,58.0,34.0,112.0,97.0,121.0,108.0,111.0,97.0,100.0,34.0,44.0,34.0,109.0,115.0,103.0,34.0,58.0,34.0,123.0,92.0,110.0,32.0,92.0,34.0,105.0,100.0,92.0,34.0,58.0,32.0,54.0,44.0,92.0,110.0,32.0,92.0,34.0,100.0,101.0,118.0,105.0,99.0,101.0,32.0,116.0,105.0,116.0,108.0,101.0,92.0,34.0,58.0,32.0,92.0,34.0,83.0,109.0,97.0,114.0,116.0,32.0,84.0,104.0,101.0,114.0,109.0,111.0,115.0,116.0,97.0,116.0,92.0,34.0,44.0,92.0,110.0,32.0,92.0,34.0,99.0,117.0,114.0,114.0,101.0,110.0,116.0,32.0,116.0,101.0,109.0,112.0,101.0,114.0,97.0,116.0,117.0,114.0,101.0,92.0,34.0,58.0,32.0,50.0,57.0,46.0,57.0,51.0,49.0,57.0,53.0,57.0,48.0,51.0,54.0,52.0,57.0,55.0,57.0,49.0,53.0,44.0,92.0,110.0,32.0,92.0,34.0,65.0,67.0,95.0,115.0,116.0,97.0,116.0,101.0,92.0,34.0,58.0,32.0,116.0,114.0,117.0,101.0,44.0,92.0,110.0,32.0,92.0,34.0,115.0,116.0,97.0,116.0,101.0,32.0,111.0,102.0,32.0,116.0,104.0,101.0,114.0,109.0,111.0,115.0,116.0,97.0,116.0,92.0,34.0,58.0,32.0,92.0,34.0,72.0,111.0,117.0,115.0,101.0,32.0,116.0,101.0,109.0,112.0,101.0,114.0,97.0,116.0,117.0,114.0,101.0,32.0,50.0,57.0,46.0,57.0,51.0,49.0,57.0,53.0,57.0,48.0,51.0,54.0,52.0,57.0,55.0,57.0,49.0,53.0,32.0,65.0,67.0,32.0,111.0,110.0,46.0,92.0,34.0,92.0,110.0,125.0,34.0,44.0,34.0,102.0,111.0,114.0,109.0,97.0,116.0,34.0,58.0,34.0,79.0,98.0,106.0,101.0,99.0,116.0,34.0,125.0,125.0,93.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0
.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0
};

// The name of this function is important for Arduino compatibility.
void setup() {
  // Set up logging. Google style is to avoid globals or statics because of
  // lifetime uncertainty, but since this has a trivial destructor it's okay.
  // NOLINTNEXTLINE(runtime-global-variables)
  static tflite::MicroErrorReporter micro_error_reporter;
  error_reporter = &micro_error_reporter;

  // Map the model into a usable data structure. This doesn't involve any
  // copying or parsing, it's a very lightweight operation.
  model = tflite::GetModel(CNN_11_gen_clf_conv2d_v5_tflite);
  if (model->version() != TFLITE_SCHEMA_VERSION) {
    TF_LITE_REPORT_ERROR(error_reporter,
                         "Model provided is schema version %d not equal "
                         "to supported version %d.",
                         model->version(), TFLITE_SCHEMA_VERSION);
    return;
  }

  // This pulls in all the operation implementations we need.
  // NOLINTNEXTLINE(runtime-global-variables)
  static tflite::AllOpsResolver resolver;

  // Build an interpreter to run the model with.
  static tflite::MicroInterpreter static_interpreter(
      model, resolver, tensor_arena, kTensorArenaSize, error_reporter);
  interpreter = &static_interpreter;

  // Allocate memory from the tensor_arena for the model's tensors.
  TfLiteStatus allocate_status = interpreter->AllocateTensors();
  if (allocate_status != kTfLiteOk) {
    TF_LITE_REPORT_ERROR(error_reporter, "AllocateTensors() failed");
    return;
  }

  // Obtain pointers to the model's input and output tensors.
  input = interpreter->input(0);
  output = interpreter->output(0);
}

// The name of this function is important for Arduino compatibility.
void loop() {
  // Place the quantized input in the model's input tensor
  for (int i = 0; i < 1522; i++) {
    input->data.int8[i] = (int8_t)((X_test[i] / input->params.scale) + input->params.zero_point);  
  }

  // Run inference, and report any error
  TfLiteStatus invoke_status = interpreter->Invoke();
  if (invoke_status != kTfLiteOk) {
    TF_LITE_REPORT_ERROR(error_reporter, "Invoke failed on x: %f\n",
                         static_cast<float>(X_test[0]));
    return;
  }
  
  // Dequantize the output from integer to floating-point
  float y[10] = { -1.0 };
  for (int i = 0; i < 10; i++) {
    y[i] = (float)((output->data.int8[i] - output->params.zero_point) * output->params.scale);
  }

  Serial.println("PREDICTED CLASS: ");
  for (int i = 0; i < 10; i++) {
    Serial.print("LABEL ");
    Serial.print(i);
    Serial.print(" = ");
    Serial.println(y[i]);
  }
 
  Serial.println("Next inference...\n");
  delay(3000);
}
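
The quantize/dequantize arithmetic in loop() can be checked off-device. Below is a minimal standalone sketch of that mapping; the scale and zero-point values are assumed for illustration (the real ones come from input->params and output->params at runtime), and it uses round-to-nearest rather than the plain truncating cast above:

```cpp
#include <cstdint>
#include <cmath>

// Quantize a float into the model's int8 domain. `scale` and `zero_point`
// come from input->params at runtime; the example values below are assumed.
// lroundf() rounds to nearest, slightly more accurate than a truncating cast.
static int8_t quantize(float x, float scale, int32_t zero_point) {
    return (int8_t)lroundf(x / scale + (float)zero_point);
}

// Invert the mapping for the output tensor (output->params at runtime).
static float dequantize(int8_t q, float scale, int32_t zero_point) {
    return (float)(q - zero_point) * scale;
}

// Example with assumed parameters scale=0.5, zero_point=-128:
// quantize(34.0f, 0.5f, -128) == -60, and dequantize(-60, 0.5f, -128)
// recovers 34.0f exactly.
```

The round trip is exact whenever the input is a multiple of the scale; otherwise the reconstruction error is at most half the scale.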

The output is:

09:49:15.768 -> PREDICTED CLASS: 
09:49:15.768 -> LABEL 0 = 1.00
09:49:15.768 -> LABEL 1 = 0.00
09:49:15.768 -> LABEL 2 = 0.00
09:49:15.768 -> LABEL 3 = 0.00
09:49:15.768 -> LABEL 4 = 0.00
09:49:15.768 -> LABEL 5 = 0.00
09:49:15.768 -> LABEL 6 = 0.00
09:49:15.768 -> LABEL 7 = 0.00
09:49:15.768 -> LABEL 8 = 0.00
09:49:15.768 -> LABEL 9 = 0.00
09:49:15.768 -> Next inference...
09:49:15.768 -> 

The label is correct for X_test. Now it seems to be working, but I'll continue testing with other inputs as well.
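
For reference, the printed scores map to a class by taking the index of the largest value. A minimal argmax helper over a score vector like the y array above (plain C++, nothing TFLM-specific):

```cpp
// Return the index of the largest score, i.e. the predicted class
// for an n-way classifier.
static int argmax(const float* scores, int n) {
    int best = 0;
    for (int i = 1; i < n; i++) {
        if (scores[i] > scores[best]) best = i;
    }
    return best;
}
```

With the output shown above (LABEL 0 = 1.00, all others 0.00), argmax returns 0.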

Hopefully we'll manage to get it working with the library too!


eppane avatar eppane commented on September 17, 2024

I added also the functionality to enter inputs from the Serial monitor.

I was able to get correct results for all labels. 👍 So the TFLM code seems to be working well now!

