
flutter-tflite's Introduction


Documentation

TensorFlow is an end-to-end open source platform for machine learning. It has a comprehensive, flexible ecosystem of tools, libraries, and community resources that lets researchers push the state-of-the-art in ML and developers easily build and deploy ML-powered applications.

TensorFlow was originally developed by researchers and engineers working within the Machine Intelligence team at Google Brain to conduct research in machine learning and neural networks. However, the framework is versatile enough to be used in other areas as well.

TensorFlow provides stable Python and C++ APIs, as well as APIs for other languages that are not guaranteed to be backward compatible.

Keep up to date with release announcements and security updates by subscribing to announce@tensorflow.org. See all the mailing lists.

Install

See the TensorFlow install guide for the pip package, enabling GPU support, using a Docker container, and building from source.

To install the current release, which includes support for CUDA-enabled GPU cards (Ubuntu and Windows):

$ pip install tensorflow

Other devices (DirectX and macOS Metal) are supported through device plugins.

A smaller CPU-only package is also available:

$ pip install tensorflow-cpu

To update TensorFlow to the latest version, add the --upgrade flag to the above commands.

Nightly binaries are available for testing via the tf-nightly and tf-nightly-cpu packages on PyPI.

Try your first TensorFlow program

$ python
>>> import tensorflow as tf
>>> tf.add(1, 2).numpy()
3
>>> hello = tf.constant('Hello, TensorFlow!')
>>> hello.numpy()
b'Hello, TensorFlow!'

For more examples, see the TensorFlow tutorials.

Contribution guidelines

If you want to contribute to TensorFlow, be sure to review the contribution guidelines. This project adheres to TensorFlow's code of conduct. By participating, you are expected to uphold this code.

We use GitHub issues for tracking requests and bugs. Please see the TensorFlow Forum for general questions and discussion, and direct specific questions to Stack Overflow.

The TensorFlow project strives to abide by generally accepted best practices in open-source software development.

Patching guidelines

Follow these steps to patch a specific version of TensorFlow, for example, to apply fixes to bugs or security vulnerabilities:

  • Clone the TensorFlow repo and switch to the corresponding branch for your desired TensorFlow version, for example, branch r2.8 for version 2.8.
  • Apply (that is, cherry-pick) the desired changes and resolve any code conflicts.
  • Run TensorFlow tests and ensure they pass.
  • Build the TensorFlow pip package from source.

Continuous build status

You can find more community-supported platforms and configurations in the TensorFlow SIG Build community builds table.

Official Builds

Build Type Status Artifacts
Linux CPU Status PyPI
Linux GPU Status PyPI
Linux XLA Status TBA
macOS Status PyPI
Windows CPU Status PyPI
Windows GPU Status PyPI
Android Status Download
Raspberry Pi 0 and 1 Status Py3
Raspberry Pi 2 and 3 Status Py3
Libtensorflow MacOS CPU Status Temporarily Unavailable Nightly Binary Official GCS
Libtensorflow Linux CPU Status Temporarily Unavailable Nightly Binary Official GCS
Libtensorflow Linux GPU Status Temporarily Unavailable Nightly Binary Official GCS
Libtensorflow Windows CPU Status Temporarily Unavailable Nightly Binary Official GCS
Libtensorflow Windows GPU Status Temporarily Unavailable Nightly Binary Official GCS

Resources

Learn more about the TensorFlow community and how to contribute.

Courses

License

Apache License 2.0

flutter-tflite's People

Contributors

aashrut, alexi-zemcov, amansikarwar, captaindario, densa, gregorscholz, luiscib3r, mdejeans, mirland, mlopez0, mozharovsky, paultr, pperle, qu-ngx, readytopark, saksham-gt, st-nhanho, vadympinchuk, windmaple, yamhoresh


flutter-tflite's Issues

macOS support

Is it also possible to use it with a macOS target? 🤔

How to enable Flex delegates?

On TFLite's Android plugin, Flex delegates can be enabled by adding implementation 'org.tensorflow:tensorflow-lite-select-tf-ops:2.12.0' to the dependencies within android/build.gradle, and by attaching the delegate to the interpreter's options:

final Interpreter.Options options = new Interpreter.Options();
FlexDelegate flexDelegate = new FlexDelegate();
options.addDelegate(flexDelegate);
final Interpreter interpreter = new Interpreter(buffer, options);

How can we accomplish the same on Flutter?

Thanks in advance!

ERROR: tensorflow/lite/kernels/concatenation.cc:159 t->dims->data[d] != t0->dims->data[d] (1 != 2)

Hi,

I converted my model from ONNX to TensorFlow and I'm using the image_classification_mobilenet example to test it with the flutter-tflite plugin, but I get this error:

tensorflow/lite/kernels/concatenation.cc:159 t->dims->data[d] != t0->dims->data[d] (1 != 2)
ERROR: tensorflow/lite/kernels/concatenation.cc:159 t->dims->data[d] != t0->dims->data[d] (1 != 2)
Node number 6 (CONCATENATION) failed to prepare.
ERROR: Node number 6 (CONCATENATION) failed to prepare.
[VERBOSE-2:dart_vm_initializer.cc(41)] Unhandled Exception: Bad state: failed precondition
#0 checkState (package:quiver/check.dart:74:5)
#1 Interpreter.allocateTensors (package:tflite_flutter/src/interpreter.dart:156:5)
#2 Interpreter.runInference (package:tflite_flutter/src/interpreter.dart:204:7)
#3 Interpreter.runForMultipleInputs (package:tflite_flutter/src/interpreter.dart:180:5)
#4 Interpreter.run (package:tflite_flutter/src/interpreter.dart:172:5)
#5 _HomeState.runInference (package:myproject/main.dart:151:17)
#6 _HomeState.processImage (package:myproject/main.dart:137:7)
#7 _HomeState.build. (package:myproject/main.dart:257:23)

[✓] Flutter (Channel stable, 3.10.2, on macOS 13.1 22C65 darwin-arm64, locale en-FR)
[✓] Android toolchain - develop for Android devices (Android SDK version 30.0.3)
[✓] Xcode - develop for iOS and macOS (Xcode 14.2)
[✓] Chrome - develop for the web
[✓] Android Studio (version 4.0)
[✓] Android Studio (version 2021.1)
[✓] IntelliJ IDEA Community Edition (version 2022.3.1)
[✓] VS Code (version 1.78.1)
[✓] Connected device (1 available)
[✓] Network resources

• No issues found!

Any ideas, please?

Invalid argument(s): Failed to lookup symbol 'TFLGpuDelegateCreate' : undefined symbol: TFLGpuDelegateCreate

Dart version : Dart SDK version: 2.19.6 (stable)
Flutter version : Flutter 3.7.9
tflite_flutter: ^0.10.0


Flutter code load interpreter :

Future<Interpreter> loadModel() async {
  final options = InterpreterOptions()..threads = 6;
  options.addDelegate(GpuDelegate());
  // Load model from assets
  final interpreter = await Interpreter.fromAsset(
    "assets/tflite/keypoint.tflite",
    options: options,
  );
  return interpreter;
}
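A note on the symbol itself (an observation, not maintainer guidance): TFLGpuDelegateCreate is the entry point of the Metal (iOS) GPU delegate, so it is expected to be undefined in the Android library. A minimal platform-aware sketch, assuming the GpuDelegateV2 (Android) and GpuDelegate (iOS) classes of tflite_flutter 0.10.x:

import 'dart:io';

import 'package:tflite_flutter/tflite_flutter.dart';

Future<Interpreter> loadModelWithGpu() async {
  final options = InterpreterOptions()..threads = 6;
  if (Platform.isAndroid) {
    // Android exposes the OpenCL/OpenGL-backed delegate as GpuDelegateV2.
    options.addDelegate(GpuDelegateV2());
  } else if (Platform.isIOS) {
    // GpuDelegate binds the Metal delegate (TFLGpuDelegateCreate).
    options.addDelegate(GpuDelegate());
  }
  return Interpreter.fromAsset(
    "assets/tflite/keypoint.tflite",
    options: options,
  );
}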

build.gradle:

def localProperties = new Properties()
def localPropertiesFile = rootProject.file('local.properties')
if (localPropertiesFile.exists()) {
    localPropertiesFile.withReader('UTF-8') { reader ->
        localProperties.load(reader)
    }
}

def flutterRoot = localProperties.getProperty('flutter.sdk')
if (flutterRoot == null) {
    throw new GradleException("Flutter SDK not found. Define location with flutter.sdk in the local.properties file.")
}

def flutterVersionCode = localProperties.getProperty('flutter.versionCode')
if (flutterVersionCode == null) {
    flutterVersionCode = '1'
}

def flutterVersionName = localProperties.getProperty('flutter.versionName')
if (flutterVersionName == null) {
    flutterVersionName = '1.0'
}

apply plugin: 'com.android.application'
apply plugin: 'kotlin-android'
apply from: "$flutterRoot/packages/flutter_tools/gradle/flutter.gradle"

android {
    packagingOptions {
        exclude("META-INF/DEPENDENCIES")
        exclude("META-INF/LICENSE")
        exclude("META-INF/LICENSE.txt")
        exclude("META-INF/license.txt")
        exclude("META-INF/NOTICE")
        exclude("META-INF/NOTICE.txt")
        exclude("META-INF/notice.txt")
        exclude("META-INF/ASL2.0")
        exclude("META-INF/*.kotlin_module")
    }
    compileSdkVersion 33

    compileOptions {
        sourceCompatibility 1.8
        targetCompatibility 1.8
    }

    kotlinOptions {
        jvmTarget = '1.8'
    }

    sourceSets {
        main.java.srcDirs += 'src/main/kotlin'
    }

    lintOptions {
        checkReleaseBuilds false
    }

    defaultConfig {
        // TODO: Specify your own unique Application ID (https://developer.android.com/studio/build/application-id.html).
        applicationId "com.exaple.app"
        minSdkVersion 22
        targetSdkVersion 33
        versionCode flutterVersionCode.toInteger()
        versionName flutterVersionName
        multiDexEnabled true
        // ndk {
        //     abiFilters "armeabi", "x86", "armeabi-v7a","x86_64", "mips",
        //             "mips64", "arm64-v8a"
        // }
        ndk {
            abiFilters "arm64-v8a", "armeabi-v7a", "x86", "x86_64"
            }
	  externalNativeBuild {
            cmake {
                // Enabling exceptions, RTTI
                // And setting C++ standard version
                cppFlags '-frtti -fexceptions -std=c++11'

                // Shared runtime for shared libraries
                arguments "-DANDROID_STL=c++_shared"
            }
        }
    }
     
    externalNativeBuild {
        cmake {
            path "CMakeLists.txt"
        }
    }

    buildTypes {
        release {
            // TODO: Add your own signing config for the release build.
            // Signing with the debug keys for now, so `flutter run --release` works.
            signingConfig signingConfigs.debug
        }
    }

    flavorDimensions "flavors"
    productFlavors {
        dev {
            dimension "flavors"
            applicationIdSuffix ".dev"
            versionNameSuffix "-dev"
        }
        staging {
            dimension "flavors"
            applicationIdSuffix ".staging"
            versionNameSuffix "-staging"
        }
        prod {
            dimension "flavors"
        }
    }

}

flutter {
    source '../..'
}

dependencies {
    implementation "org.jetbrains.kotlin:kotlin-stdlib-jdk7:$kotlin_version"
    implementation 'com.android.support:multidex:1.0.3'
    implementation fileTree(dir: 'libs', include: ['*.jar'])
    implementation("androidx.core:core-ktx:1.9.0")
    implementation 'com.appsflyer:af-android-sdk:6.3.2'
    implementation("androidx.lifecycle:lifecycle-process:2.2.0")
    implementation("com.moengage:moe-android-sdk:12.5.04")
    implementation 'com.facebook.android:facebook-android-sdk:latest.release'
    implementation 'androidx.lifecycle:lifecycle-viewmodel-ktx:2.5.1'
    implementation("org.jetbrains.kotlinx:kotlinx-coroutines-android:1.6.4")
    implementation 'com.tom-roush:pdfbox-android:2.0.27.0'
}

apply plugin: 'com.google.gms.google-services'
apply plugin: 'com.google.firebase.crashlytics'

Failed to load dynamic library 'libtensorflowlite_jni.so':

  • Trying to run in an Android Physical Device.
  • The code is very similar if not identical to one of the example apps.

This line fails:

interpreter = await Interpreter.fromAsset(modelPath);

With the following error:

[ERROR:flutter/runtime/dart_vm_initializer.cc(41)] Unhandled Exception: 
Invalid argument(s): Failed to load dynamic library 'libtensorflowlite_jni.so': dlopen failed: cannot locate symbol "strtod_l" referenced by "/data/app/com.pdl.tflite_poc-2/lib/arm64/libtensorflowlite_jni.so"...

Pubspec dependencies

  tflite_flutter: ^0.10.0
  image_picker: ^0.8.7+5
  image: ^4.0.17

The entire code is this here:

import 'dart:io';

import 'package:flutter/material.dart';
import 'package:flutter/services.dart';
import 'package:image_picker/image_picker.dart';
import 'package:image/image.dart' as img;
import 'package:tflite_flutter/tflite_flutter.dart';

void main() {
  runApp(const MyApp());
}

class MyApp extends StatelessWidget {
  const MyApp({super.key});

  @override
  Widget build(BuildContext context) {
    return MaterialApp(
      title: 'Flutter Demo',
      theme: ThemeData(
        colorScheme: ColorScheme.fromSeed(seedColor: Colors.deepPurple),
        useMaterial3: true,
      ),
      home: const MyHomePage(),
    );
  }
}

class MyHomePage extends StatefulWidget {
  const MyHomePage({Key? key}) : super(key: key);

  @override
  State<MyHomePage> createState() => _MyHomePageState();
}

class _MyHomePageState extends State<MyHomePage> {
  static const modelPath = 'assets/model.tflite';
  static const labelsPath = 'assets/labels.txt';

  late final Interpreter interpreter;
  late final List<String> labels;

  Tensor? inputTensor;
  Tensor? outputTensor;

  final imagePicker = ImagePicker();
  String? imagePath;
  img.Image? image;

  Map<String, int>? classification;

  @override
  void initState() {
    super.initState();
    _loadModel();
    _loadLabels();
  }

  // Clean old results when press some take picture button
  void _cleanResult() {
    imagePath = null;
    image = null;
    classification = null;
    setState(() {});
  }

  Future<void> _takePic() async {
    _cleanResult();
    final result = await imagePicker.pickImage(
      source: ImageSource.camera,
    );

    imagePath = result?.path;
    setState(() {});
    _processImage();
  }

  // Load model
  Future<void> _loadModel() async {
    final options = InterpreterOptions();

    // Use XNNPACK Delegate
    if (Platform.isAndroid) {
      options.addDelegate(XNNPackDelegate());
    }

    // Load model from assets (`interpreter` is `late final`, so it must be
    // assigned exactly once, with the options attached)
    interpreter = await Interpreter.fromAsset(modelPath, options: options);
    // Get tensor input shape [1, 224, 224, 3]
    inputTensor = interpreter.getInputTensors().first;
    // Get tensor output shape [1, 1001]
    outputTensor = interpreter.getOutputTensors().first;
    setState(() {});

    print('Interpreter loaded successfully');
  }

  // Load labels from assets
  Future<void> _loadLabels() async {
    final labelTxt = await rootBundle.loadString(labelsPath);
    labels = labelTxt.split('\n');
  }

  // Process picked image
  Future<void> _processImage() async {
    if (imagePath != null) {
      // Read image bytes from file
      final imageData = File(imagePath!).readAsBytesSync();

      // Decode image using package:image/image.dart (https://pub.dev/packages/image)
      image = img.decodeImage(imageData);
      setState(() {});

      // Resize image for model input (Mobilenet use [224, 224])
      final imageInput = img.copyResize(
        image!,
        width: 224,
        height: 224,
      );

      // Get image matrix representation [224, 224, 3]
      final imageMatrix = List.generate(
        imageInput.height,
        (y) => List.generate(
          imageInput.width,
          (x) {
            final pixel = imageInput.getPixel(x, y);
            return [pixel.r, pixel.g, pixel.b];
          },
        ),
      );

      // Run model inference
      _runInference(imageMatrix);
    }
  }

  // Run inference
  Future<void> _runInference(
    List<List<List<num>>> imageMatrix,
  ) async {
    // Set tensor input [1, 224, 224, 3]
    final input = [imageMatrix];
    // Set tensor output [1, 1001]
    final output = [List<int>.filled(1001, 0)];

    // Run inference
    interpreter.run(input, output);

    // Get first output tensor
    final result = output.first;

    // Set classification map {label: points}
    classification = <String, int>{};

    for (var i = 0; i < result.length; i++) {
      if (result[i] != 0) {
        // Set label: points
        classification![labels[i]] = result[i];
      }
    }

    setState(() {});
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      body: SafeArea(
        child: Center(
          child: Column(
            children: [
              Expanded(
                  child: Stack(
                alignment: Alignment.center,
                children: [
                  if (imagePath != null) Image.file(File(imagePath!)),
                  Padding(
                    padding: const EdgeInsets.all(8.0),
                    child: Row(
                      mainAxisAlignment: MainAxisAlignment.spaceBetween,
                      children: [
                        Column(
                          crossAxisAlignment: CrossAxisAlignment.start,
                          children: [
                            const Row(),
                            // Show model information
                            Text(
                              'Input: (shape: ${inputTensor?.shape} type: ${inputTensor?.type})',
                            ),
                            Text(
                              'Output: (shape: ${outputTensor?.shape} type: ${outputTensor?.type})',
                            ),
                            const SizedBox(height: 8),
                            // Show picked image information
                            if (image != null) ...[
                              Text('Num channels: ${image?.numChannels}'),
                              Text(
                                  'Bits per channel: ${image?.bitsPerChannel}'),
                              Text('Height: ${image?.height}'),
                              Text('Width: ${image?.width}'),
                            ],
                            const SizedBox(height: 24),
                            // Show classification result
                            Expanded(
                              child: SingleChildScrollView(
                                child: Column(
                                  crossAxisAlignment: CrossAxisAlignment.start,
                                  mainAxisAlignment: MainAxisAlignment.end,
                                  children: [
                                    if (classification != null)
                                      ...(classification!.entries.toList()
                                            ..sort(
                                              (a, b) =>
                                                  a.value.compareTo(b.value),
                                            ))
                                          .reversed
                                          .map(
                                            (e) => Container(
                                              padding: const EdgeInsets.all(8),
                                              color: Colors.orange
                                                  .withOpacity(0.3),
                                              child: Row(
                                                children: [
                                                  Text('${e.key}: ${e.value}'),
                                                ],
                                              ),
                                            ),
                                          ),
                                  ],
                                ),
                              ),
                            ),
                          ],
                        ),
                      ],
                    ),
                  ),
                ],
              )),
            ],
          ),
        ),
      ),
      floatingActionButton: FloatingActionButton(
        onPressed: _takePic,
        child: const Icon(
          Icons.camera_alt,
          size: 64,
        ),
      ),
    );
  }
}

Are there any installation instructions I didn't follow? I read something about an install.sh file I need to run to install dynamic libraries, but that didn't help, as the libraries it added don't seem to be what the plugin was looking for. The file I found appears to be from the deprecated library.

Any help is appreciated!

docs: include Roadmap

Description

As a user (or potential user) of the package, I would like to know the upcoming or possible features and what the team is working on. Is there any publicly available documentation or board that indicates this?

Additional context

Update Flutter package/plugin dependencies

The plugin currently uses older versions of its dependencies. This causes issues for devs who want to use the tflite plugin in apps that depend on newer versions of packages like ffi.

The input data tfliteType int is unsupported

I am working through this tutorial and also looking for help here.

From my main I call this function with a path to my assets folder.

Future<void> processImage(String imagePath) async {
    // Read image byte data
    final rawImage = await rootBundle.load(imagePath);

    // change it to a uint8 list
    final imageData = rawImage.buffer.asUint8List();

    // decode image
    final image = img.decodeImage(imageData);

    // Resize image to 300x300
    final imageInput = img.copyResize(
      image!,
      width: 300,
      height: 300,
    );

    // Get matrix representation
    final imageMatrix = List.generate(
      imageInput.height,
      (y) => List.generate(
        imageInput.width,
        (x) {
          final pixel = imageInput.getPixel(x, y);
          return [pixel.r, pixel.g, pixel.b];
        },
      ),
    );

    _run(imageMatrix);
  }

Here is the actual _run method:

Future<void> _run(List<List<List<num>>> imageMatrix) async {
    // Set tensor input to [1, 300, 300, 3]
    final input = [imageMatrix];

    // Set tensor output
    final output = [List<List<num>>.filled(10, List<num>.filled(4, 0))];
    // final output = List<List<double>>.filled(4, List<double>.empty());

    // inference
    debugPrint('Actual input type: ${input.runtimeType}');
    debugPrint('Actual input shape: ${input.shape}');

    debugPrint('Actual output type: ${output.runtimeType}');
    debugPrint('Actual output shape: ${output.shape}');

    _interpreter.allocateTensors();
    _interpreter.run(input, output);
    debugPrint(output.toString());
  }

The model gets loaded like this:

void _loadModel() async {
    _interpreter = await Interpreter.fromAsset(_modelFile);
    debugPrint('Expected input shape: ${_interpreter.getInputTensors().first}');
    debugPrint('Expected output shape: ${_interpreter.getOutputTensors().first}');
    debugPrint('Interpreter loaded.');
  }

And here is my output:

I/flutter ( 5507): Expected input shape: Tensor{_tensor: Pointer: address=0xee4b7320, name: normalized_input_image_tensor, type: TfLiteType.uint8, shape: [1, 300, 300, 3], data: 270000}
I/flutter ( 5507): Expected output shape: Tensor{_tensor: Pointer: address=0xee4b7120, name: TFLite_Detection_PostProcess, type: TfLiteType.float32, shape: [1, 10, 4], data: 160}
I/flutter ( 5507): Interpreter loaded.
I/flutter ( 5507): Actual input type: List<List<List<List<num>>>>
I/flutter ( 5507): Actual input shape: [1, 300, 300, 3]
I/flutter ( 5507): Actual output type: List<List<List<num>>>
I/flutter ( 5507): Actual output shape: [1, 10, 4]
E/flutter ( 5507): [ERROR:flutter/runtime/dart_vm_initializer.cc(41)] Unhandled Exception: Invalid argument(s): The input data tfliteType int is unsupported

In the image classification example, the input is also just a List<List<List<List<num>>>>, but there it gets accepted.
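One observation (a sketch, not a confirmed fix): the log shows the input tensor type is TfLiteType.uint8. A way to rule out the nested-num-list conversion is to feed a flat byte buffer instead; this assumes tflite_flutter accepts TypedData/ByteBuffer inputs, as in the runForMultipleInputs snippets elsewhere in this tracker.

import 'dart:typed_data';

/// Flattens a [1, 300, 300, 3] matrix of 0-255 values into a Uint8List,
/// matching a uint8 input tensor such as normalized_input_image_tensor.
Uint8List toUint8Input(List<List<List<List<num>>>> input) {
  final bytes = <int>[];
  for (final batch in input) {
    for (final row in batch) {
      for (final pixel in row) {
        for (final channel in pixel) {
          bytes.add(channel.toInt() & 0xff);
        }
      }
    }
  }
  return Uint8List.fromList(bytes);
}

// Hypothetical usage with the _interpreter from the snippet above:
// _interpreter.run(toUint8Input(input).buffer, output);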

Pose Detection

Can you provide an example of how to run pose detection using this package?

runForMultipleInputs is slower

I'm trying to run image classification on a live camera feed, and I observe slowness in runForMultipleInputs: more than 200 ms for one image! Is that normal?

  List<List<Object>> _runInference(ByteBuffer byteBuffer) {
    final output = {
      0: [List<List<num>>.filled(10, List<num>.filled(4, 0))],
      1: [List<num>.filled(10, 0)],
      2: [List<num>.filled(10, 0)],
      3: [0.0],
    };
    _interpreter!.runForMultipleInputs([byteBuffer], output);
    print(_interpreter?.lastNativeInferenceDurationMicroSeconds.toString()); // ---> more than 200 ms
    ....
   ...
  }

NOTE :

  • I removed interpreterOptions.addDelegate(GpuDelegate()), as the app crashes with that option
  • I'm working on the master branch, so I have the listEquals fix

Device: iPhone 11

IOS: App crashing when building ad-hoc version for firebase App Distribution

Hi, when I run the app locally on an iOS device everything works great. But if I build it with flutter build ipa --export-method=ad-hoc for Firebase App Distribution and download the app, the app crashes as soon as the tflite plugin gets triggered.
In the older version of this package, I think it was 0.9.0, I didn't have any problems running the ipa via Firebase.
I would really appreciate your help.

This is the error message, which I get from firebase crashlytics:

EXC_BAD_ACCESS (KERN_INVALID_ADDRESS)

Crashed: com.apple.main-thread
0 ??? 0x1c75b2cd4 (Fehlt)
1 ??? 0x10610ccf4 (Fehlt)
2 ??? 0x10610ccf4 (Fehlt)
3 ??? 0x10610cee4 (Fehlt)
4 ??? 0x106108fd0 (Fehlt)
5 Runner 0x80d0 +[GeneratedPluginRegistrant registerWithRegistry:] + 133 (GeneratedPluginRegistrant.m:133)
6 Runner 0x8440 AppDelegate.application(:didFinishLaunchingWithOptions:) + 11 (AppDelegate.swift:11)
7 Runner 0x8708 @objc AppDelegate.application(
:didFinishLaunchingWithOptions:) ()
8 ??? 0x1cf867888 (Fehlt)
9 ??? 0x1cf866fac (Fehlt)
10 ??? 0x1cf865f88 (Fehlt)
11 ??? 0x1cf865bd4 (Fehlt)
12 ??? 0x1cf5ab600 (Fehlt)
13 ??? 0x1cf8ed918 (Fehlt)
14 ??? 0x1cf79dfa4 (Fehlt)
15 ??? 0x1cf79ddcc (Fehlt)
16 ??? 0x1cf79d97c (Fehlt)
17 ??? 0x1cf79d848 (Fehlt)
18 ??? 0x1cfe7ffa8 (Fehlt)
19 ??? 0x1cff18f98 (Fehlt)
20 ??? 0x1cf649958 (Fehlt)
21 ??? 0x1cfabc7a8 (Fehlt)
22 ??? 0x1cf71c0b8 (Fehlt)
23 ??? 0x1cf71bf28 (Fehlt)
24 ??? 0x1cf71b47c (Fehlt)
25 ??? 0x1cf71b208 (Fehlt)
26 ??? 0x1e2f75500 (Fehlt)
27 ??? 0x1e2fb451c (Fehlt)
28 ??? 0x1e2f79294 (Fehlt)
29 ??? 0x1e2fb4154 (Fehlt)
30 ??? 0x1d4929fdc (Fehlt)
31 ??? 0x1d492da5c (Fehlt)
32 ??? 0x1e2f833b0 (Fehlt)
33 ??? 0x1e2f82f4c (Fehlt)
34 ??? 0x1e2f8572c (Fehlt)
35 ??? 0x1cd425f54 (Fehlt)
36 ??? 0x1cd43232c (Fehlt)
37 ??? 0x1cd3b6270 (Fehlt)
38 ??? 0x1cd3cbba8 (Fehlt)
39 ??? 0x1cd3d0ed4 (Fehlt)
40 ??? 0x2066ce368 (Fehlt)
41 ??? 0x1cf8af3d0 (Fehlt)
42 ??? 0x1cf8af034 (Fehlt)
43 Runner 0x8880 main + 6 (AppDelegate.swift:6)
44 ??? 0x1eba38960 (Fehlt)

com.google.firebase.crashlytics.startup
0 ??? 0x209f4dadc (Fehlt)
1 ??? 0x1d492a5f4 (Fehlt)
2 ??? 0x1d492abf4 (Fehlt)
3 ??? 0x1059ef780 (Fehlt)
4 ??? 0x105a04670 (Fehlt)
5 ??? 0x1d4929fdc (Fehlt)
6 ??? 0x1d492b828 (Fehlt)
7 ??? 0x105a041d0 (Fehlt)
8 ??? 0x1059d4474 (Fehlt)
9 ??? 0x105a040c8 (Fehlt)
10 ??? 0x1059e7d68 (Fehlt)
11 ??? 0x1059e7430 (Fehlt)
12 ??? 0x105a03138 (Fehlt)
13 ??? 0x105a0289c (Fehlt)
14 FBLPromises 0x8c48 (Fehlt UUID a9e2820bb798337ca26c3e580903c949)
15 FBLPromises 0x84f0 (Fehlt UUID a9e2820bb798337ca26c3e580903c949)
16 ??? 0x1d49284b4 (Fehlt)
17 ??? 0x1d4929fdc (Fehlt)
18 ??? 0x1d4931774 (Fehlt)
19 ??? 0x1d49321e0 (Fehlt)
20 ??? 0x1d493ce10 (Fehlt)
21 ??? 0x21a3afdf8 (Fehlt)
22 ??? 0x21a3afb98 (Fehlt)

com.apple.UIKit.KeyboardManagement
0 ??? 0x209f4e680 (Fehlt)
1 ??? 0x1d492a9cc (Fehlt)
2 ??? 0x1d492a780 (Fehlt)
3 ??? 0x1d4939860 (Fehlt)
4 ??? 0x1d493940c (Fehlt)
5 ??? 0x1cf8af6a4 (Fehlt)
6 ??? 0x1cd3c4704 (Fehlt)
7 ??? 0x1cd370b6c (Fehlt)
8 ??? 0x1c77d1b08 (Fehlt)
9 ??? 0x1c77a2ef0 (Fehlt)
10 ??? 0x1c7d7c2e4 (Fehlt)
11 ??? 0x21a410f1c (Fehlt)
12 ??? 0x21a403fb4 (Fehlt)
13 ??? 0x1d492a05c (Fehlt)
14 ??? 0x1d4947f58 (Fehlt)
15 ??? 0x1d493156c (Fehlt)
16 ??? 0x1d4932214 (Fehlt)
17 ??? 0x1d493ce10 (Fehlt)
18 ??? 0x21a3afdf8 (Fehlt)
19 ??? 0x21a3afb98 (Fehlt)

com.google.firebase.crashlytics.ios.binary-images
0 ??? 0x209f4e974 (Fehlt)
1 ??? 0x1cd403614 (Fehlt)
2 ??? 0x1cd3e51f8 (Fehlt)
3 ??? 0x1cd3abd84 (Fehlt)
4 ??? 0x1cd39e20c (Fehlt)
5 ??? 0x1cd3e5090 (Fehlt)
6 ??? 0x1cd3aa324 (Fehlt)
7 ??? 0x1c7749a80 (Fehlt)
8 ??? 0x1c7749a08 (Fehlt)
9 ??? 0x1c77499d4 (Fehlt)
10 ??? 0x1059d6d18 (Fehlt)
11 ??? 0x1059d6b70 (Fehlt)
12 ??? 0x1059d67c0 (Fehlt)
13 ??? 0x1d49284b4 (Fehlt)
14 ??? 0x1d4929fdc (Fehlt)
15 ??? 0x1d4931694 (Fehlt)
16 ??? 0x1d49321e0 (Fehlt)
17 ??? 0x1d493ce10 (Fehlt)
18 ??? 0x21a3afdf8 (Fehlt)
19 ??? 0x21a3afb98 (Fehlt)

com.apple.uikit.eventfetch-thread
0 ??? 0x209f4db48 (Fehlt)
1 ??? 0x209f60008 (Fehlt)
2 ??? 0x209f60248 (Fehlt)
3 ??? 0x209f4e08c (Fehlt)
4 ??? 0x1cd3caaf0 (Fehlt)
5 ??? 0x1cd3cbd34 (Fehlt)
6 ??? 0x1cd3d0ed4 (Fehlt)
7 ??? 0x1c7772334 (Fehlt)
8 ??? 0x1c777221c (Fehlt)
9 ??? 0x1cf9e433c (Fehlt)
10 ??? 0x1c778b808 (Fehlt)
11 ??? 0x21a3b06cc (Fehlt)
12 ??? 0x21a3afba4 (Fehlt)

Thread
0 ??? 0x209f4e050 (Fehlt)
1 ??? 0x21a3afe44 (Fehlt)
2 ??? 0x21a3afb98 (Fehlt)

Thread
0 ??? 0x209f4e050 (Fehlt)
1 ??? 0x21a3afe44 (Fehlt)
2 ??? 0x21a3afb98 (Fehlt)

Thread
0 ??? 0x209f4e050 (Fehlt)
1 ??? 0x21a3afe44 (Fehlt)
2 ??? 0x21a3afb98 (Fehlt)

com.google.firebase.crashlytics.MachExceptionServer
0 ??? 0x1059fe39c (Fehlt)
1 ??? 0x1059fd65c (Fehlt)
2 ??? 0x1059fd7ac (Fehlt)
3 ??? 0x1059fd53c (Fehlt)
4 ??? 0x1059eebe8 (Fehlt)
5 ??? 0x1059f2ce8 (Fehlt)
6 ??? 0x1059f28d0 (Fehlt)
7 ??? 0x1059f2514 (Fehlt)
8 ??? 0x21a3b06cc (Fehlt)
9 ??? 0x21a3afba4 (Fehlt)

The 'Pods-Runner' target has transitive dependencies that include statically linked binaries: (TensorFlowLiteSwift)

I followed the iOS initial setup and placed TensorFlowLiteC.framework in ~/.pub-cache/hosted/pub.dartlang.org/tflite_flutter-<plugin-version>/ios/,
then ran pod update in the ios folder, which output "The 'Pods-Runner' target has transitive dependencies that include statically linked binaries: (TensorFlowLiteSwift)".

What am I doing wrong?

Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel stable, 3.10.0, on macOS 13.3.1 22E772610a darwin-arm64, locale
zh-Hans-CN)
[✓] Android toolchain - develop for Android devices (Android SDK version 33.0.0)
[✓] Xcode - develop for iOS and macOS (Xcode 14.2)
[✓] Chrome - develop for the web
[✓] Android Studio (version 2022.2)
[✓] VS Code (version 1.78.2)

get_signature_runner(...) feature

Hi,
I have a model with multiple inputs and outputs. When I run the model in Python, I use the signature runner:

interpreter = tf.lite.Interpreter(model_path=tflite_file)
interpreter.allocate_tensors()
fn = interpreter.get_signature_runner('serving_default')

This nicely maps my inputs and outputs. This feature does not seem to be present in flutter-tflite (I am new to Flutter/Dart). When I use

interpreter.runForMultipleInputs(inputs, outputs)

I need to know the order of the input and output list. When I run

interpreter.getOutputIndex('random_name')

I get

Invalid argument(s): Output error: random_name' is not a valid name for any output. Names of outputs and their indexes are {StatefulPartitionedCall:10: 0, StatefulPartitionedCall:3: 1, StatefulPartitionedCall:0: 2, StatefulPartitionedCall:2: 3, StatefulPartitionedCall:9: 4, StatefulPartitionedCall:8: 5, StatefulPartitionedCall:13: 6, StatefulPartitionedCall:12: 7, StatefulPartitionedCall:5: 8, StatefulPartitionedCall:1: 9, StatefulPartitionedCall:4: 10, StatefulPartitionedCall:11: 11, StatefulPartitionedCall:6: 12, StatefulPartitionedCall:7: 13}

However, the "StatefulPartitionedCall:X" are not what corresponds to my output names in my python code.
I don't want to get the mapping correct, just by trying. Any suggestions, how I can solve this with the available feature set or how I access the "signature runner" from dart?

Thank you!
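One partial workaround (a sketch, assuming tflite_flutter's Tensor exposes a name getter, as the error message above suggests): enumerate the output tensors once and build a name-to-index map, so outputs can at least be addressed by the names the interpreter actually reports.

import 'package:tflite_flutter/tflite_flutter.dart';

/// Maps each output tensor name (e.g. 'StatefulPartitionedCall:10') to its
/// output index, replacing positional guessing in runForMultipleInputs.
Map<String, int> outputIndexByName(Interpreter interpreter) {
  final tensors = interpreter.getOutputTensors();
  return {
    for (var i = 0; i < tensors.length; i++) tensors[i].name: i,
  };
}

Matching these names back to the Python signature still has to be done once by hand (or by inspecting the model in Netron), since the signature-runner mapping is not exposed here.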

I don't know the cause of the "GPU delegate" error that occurs when performing inference.

I am very happy to see that this library has been updated recently.
Thank you so much.

https://github.com/tensorflow/flutter-tflite/tree/main/example/image_classification_mobilenet

Based on the above image inference sample application, I want to run an inference application using my own training data.
So, when I replace the training model and run the inference, I get the following error:

Following operations are not supported by GPU delegate:
MUL: MUL requires one tensor that not less than second in all dimensions.
25 operations will run on the GPU, and the remaining 40 operations will run on the CPU.
TfLiteMetalDelegate Prepare: Failed to allocate id<MTLBuffer>
Node number 65 (TfLiteMetalDelegate) failed to prepare.
Restored original execution plan after delegate application failure.

What is the cause?
Is it the model, or do I need to set additional "ops" settings when converting to ".tflite"?

Do I need any settings for Android and iOS?

The Python code for the ".tflite" conversion was taken directly from the TensorFlow documentation; for reference:

https://www.tensorflow.org/lite/guide/ops_select#convert_a_model

We are very aware that this is in the development stage, but we sincerely look forward to your response.


Reference:

  • execution environment
// flutter doctor
[✓] Flutter (Channel stable, 3.10.0, on macOS 13.3.1 22E772610a darwin-arm64, locale ja-JP)
[✓] Android toolchain - develop for Android devices (Android SDK version 33.0.2)
[✓] Xcode - develop for iOS and macOS (Xcode 14.2)
[✓] Chrome - develop for the web
[✓] Android Studio (version 2022.1)
[✓] VS Code (version 1.78.2)
[✓] Connected device (5 available)

How to use multiple Output Tensors?

I have an OD model with 4 output tensors. However, I am unsure how I should pass them to the loaded interpreter.
Let's take a look at the code in interpreter.dart:

void run(Object input, Object output) {
    var map = <int, Object>{};
    map[0] = output;
    runForMultipleInputs([input], map);
  }

  /// Run for multiple inputs and outputs
  void runForMultipleInputs(List<Object> inputs, Map<int, Object> outputs) {
    if (outputs.isEmpty) {
      throw ArgumentError('Input error: Outputs should not be null or empty.');
    }
    runInference(inputs);
    var outputTensors = getOutputTensors();
    for (var i = 0; i < outputTensors.length; i++) {
      outputTensors[i].copyTo(outputs[i]!);
    }
  }

So we can see that the output fed into the model is assigned to key 0 of a Map<int, Object>. Then, in the for-loop of runForMultipleInputs, outputTensors[i].copyTo(outputs[i]!) runs into a null access exception for every i > 0.
This is how I feed the output to the model.

final output0 = List<List<double>>.filled(1, List<double>.filled(25, 0));
final output1 = List<List<List<double>>>.filled(
    1, List<List<double>>.filled(25, List<double>.filled(4, 0)));
final output2 = List<double>.filled(1, 0);
final output3 = List<List<double>>.filled(1, List<double>.filled(25, 0));
Map<int, Object> output = {
  0: output0,
  1: output1,
  2: output2,
  3: output3,
};
print(output);
interpreter.run(image, output);

So far I have only made it work by casting the output object to a Map. How is this supposed to be used anyway?

  void run(Object input, Object output) {
    var map = <int, Object>{};
    map[0] = output;
    runForMultipleInputs([input], output as Map<int, Object> );
  }
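For what it's worth, the cast should not be necessary: run() is the single-output convenience wrapper (it builds {0: output} itself), so a multi-output model can call runForMultipleInputs directly — a sketch using the shapes above:

// run() wraps its single `output` argument as {0: output}; a model with
// four output tensors goes through runForMultipleInputs instead.
final outputs = <int, Object>{
  0: output0,
  1: output1,
  2: output2,
  3: output3,
};
interpreter.runForMultipleInputs([image], outputs);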

Cannot run example app

I cannot run the example app. It seems main.dart and all the necessary files for the example were deleted in this commit.

@PaulTR I can't get my head around it; is there any reason this was done?

Building for iOS Simulator, but linking in object file built for iOS

Error (Xcode): Building for iOS Simulator, but linking in object file built for iOS, file '/Users/jithinrobert/.pub-cache/hosted/pub.dev/tflite_flutter-0.9.0/ios/TensorFlowLiteC.framework/TensorFlowLiteC' for architecture arm64

Getting this error on iOS; I added TensorFlowLiteC_framework to the pub cache.

Remove `assets` prefix when load model from assets

The method that loads the model from assets already adds the assets prefix itself, so you shouldn't include it. Try removing assets/ from the path to your model.


Try it in your code:
Before prefixing: "assets/model/sr_model.tflite"
After prefixing: "assets/assets/model/sr_model.tflite" (the failing lookup)

Originally posted by @yeikel16 in #23 (comment)

Adding the assets prefix when loading the model causes confusion (#23). Also, you may want to place the model in a folder other than assets, maybe in a models folder, for example.

Fill in Wiki

The wiki has been re-added; documentation needs to be moved over to cover how the plugin works.

Issue with TensorFlow Lite Interpreter in iOS release build

I am passing the tflite file to the interpreter, but it returns nothing. This only happens when I download the iOS build from TestFlight; running the app from the IDE works fine.

tfl.Interpreter? interpreter = await tfl.Interpreter.fromAsset('assets/config/mobilefacenet.tflite');

I have also tried

tfl.Interpreter.fromFile(File(path));

But this also doesn't return anything.

How I am testing: I added a stack view with a Text widget, appended logs to a string, and printed the text to check which line is executed. I am not getting any error even when I wrap the interpreter in a try-catch. I added a text marker before assigning the value to the interpreter and one after; only the before marker shows, not the after one. This is why I assume it gets stuck during assignment and never returns.

Empty predictions

I have trained a custom model using yolov5s and converted it to tflite. I was able to run the model without any errors, but no predictions are present in the output. I suspect it may be due to some issue with converting the image to a float32 list input. Here are my input and output tensor shapes:
[1, 640, 640, 3] [1, 25200, 6]
This is what the Netron app displays for the model:
name: serving_default_input_1:0 type: float32[1,640,640,3] location: 0
name: StatefulPartitionedCall:0 type: float32[1,25200,6] location: 532

Here is my image-to-List<List<List<double>>> function:

Future<List<List<List<double>>>> processImage(var imagePath) async {
    final rawImage = await File(imagePath.path).readAsBytes();
    // decode image
    final image = img.decodeImage(rawImage);
    // Resize image to 640x640
    final imageInput = img.copyResize(
      image!,
      width: 640,
      height: 640,
    );
    // Get matrix representation
    final imageMatrix = List.generate(
      imageInput.height,
      (y) => List.generate(
        imageInput.width,
        (x) {
          final pixel = imageInput.getPixel(x, y);
          return [
            (img.getRed(pixel) - 100) / 127.5,
            (img.getGreen(pixel) - 100) / 127.5,
            (img.getBlue(pixel) - 100) / 127.5
          ];
        },
      ),
    );
    return imageMatrix;
  }

I'm not sure this is the right way to convert input image data for the model.
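For reference (an assumption based on the usual yolov5 export, not a verified fix): yolov5's TFLite export typically expects inputs scaled to [0, 1], i.e. pixel / 255.0, rather than the (pixel - 100) / 127.5 centering above. A sketch of the inner pixel mapping, keeping the image v3 getters from the snippet:

final pixel = imageInput.getPixel(x, y);
// yolov5 exports usually expect [0, 1] floats per channel.
return [
  img.getRed(pixel) / 255.0,
  img.getGreen(pixel) / 255.0,
  img.getBlue(pixel) / 255.0,
];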

[√] Windows Version (Installed version of Windows is version 10 or higher)
[√] Android toolchain - develop for Android devices (Android SDK version 30.0.3)
[√] Chrome - develop for the web
[√] Visual Studio - develop for Windows (Visual Studio Community 2022 17.1.0)
[!] Android Studio (not installed)
[√] VS Code (version 1.79.2)
[√] Connected device (4 available)
    ! Device emulator-5556 is offline.
[√] Network resources

Camera

Will you make a sample program with a camera?
Thanks.

Error in Loading Model

I've been trying to load the model for a day. I tried this around 6 months ago for object detection, but at that time the package didn't support YOLOv8. Now I'm trying to load the tflite model of esrgan-tf2 for super image resolution, but I get a model loading error. I double-checked everything, like imports, pub, and the asset path, but I'm unable to load it. Could you please make an example of how to work with a super image resolution model?

CONSOLE

I/Timeline(32066): Timeline: Activity_launch_request time:451900404
I/GED (32066): ged_boost_gpu_freq, level 100, eOrigin 2, final_idx 27, oppidx_max 27, oppidx_min 0
V/PhoneWindow(32066): DecorView setVisiblity: visibility = 4, Parent = android.view.ViewRootImpl@d044a7d, this = DecorView@333fd72[MainActivity]
2
V/PhoneWindow(32066): DecorView setVisiblity: visibility = 0, Parent = android.view.ViewRootImpl@d044a7d, this = DecorView@333fd72[MainActivity]
[log] ++++++++++ Process Image +++++++
D/skia (32066): SkJpegCodec::onGetPixels +
D/skia (32066): SkJpegCodec::onGetPixels -
I/flutter (32066): Error processing image: Unable to load asset: "assets/assets/model/sr_model.tflite".
I/flutter (32066): The asset does not exist or has empty data.

Model is not predicting as expected

I tried using my custom model, a binary image classification model, with the tflite_flutter package, but it predicts every image with an incorrect label and confidence. I checked in Colab and the model works fine there. The model's input type is Float32 and the input image shape is 28x28. Please help me resolve this issue.

Extract class and score from result

Hi,

I'm able to run my tflite model using this amazing plugin, but I'm not able to extract the result. I'm using the same image_classification_mobilenet example with a small update to the output to match my model.

my code is :

final input = [imageMatrix];
final output = [
  List.generate(
    21,
    (index) => List.filled(7098, 0.0),
  )
];

// Run inference
interpreter.run(input, output);

final result = output.first;

When I debug the result variable, the value is:

index 0 : [1.4157819747924805, 2.1544673442840576, ...... ] : size = 7098

index 1 : [1.805556058883667, 1.946408987045288, ........ ] : size = 7098

index 2 : [1.0974576473236084, 1.980888843536377, ....... ] : size = 7098
...
...
index 20 : [0.0024664357770234, 0.007526415400207, ...... ] : size = 7098

My question: how can I extract the label and score from the result?

NOTE: the same model with the same test image gives me class 3 with a score of 0.88 in the original Python code.
Thanks
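One possible direction (a sketch, not an official answer): if the 21 rows are per-class score vectors over 7098 positions, the class at a given position j is the row index with the highest score. Assuming that layout:

/// Returns the index of the highest of the 21 class scores at position [j].
/// `result` is the [21][7098] output above; the winning score itself is
/// result[classIndex][j].
int classAt(List<List<double>> result, int j) {
  var bestClass = 0;
  for (var i = 1; i < result.length; i++) {
    if (result[i][j] > result[bestClass][j]) {
      bestClass = i;
    }
  }
  return bestClass;
}

If the Python side reports probabilities (e.g. 0.88) while these values look like raw logits, a softmax over the 21 scores for that position would be applied before reading the score.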

Use isolates to run inference

When inference runs on models that take a long time, the UI freezes/locks because everything runs on a single thread.

Consider implementing Isolate.spawn to run the model. The developer could do this externally, but it would be better if this were handled within the package. The run method could report the execution status in a stream ([loading, done]).
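A sketch of the externally-implemented variant, assuming tflite_flutter's Interpreter exposes the address getter and Interpreter.fromAddress constructor (as its camera examples use), plus Dart 2.19+ for Isolate.run:

import 'dart:isolate';

import 'package:tflite_flutter/tflite_flutter.dart';

/// Runs inference off the UI thread. The Interpreter itself cannot be sent
/// across isolates, but its native address can; fromAddress rehydrates a
/// view over the same native object inside the worker isolate.
Future<List<List<double>>> runOffUiThread(
    Interpreter interpreter, List<Object> input) async {
  final address = interpreter.address;
  return Isolate.run(() {
    final worker = Interpreter.fromAddress(address);
    final output = [List<double>.filled(1001, 0.0)]; // adjust to your model
    worker.run(input, output);
    return output;
  });
}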

Add CI/CD

CI/CD is one of the most important things in OSS. It ensures quality and also avoids issues.

Tasks

  • Define what service we will use.
    My recommendation is GitHub Actions: most open-source projects use it, and it's flexible and scalable.
  • Define the CI pipeline.
    I'd add the most common tasks: run the linter, run tests, compile, and so on.
  • Define the CD pipeline.
    Maybe we can add a task that deploys to pub.dev when a new git tag is added.

Cosine similarity

Please help: can I use this library to calculate the distance between the feature vectors of two images? In Python I can do this, for example, with this code:

from tflite_support.task import vision

# Initialization.
image_embedder = vision.ImageEmbedder.create_from_file('feature_vector_metadata_1.tflite')

# Run inference on two images.
image_1 = vision.TensorImage.create_from_file('same1.jpg')
result_1 = image_embedder.embed(image_1)
image_2 = vision.TensorImage.create_from_file('same2.jpg')
result_2 = image_embedder.embed(image_2)

# Compute cosine similarity.
feature_vector_1 = result_1.embeddings[0].feature_vector
feature_vector_2 = result_2.embeddings[0].feature_vector
similarity = image_embedder.cosine_similarity(feature_vector_1, feature_vector_2)

print(similarity)

but here I do not see direct access to the ImageEmbedder API. Is it even possible?
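There is no ImageEmbedder wrapper here, but if the embedder's .tflite runs through Interpreter and yields a feature vector per image, the cosine similarity itself is a few lines of Dart — a sketch, assuming each output is a flat List<double>:

import 'dart:math' as math;

/// Cosine similarity between two equal-length feature vectors:
/// dot(a, b) / (||a|| * ||b||). Returns a value in [-1, 1].
double cosineSimilarity(List<double> a, List<double> b) {
  assert(a.length == b.length);
  var dot = 0.0, normA = 0.0, normB = 0.0;
  for (var i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (math.sqrt(normA) * math.sqrt(normB));
}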

Major task: Update TFLite version

TFLite for Android can use gradle dependencies rather than local build files. This will require changing the Dart classes along with the Android and iOS examples to work with the latest TFLite version.

Failed to lookup symbol in archived (iOS) build

While trying to run the app in archive mode (downloaded from TestFlight) on iOS, I get the error below:

Failed to lookup symbol 'TfLiteModelCreateFromFile': dlsym (RTLD_DEFAULT, TfLiteModelCreateFromFile): symbol not found

Can anyone please help with this?

I am using the latest version of the plugin:
tflite_flutter: ^0.9.5

How to use the deeplabv3 model?

Could you assist me or provide an example of how to use the deeplabv3 model? I am unable to find a way to obtain or set a label map file or values.

Furthermore, I have normalised the input values with "(color - 127.5) / 127.5". However, I am unsure how to retrieve image pixels from the output, as its last dimension has 21 values ([1, 257, 257, 21]).
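For context (a sketch, not maintainer guidance): the 21 values per pixel are class scores, one per class in the 21-class Pascal VOC label set that DeepLabV3 is commonly trained on, so an argmax over that last axis yields a 257x257 label map:

/// Converts a [1, 257, 257, 21] output into a 257x257 label map by taking
/// the index of the highest of the 21 per-pixel class scores.
List<List<int>> toLabelMap(List<List<List<List<double>>>> output) {
  final scores = output[0];
  return List.generate(scores.length, (y) {
    return List.generate(scores[y].length, (x) {
      final pixelScores = scores[y][x];
      var best = 0;
      for (var c = 1; c < pixelScores.length; c++) {
        if (pixelScores[c] > pixelScores[best]) best = c;
      }
      return best; // class index; map it to a color/label name for display
    });
  });
}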

We need new versions of tflite binaries

To use the plugin, an initial setup, which is marked important in the README, is needed. However, the links in install.sh and install.bat still refer to the original releases, which have already expired. We need the latest versions of the tflite binaries published as releases, to replace the URLs in the initial setup scripts.

We also need to publish a new wiki page to guide users in building their own versions of the tflite binaries. I've done this successfully, but some things still baffle me, especially the usage of Bazel.

Expand Dims in Flutter-TfLite for Image Classification

How to expand_dims in flutter tflite

I am making an image classification app.

My tflite model has input shape = (1,244,244,3)
After reading the image from the image picker and loading it as a TensorImage, my image shape is (244,244,3).

When I pass the image to the interpreter with

_model.interpreter.run(inputTensorImage.buffer, outputBuffer.buffer);

the following error occurs:
StateError (Bad state: failed precondition)

How can I change my image dimension (244,244,3) into (1,244,244,3) in flutter-tflite?
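A sketch of two ways to add the batch dimension, assuming the image has first been converted to a nested list imageMatrix of shape (244, 244, 3) (a hypothetical name, like the classification examples use) and assuming the reshape/flatten helpers in tflite_flutter's List extension:

// Option 1: wrap the (244, 244, 3) matrix in an outer list -> (1, 244, 244, 3).
final batched = [imageMatrix];

// Option 2: flatten, then reshape with tflite_flutter's List extension.
final reshaped = imageMatrix.flatten().reshape([1, 244, 244, 3]);

// Then run as usual (outputBuffer is the placeholder from the snippet above):
_model.interpreter.run(batched, outputBuffer.buffer);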
