
ios-demo-app's Introduction

PyTorch iOS Example Apps

A list of iOS apps built on the powerful PyTorch Mobile platform.

HelloWorld

HelloWorld is a simple image classification application that demonstrates how to use PyTorch C++ libraries on iOS. The code is written in Swift and uses Objective-C as a bridge.
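
For context, the model file that HelloWorld bundles is produced by a short Python tracing script. Here is a minimal sketch of that kind of script, assuming torchvision's pretrained MobileNet v2 as in the demo (the repo's own trace_model.py may differ in details):

import torch
import torchvision

# Load a pretrained MobileNet v2 and put it in inference mode.
model = torchvision.models.mobilenet_v2(pretrained=True)
model.eval()

# Trace the model with a dummy input matching the app's 3x224x224 RGB input.
example = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(model, example)

# Save the TorchScript file for the iOS app to bundle and load.
traced.save("model.pt")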

HelloWorld-Metal

HelloWorld-Metal is a simple image classification application that demonstrates how to use PyTorch C++ libraries with Metal support on iOS GPU. The code is written in Swift and uses Objective-C as a bridge.

PyTorch demo app

The PyTorch demo app is a full-fledged app that contains two showcases: a camera app that runs a quantized model to classify images in real time, and a text-based app that uses a text classification model to predict the topic of the input text.

Image Segmentation

Image Segmentation demonstrates a Python script that converts the PyTorch DeepLabV3 model for mobile use, and an iOS app that uses the model to segment images.
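
A hedged sketch of what such a conversion script typically looks like, assuming torchvision's deeplabv3_resnet50 (the demo's actual script may differ):

import torch
import torchvision
from torch.utils.mobile_optimizer import optimize_for_mobile

# Load a pretrained DeepLabV3 segmentation model in eval mode.
model = torchvision.models.segmentation.deeplabv3_resnet50(pretrained=True)
model.eval()

# Script (rather than trace) so the model's control flow and dict output survive.
scripted = torch.jit.script(model)

# Apply mobile-specific optimizations, then save for the iOS app.
optimized = optimize_for_mobile(scripted)
optimized.save("deeplabv3_scripted.pt")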

Object Detection

Object Detection demonstrates how to convert the popular YOLOv5 model and use it in an iOS app that detects objects in pictures from your photo library, taken with the camera, or from a live camera feed.

Neural Machine Translation

Neural Machine Translation demonstrates how to convert a sequence-to-sequence neural machine translation model trained with the code in the PyTorch NMT tutorial, and how to use the model in an iOS app to perform French-English translation.

Question Answering

Question Answering demonstrates how to convert a powerful transformer QA model and use the model in an iOS app to answer questions about PyTorch Mobile and more.
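
As an illustration of the conversion step, here is a rough sketch assuming Hugging Face's DistilBERT fine-tuned on SQuAD; the model name and inputs are illustrative, not necessarily what the demo uses:

import torch
from transformers import DistilBertForQuestionAnswering, DistilBertTokenizer

name = "distilbert-base-uncased-distilled-squad"
tokenizer = DistilBertTokenizer.from_pretrained(name)
# torchscript=True makes the model return plain tuples, which tracing requires.
model = DistilBertForQuestionAnswering.from_pretrained(name, torchscript=True)
model.eval()

# Trace with a representative (question, context) pair.
inputs = tokenizer("What is PyTorch Mobile?",
                   "PyTorch Mobile runs TorchScript models on iOS and Android.",
                   return_tensors="pt")
traced = torch.jit.trace(model, (inputs["input_ids"], inputs["attention_mask"]))
traced.save("qa_model.pt")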

Vision Transformer

Vision Transformer demonstrates how to use Facebook's latest Vision Transformer DeiT model to do image classification, and how to convert another Vision Transformer model and use it in an iOS app to perform handwritten digit recognition.
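
For the DeiT part, a hedged sketch of the usual conversion, assuming DeiT's official torch.hub entry point (which requires the timm package to be installed):

import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

# Load a pretrained DeiT image classification model via torch.hub.
model = torch.hub.load('facebookresearch/deit:main',
                       'deit_base_patch16_224', pretrained=True)
model.eval()

# Trace with a dummy 224x224 input, optimize for mobile, and save.
example = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(model, example)
optimize_for_mobile(traced).save("deit.pt")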

Speech recognition

Speech Recognition demonstrates how to convert Facebook AI's wav2vec 2.0, one of the leading models in speech recognition, to TorchScript and how to use the scripted model in an iOS app to perform speech recognition.
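
A sketch of that conversion, assuming the torchaudio pipelines API (torchaudio >= 0.10; the tutorial's actual script may differ):

import torch
import torchaudio

# Pull a wav2vec 2.0 model fine-tuned for ASR from torchaudio's pipelines.
bundle = torchaudio.pipelines.WAV2VEC2_ASR_BASE_960H
model = bundle.get_model()
model.eval()

# Script the model and save it for PyTorch Mobile's lite interpreter.
scripted = torch.jit.script(model)
scripted._save_for_lite_interpreter("wav2vec2.ptl")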

Streaming Speech recognition

Streaming Speech Recognition demonstrates how to use the more advanced iOS AVAudioEngine to perform live audio processing and a new torchaudio pipeline to perform streaming speech recognition.

Video Classification

TorchVideo demonstrates how to use a pre-trained video classification model from the newly released PyTorchVideo library on iOS, with classification results updated every second as the video plays, on test videos, videos from the Photos library, or even live video.

LICENSE

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

ios-demo-app's People

Contributors

binpord, brettkoonce, facebook-github-bot, husthyc, jeffxtang, jmdetloff, khiemauto, mthrok, xta0


ios-demo-app's Issues

In Release mode, any module always outputs the same value

🐛 Bug

In the PyTorch iOS HelloWorld project, if you just change the Build Configuration to Release, the module always outputs the same value.

To Reproduce

Steps to reproduce the behavior:

  1. Download the PyTorch HelloWorld project from https://github.com/pytorch/ios-demo-app/tree/master/HelloWorld.
  2. Add a different image and model to the project.
  3. Create a different pixelBuffer from the inserted image and print the result of module.predict(image: UnsafeMutableRawPointer(&pixelBuffer)); the output values differ as expected.
  4. In the Xcode IDE, select Edit Scheme... -> Run -> Info and change Build Configuration from Debug to Release.
  5. Print the return value of module.predict(image: UnsafeMutableRawPointer(&pixelBuffer)) again; now all output values are the same.
  6. I also uploaded my test project, a lightly modified HelloWorld; you can find it here: https://github.com/darkThanBlack/MOONPyTorchIssue. You may need to change the Signing Certificate or other configs to run it.

Other Info

  1. You can find my sample code in ViewController.m, Line 24 to Line 67...
  2. I tried changing the .pt file and the predict implementation (I have another private project doing face recognition that works well in Debug mode; I can't upload it because it contains user data). No matter what I input, the output seems to be a static value.
  3. I also tried changing every Debug/Release-related setting in the project's Build Settings, but that didn't fix it.
  4. I also tried building the PyTorch libraries from source, but that didn't fix it either. I tried reading the PyTorch source code, but I'm not sure how an Xcode project's Debug/Release configuration affects the .a libraries...
  5. Screenshots attached for reference (hint02, hint03).

Environment

  • PyTorch set up with pod 'LibTorch-Lite', '~> 1.9.0', or built from source.
  • OS: Apple M1 Mac, iPhone 12 (iOS 14.8); reproducible on other macOS/iOS versions too.

Linking issues for nightly torch on iOS

Hi, I followed the tutorial at https://pytorch.org/mobile/ios/#xcode-setup.

I cloned the torch GitHub repo and checked out the "nightly" branch.

SELECTED_OP_LIST=model.yaml BUILD_PYTORCH_MOBILE=1 IOS_ARCH=arm64 ./scripts/build_ios.sh
Then copied the static library and headers into the PyTorchDemo directory
cp -r build_ios/install /path/toPyTorchDemo/
Then added this to "Other Linker Flags"
-force_load $(PROJECT_DIR)/install/lib/libtorch.a
Then added this to "Header Search Paths"
$(PROJECT_DIR)/install/include/
I also disabled Bitcode.
I also had to switch the include from LibTorch/LibTorch.h to "torch/script.h" in TorchModule.mm.
However, when I try to build the project I get linking issues, e.g.
Undefined symbol: typeinfo for c10::AutogradMetaInterface
Am I missing a step here?

Can you provide another "UIImage+Helper" file coded in an Objective-C version?

Dear Developers,
I'm new to Swift programming; my project is entirely developed in Objective-C.
Besides, I find that the most important part, "TorchModule", which uses LibTorch, is also coded in Objective-C.
So it would be better to have another "UIImage+Helper" file written in an Objective-C version (especially the "normalized" method coded in Objective-C in that file).
Thanks!

library not found for -lPods-PyTorchDemo

Got the following error trying to run the code:

Showing Recent Messages

Build target PyTorchDemo of project PyTorchDemo with configuration Debug

Ld /Users/gvs/Library/Developer/Xcode/DerivedData/PyTorchDemo-brnftyiccqnynwcjudbbqwtswfct/Build/Products/Debug-iphoneos/PyTorchDemo.app/PyTorchDemo normal arm64 (in target 'PyTorchDemo' from project 'PyTorchDemo')
cd /Users/gvs/dev/ios/ios-demo-app/PyTorchDemo
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/clang++ -target arm64-apple-ios12.4 -isysroot /Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/SDKs/iPhoneOS13.6.sdk -L/Users/gvs/Library/Developer/Xcode/DerivedData/PyTorchDemo-brnftyiccqnynwcjudbbqwtswfct/Build/Products/Debug-iphoneos -L/Users/gvs/dev/ios/ios-demo-app/PyTorchDemo/Pods/LibTorch/install/lib -F/Users/gvs/Library/Developer/Xcode/DerivedData/PyTorchDemo-brnftyiccqnynwcjudbbqwtswfct/Build/Products/Debug-iphoneos -filelist /Users/gvs/Library/Developer/Xcode/DerivedData/PyTorchDemo-brnftyiccqnynwcjudbbqwtswfct/Build/Intermediates.noindex/PyTorchDemo.build/Debug-iphoneos/PyTorchDemo.build/Objects-normal/arm64/PyTorchDemo.LinkFileList -Xlinker -rpath -Xlinker @executable_path/Frameworks -dead_strip -Xlinker -object_path_lto -Xlinker /Users/gvs/Library/Developer/Xcode/DerivedData/PyTorchDemo-brnftyiccqnynwcjudbbqwtswfct/Build/Intermediates.noindex/PyTorchDemo.build/Debug-iphoneos/PyTorchDemo.build/Objects-normal/arm64/PyTorchDemo_lto.o -Xlinker -export_dynamic -Xlinker -no_deduplicate -stdlib=libc++ -fobjc-arc -fobjc-link-runtime -L/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/lib/swift/iphoneos -L/usr/lib/swift -Xlinker -add_ast_path -Xlinker /Users/gvs/Library/Developer/Xcode/DerivedData/PyTorchDemo-brnftyiccqnynwcjudbbqwtswfct/Build/Intermediates.noindex/PyTorchDemo.build/Debug-iphoneos/PyTorchDemo.build/Objects-normal/arm64/PyTorchDemo.swiftmodule -ObjC -lc++ -lc10 -lclog -lcpuinfo -leigen_blas -lnnpack -lpytorch_qnnpack -lstdc++ -ltorch -force_load /Users/gvs/dev/ios/ios-demo-app/PyTorchDemo/Pods/LibTorch/install/lib/libtorch.a -lPods-PyTorchDemo -Xlinker -dependency_info -Xlinker /Users/gvs/Library/Developer/Xcode/DerivedData/PyTorchDemo-brnftyiccqnynwcjudbbqwtswfct/Build/Intermediates.noindex/PyTorchDemo.build/Debug-iphoneos/PyTorchDemo.build/Objects-normal/arm64/PyTorchDemo_dependency_info.dat -o /Users/gvs/Library/Developer/Xcode/DerivedData/PyTorchDemo-brnftyiccqnynwcjudbbqwtswfct/Build/Products/Debug-iphoneos/PyTorchDemo.app/PyTorchDemo

ld: library not found for -lPods-PyTorchDemo
clang: error: linker command failed with exit code 1 (use -v to see invocation)

Bad ImageView X/Y scaling factors in Object Detection sample project

Hi,

I found an issue with the ImageView scaling factors for both X and Y axis in file:
pytorch/ios-demo-app/blob/master/ObjectDetection/ObjectDetection/ViewController.swift

Lines 38 and 39, current code:

let ivScaleX : Double = (image!.size.width > image!.size.height ? Double(imageView.frame.size.width / imageView.image!.size.width) : Double(imageView.image!.size.width / imageView.image!.size.height))
let ivScaleY : Double = (image!.size.height > image!.size.width ? Double(imageView.frame.size.height / imageView.image!.size.height) : Double(imageView.image!.size.height / imageView.image!.size.width))

And here is a fix:

let ivScaleX: Double = (image!.size.width > image!.size.height ? Double(imageView.frame.size.width / imageView.image!.size.width) : Double(imageView.frame.height / imageView.image!.size.height))
let ivScaleY: Double = (image!.size.height > image!.size.width ? Double(imageView.frame.size.height / imageView.image!.size.height) : Double(imageView.frame.width / imageView.image!.size.width))

Weird Image classification results ONLY on iOS, using ios-demo-app framework

Dear PyTorch geeks, I got weird image classification results using this ios-demo-app Xcode project (both HelloWorld and PyTorch Demo). Please help me figure out where the issue is. Thanks!!!

What I did:

  • Trained a dataset (40 classes) with PyTorch 1.5.0 using transfer learning (ResNet18); a traced 'mymodel.pt' was generated. Accuracy = 99%.
  • Tested some input images (e.g. FigureA.jpg) with the traced 'mymodel.pt'; the image classification was perfect!
    Here is where the issue comes in ->
  • Replaced the original MobileNet v2 .pt and words.txt in the HelloWorld project with my traced 'mymodel.pt' and 'mywords.txt'.
  • The classification result (e.g. for FigureA.jpg) is completely wrong on both the simulator and an iPhone device.
    However, the classification of FigureA.jpg is correct on my macOS.
    How is that possible?
    This issue has blocked me for a month.

Thanks a lot!!

Library Not Found for -lPods-HelloWorld

Simulator SDK: iOS 13.0, iPhone 8
PyTorch version: 1.5.0
I followed the tutorial on GitHub but got an error when compiling the demo.


ld: library not found for -lPods-HelloWorld
clang: error: linker command failed with exit code 1 (use -v to see invocation)

let module = TorchModule(fileAtPath: filePath) failed

pip install torchvision
python trace_model.py
pod install

These work well, but when I select an iOS simulator and launch the app, "let module = TorchModule(fileAtPath: filePath)" fails to run. I am sure that filePath is correct.

thanks

Having EXC_BAD_ACCESS in inferenceModule line 52 [Resolved]

Hello there,
I just tested the ios_demo_app ObjectDetection project on XCode, with a custom model, and I noticed a weird bugg.
For the first image (the dog, with the bicycle and the truck), the inference worked. But with certain images (like the second one), it crashes with the error EXC_BAD_ACCESS. I also noticed that running twice the inference on the first image causes also the error (so I'm guessing this has something to do with some configuration). I've seen some people with similar issues in the release phase (people switched some configurations to be the same as in debug mode to solve it), but in my case it also happens on debug.
I was wondering if anyone had any idea of why I'm facing this issue, or if anyone had faced it.
I am using XCode.
Thank you

Can `TorchModule` run in a React Native iOS app or does it need to be wrapped in Swift?

Hi all,
first of all, thanks a lot for this great demo app.
Really useful to get going.

I am a total newbie in iOS development so forgive me for the level of the questions.
I was wondering if the same type of ML inference (on top of a TorchScript model) could work inside a React Native iOS app, instead of being wrapped in Swift code.

Thanks and have a great day ahead

iOS Issue - PyTorch 1.6.0

Unknown builtin op: aten::mul.
Could not find any similar ops to aten::mul. This op may not exist or may not be currently supported in TorchScript.
:
File "", line 3

def mul(a : float, b : Tensor) -> Tensor:
return b * a
~~~~~ <--- HERE
def add(a : float, b : Tensor) -> Tensor:
return b + a

Swift UI framework for object detection D2Go

May I check whether the object detection code in D2Go can run with the SwiftUI framework? I modified the existing D2Go project by removing the delegate and storyboard files and created new ones for SwiftUI. The backend code in the Inference and Utils folders remains unchanged. When I try to load a picture and run model inference, the model does not give me the correct outputs.

Memory Leak for PyTorchDemo with Metal

Hi, I'm trying to create a demo image classification app from the PyTorchDemo image classification code that runs on the GPU with Metal. Referencing HelloWorld-Metal and various PyTorch documentation, I made some modifications:

Change the Podfile to this:

platform :ios, '14.3'
target 'PyTorchDemo' do
  pod 'LibTorch-Lite-Nightly'
end

And ran pod update.

Then I changed TorchBridge/TorchModule.mm to this:

#import "TorchModule.h"
#import <LibTorch-Lite-Nightly/LibTorch-Lite.h>

@implementation TorchModule {
 @protected
  torch::jit::mobile::Module _impl;
}

- (nullable instancetype)initWithFileAtPath:(NSString*)filePath {
  self = [super init];
  if (self) {
    try {
      c10::InferenceMode mode;
      _impl = torch::jit::_load_for_mobile(filePath.UTF8String);
    } catch (const std::exception& exception) {
      NSLog(@"%s", exception.what());
      return nil;
    }
  }
  return self;
}

@end

@implementation VisionTorchModule

- (NSArray<NSNumber*>*)predictImage:(void*)imageBuffer {
  try {
      float* floatBuffer;
      {
        c10::InferenceMode mode;
        torch::autograd::AutoGradMode guard(false);
        at::Tensor tensor = torch::from_blob(imageBuffer, {1, 3, 256, 256}, at::kFloat).metal();
        auto outputTensor = _impl.forward({tensor}).toTensor().cpu();
        floatBuffer = outputTensor.data_ptr<float>();
      }
    if (!floatBuffer) {
      return nil;
    }
    NSMutableArray* results = [[NSMutableArray alloc] init];
    for (int i = 0; i < 12; i++) {
      [results addObject:@(floatBuffer[i])];
    }
    
    return [results copy];
  } catch (const std::exception& exception) {
    NSLog(@"%s", exception.what());
  }
  return nil;
}

@end

This moves the input tensor to the GPU with the .metal() call, and moves the result back to the CPU with the .cpu() call. Note that my input size {1, 3, 256, 256} and number of classes (12) are a bit different from the original example.

The app compiles and runs fine on my iPhone 7 test device, and from the debug tab in Xcode I can see the GPU is indeed working, but I notice a constant increase in memory consumption over time, which does not happen with the original CPU-only code.

Using the Instruments tool in Xcode, I can pinpoint which allocations are being leaked:

Some of them are:

Category              Count  Address       Size       Library      Responsible Caller
MPSCNNAdd             203    < multiple >  88.81 KiB  PyTorchDemo  at::Tensor at::native::metal::binaryElementwiseMPSCNNKernel<MPSCNNAdd>(at::Tensor const&, at::Tensor const&)
MPSImage              52     < multiple >  8.94 KiB   PyTorchDemo  at::native::metal::createStaticImage(c10::ArrayRef<long long>)
MPSCNNPoolingAverage  26     < multiple >  8.53 KiB   PyTorchDemo  at::native::metal::adaptive_avg_pool2d(at::Tensor const&, c10::ArrayRef<long long>)

These indeed come from torch mobile. The full leak report for that few-second run can be downloaded from here.

I've tried using another MobileNet v2 model exported from torchvision, but the problem still persists. Since I'm not really familiar with Swift and Objective-C, I've been searching for ways to delete or dereference objects that I think might be getting stuck behind, like the input and output tensors (for example, creating a separate scope, or setting the object to null), but none of it has worked yet.

Is there anything I'm doing wrong here? Any help is appreciated ;(

Thanks for taking a look ;)

Why don't you use torch lite in object detection?

I see the android-demo-app repository uses Torch Lite in its object detection example, while ios-demo-app uses Torch without Lite for object detection.
Why not use the Lite version for both?

Can't find model (SpeechRecognition)

Hello,
I am trying to run the SpeechRecognition demo, but I get the error: Fatal error: Can't find the model.
I followed the documentation and created the model both with torchaudio and with my own model, but I get the same error either way.

Any solutions?

I use LibTorch 1.9.0

Thanks.

'torch/csrc/jit/mobile/import.h' file not found, Lexical or Preprocessor Issue

1. I want to build a plugin for Flutter, so I added s.dependency 'LibTorch-Lite', '~>1.12.0' in the podspec.
2. Then I imported '#import <Libtorch-Lite/Libtorch-Lite.h>' but get the following error: 'torch/csrc/jit/mobile/import.h' file not found, Lexical or Preprocessor Issue.
What should I do? I want to port ObjectDetection into a Flutter plugin to run a custom YOLOv5s model.

Failed to run BUILD_PYTORCH_MOBILE=1 IOS_ARCH=arm64 ./scripts/build_ios.sh

10 errors generated.
10 errors generated.
make[2]: *** [confu-deps/cpuinfo/CMakeFiles/cpuinfo.dir/src/arm/mach/init.c.o] Error 1
make[2]: *** [confu-deps/cpuinfo/CMakeFiles/cpuinfo_internals.dir/src/arm/mach/init.c.o] Error 1
make[1]: *** [confu-deps/cpuinfo/CMakeFiles/cpuinfo.dir/all] Error 2
make[1]: *** Waiting for unfinished jobs....
make[1]: *** [confu-deps/cpuinfo/CMakeFiles/cpuinfo_internals.dir/all] Error 2
make: *** [all] Error 2

RAFT architecture

Hello! Is the RAFT architecture available in the PyTorch library for iOS? I need to find the shift/delta between two images. Please tell me how I can do this.

Release Mode

Hi All,

I am unable to get the code to compile and run in Release mode (as opposed to Debug); I get EXC_BAD_ACCESS (in the Run scheme I changed everything from Debug to Release). How can I overcome this? (I have my own app using LibTorch, and I cannot push it to TestFlight.)

Thank you very much for the help, and great demo.

Object Detection link broken

"Object Detection demonstrates how to convert the popular YOLOv5 model and use it on an iOS app that detects objects from pictures in your photos, taken with camera, or with live camera."

The readme file mentions a demonstration of YOLOv5 on iOS, but the link seems to be broken.

Is there a demo of YOLOv5 conversion for iOS yet?

ImageSegmentation shows blank image on real device

  • When I use a simulator segmentation works as expected
  • When I use a real device (iPhone X, iOS 15.3.1), what is generated is a white image with a tiny black square in the corner (attached: empty-square)
  • I've added breakpoints and see the pixels being added for sheep/person/dog as expected


Bounding Box issues for iphones and ipads

The code is not producing accurate bounding boxes for iPhones and iPads, especially when used to detect multiple devices; the x and y of the created bounding boxes are off. Any idea how to fix this?

Build issue HelloWorld app

Hi!

I cannot compile the example app.
I may have overlooked something obvious, since I am new to iOS development.

I downloaded and saved the model to ios-demo-app/HelloWorld/HelloWorld/HelloWorld/model/model.pt.
I ran pod install from ios-demo-app/HelloWorld/HelloWorld.

Then I opened the Xcode project, and the build failed (screenshot attached).

Thank you for helping me!


Xcode version: 11.1
macOS: 10.15

inference time

Hi,

I tried to evaluate the running time at different resolutions, but got strange results.

override func viewDidLoad() {
    super.viewDidLoad()
    let image = UIImage(named: "image.png")!
    imageView.image = image
    let resizedImage = image.resized(to: CGSize(width: 224, height: 224))
    guard var pixelBuffer = resizedImage.normalized() else {
        return
    }
    let start = CACurrentMediaTime()
    guard let outputs = module.predict(image: UnsafeMutableRawPointer(&pixelBuffer)) else {
        return
    }
    // Note: the timed interval also includes sorting the results and
    // building the display string below, not just the forward pass.
    let zippedResults = zip(labels.indices, outputs)
    let sortedResults = zippedResults.sorted { $0.1.floatValue > $1.1.floatValue }.prefix(3)
    var text = ""
    for result in sortedResults {
        text += "\u{2022} \(labels[result.0]) \n\n"
    }
    let end = CACurrentMediaTime()

    print("elapsed: \(end - start)")
    resultView.text = text
}

I added the timing code and changed the resize resolution to 224x224 / 512x512 / 1024x1024. The results are as follows:

224x224: 0.2193204581271857
512x512: 0.17997654154896736
1024x1024: 0.18111812486313283

Can anybody help solve this problem? Thanks!
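
One factor worth ruling out: the timed interval above includes the very first forward pass, which pays one-time initialization costs, as well as the result sorting and string building. For comparison, a desktop-side Python sketch of the usual warm-up-then-average measurement pattern (the model file name is illustrative):

import time
import torch

model = torch.jit.load("model.pt")
model.eval()
x = torch.rand(1, 3, 224, 224)

with torch.no_grad():
    model(x)  # warm-up: the first call pays one-time setup costs
    t0 = time.perf_counter()
    for _ in range(10):
        model(x)
    print("avg seconds per run:", (time.perf_counter() - t0) / 10)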

error during build

@asmeurer @asuhan @kostmo @Yangqing @xuhdev Hi, when I follow the installation steps and at the end run swift --version, I get "python-swiftclient 3.8.1". Can I run HelloWorld.xcworkspace using Visual Studio?

thanks in advance

Build succeeded but "Fatal error: Can't find the model file!"

I just ran the HelloWorld and SpeechRecognition code, following all of the steps. The build was successful, but the HelloWorld app did not show the picture, and the SpeechRecognition app did not return the words I said (although the interface with "Start" and "Listening" still showed up).

I also checked the model file, and it is in the right folder next to the ViewController.

The full output is "SpeechRecognition[47748:2353702] SpeechRecognition/ViewController.swift:36: Fatal error: Can't find the model file!"

Fatal error: Failed to load model!

I was deploying the official code in ios-demo-app/ObjectDetection on my iPhone (iOS 14.0).

I copied the result of export.py (yolov5s.torchscript.pt) to the ios-demo-app/ObjectDetection/ObjectDetection folder. The project built successfully, but when I tried to run the model, this error occurred.

The same error happened when I tried the ios-demo-app/PyTorchDemo code. I am wondering whether 'Bundle.main.path' can correctly open this .pt file.


Expected Tensor but got Tuple debug_handle:-1

Hi,
I was able to export the yolov5s model to tflite format as suggested, using export.py. However, when I build on Mac, I get the following errors:

[W TensorImpl.h:1156] Warning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (function operator())
2021-11-09 13:49:14.041245-0800 ObjectDetection[89695:352534] Expected Tensor but got Tuple

debug_handle:-1

Exception raised from reportToTensorTypeError at /Users/distiller/project/aten/src/ATen/core/ivalue.cpp:854 (most recent call first):
frame #0: _ZN3c105ErrorC1ENS_14SourceLocationENSt3__112basic_stringIcNS2_11char_traitsIcEENS2_9allocatorIcEEEE + 75 (0x102b6b191 in ObjectDetection)
frame #1: _ZN3c106detail14torchCheckFailEPKcS2_jRKNSt3__112basic_stringIcNS3_11char_traitsIcEENS3_9allocatorIcEEEE + 106 (0x102b69d6d in ObjectDetection)
frame #2: _ZNK3c106IValue23reportToTensorTypeErrorEv + 78 (0x1025ffd60 in ObjectDetection)
frame #3: _ZN3c104impl34call_functor_with_args_from_stack_INS0_6detail31WrapFunctionIntoRuntimeFunctor_IPFN2at6TensorERKS5_S7_ES5_NS_4guts8typelist8typelistIJS7_S7_EEEEELb0EJLm0ELm1EEJS7_S7_EEENSt3__15decayINSA_21infer_function_traitsIT_E4type11return_typeEE4typeEPNS_14OperatorKernelENS_14DispatchKeySetEPNSF_6vectorINS_6IValueENSF_9allocatorISR_EEEENSF_16integer_sequenceImJXspT1_EEEEPNSC_IJDpT2_EEE + 65 (0x101d36422 in ObjectDetection)
frame #4: _ZN3c104impl31make_boxed_from_unboxed_functorINS0_6detail31WrapFunctionIntoRuntimeFunctor_IPFN2at6TensorERKS5_S7_ES5_NS_4guts8typelist8typelistIJS7_S7_EEEEELb0EE4callEPNS_14OperatorKernelERKNS_14OperatorHandleENS_14DispatchKeySetEPNSt3__16vectorINS_6IValueENSM_9allocatorISO_EEEE + 24 (0x101e397c8 in ObjectDetection)
frame #5: _ZNK3c1010Dispatcher9callBoxedERKNS_14OperatorHandleEPNSt3__16vectorINS_6IValueENS4_9allocatorIS6_EEEE + 119 (0x102a4e1d1 in ObjectDetection)
frame #6: _ZN5torch3jit6mobile16InterpreterState3runERNSt3__16vectorIN3c106IValueENS3_9allocatorIS6_EEEE + 5049 (0x102a5ae3f in ObjectDetection)
frame #7: _ZNK5torch3jit6mobile8Function3runERNSt3__16vectorIN3c106IValueENS3_9allocatorIS6_EEEE + 160 (0x102a4c5ec in ObjectDetection)
frame #8: _ZNK5torch3jit6mobile6Method3runERNSt3__16vectorIN3c106IValueENS3_9allocatorIS6_EEEE + 766 (0x102a5e5c8 in ObjectDetection)
frame #9: _ZNK5torch3jit6mobile6MethodclENSt3__16vectorIN3c106IValueENS3_9allocatorIS6_EEEE + 24 (0x102a5ecba in ObjectDetection)
frame #10: ZN5torch3jit6mobile6Module7forwardENSt3__16vectorIN3c106IValueENS3_9allocatorIS6_EEEE + 173 (0x102afe41d in ObjectDetection)
frame #11: -[InferenceModule + + (0x102afdbde in ObjectDetection)
frame #12: $s15ObjectDetection14ViewControllerC9runTappedyyypFyycfU
+ 1288 (0x102b17588 in ObjectDetection)
frame #13: $sIeg_IeyB_TR + 40 (0x102b08878 in ObjectDetection)
frame #14: _dispatch_call_block_and_release + 12 (0x10418ca28 in libdispatch.dylib)
frame #15: _dispatch_client_callout + 8 (0x10418dc0c in libdispatch.dylib)
frame #16: _dispatch_queue_override_invoke + 1054 (0x1041900a3 in libdispatch.dylib)
frame #17: _dispatch_root_queue_drain + 403 (0x10419fb44 in libdispatch.dylib)
frame #18: _dispatch_worker_thread2 + 196 (0x1041a05ec in libdispatch.dylib)
frame #19: _pthread_wqthread + 244 (0x7fff6bfeb417 in libsystem_pthread.dylib)
frame #20: start_wqthread + 15 (0x7fff6bfea42f in libsystem_pthread.dylib)

Please let me know how to resolve this issue. The build targets iOS 14.0 by default.

error during build

The object detection demo app supports iOS 14.4, but my iOS is 14.6.

Did you update the code for iOS 14.6?

Are we able to use other object detection models like RetinaNet / Faster-RCNN on IOS ?

Regarding the object detection demo: the YOLOv5 model's forward function returns a tensor of fixed shape per image (25200x85), while the torchvision models for RetinaNet and Faster R-CNN return a list of dictionaries of detected objects per image. I was wondering if the C++ bridging code can accommodate this variation in size as well as the reading of dictionary outputs (see the sketch below).

Thank you for your help !
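
One common workaround, sketched below under the assumption of torchvision's Faster R-CNN (untested against any particular torchvision version): wrap the model so that forward returns plain tensors the Objective-C++ bridge can consume, then script the wrapper.

import torch
import torchvision
from typing import Tuple

class DetectionWrapper(torch.nn.Module):
    """Wraps Faster R-CNN so forward returns flat tensors instead of dicts."""

    def __init__(self) -> None:
        super().__init__()
        self.model = torchvision.models.detection.fasterrcnn_resnet50_fpn(
            pretrained=True).eval()

    def forward(self, img: torch.Tensor) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
        # Under TorchScript, detection models return a (losses, detections) tuple.
        losses, detections = self.model([img])
        d = detections[0]
        return d["boxes"], d["labels"], d["scores"]

scripted = torch.jit.script(DetectionWrapper())
scripted.save("frcnn_wrapped.pt")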

Global Variable for model output size

In the file ios-demo-app/ObjectDetection/ObjectDetection/Inference/InferenceModule.mm, I'd like to expose the output_size value as a global Swift variable. I am not strong at bridging between Swift, Objective-C, and Objective-C++. Can someone help me? In Utils/PrePostProcessor.swift I have already made the model output size values global.

Fatal error: Can't find the model file!: file

Hello. I'm going to run a custom semantic segmentation model (MobileNetV2 + SegNet) on an iOS device.
But when I import the module, the problem below occurs.

OS: macOS 10.15.1
Python 3.7
PyTorch 1.3.1
LibTorch 1.3.1

Fatal error: Can't find the model file!: file /Users/aiel/Desktop/ios-demo-app/HelloWorld/HelloWorld/HelloWorld/ViewController.swift, line 11
2019-12-09 22:18:38.472617+0900 HelloWorld[37912:926691] Fatal error: Can't find the model file!: file /Users/aiel/Desktop/ios-demo-app/HelloWorld/HelloWorld/HelloWorld/ViewController.swift, line 11

I converted the Python model to a traced model using JIT, and I verified the file path is right by printing it in TorchModule.mm.
The problem occurs when I call TorchModule, at ViewController.swift, line 8 and TorchModule.mm, line 18:

let module = TorchModule(fileAtPath: filePath)
_impl = torch::jit::load(filePath.UTF8String);

Thanks for reading.

pod install error

I get an error when running pod install in the HelloWorld directory. Is there another way to install LibTorch?

ImageSegmentation Build Failure on Mac M1

Hi, I hit the following compilation issue on a Mac M1:

ld: in /Users/xxx/projects/ios-demo-app/ImageSegmentation/Pods/LibTorch/install/lib/libtorch.a(empty.cpp.o),
building for iOS Simulator, but linking in object file built for iOS,
file '/Users/xxx/projects/ios-demo-app/ImageSegmentation/Pods/LibTorch/install/lib/libtorch.a' for architecture arm64

The environment information is as follows:
PyTorch: 1.8.1
Output of pod --version: 1.10.1
CMake version: 3.11.3

I also tried changing the Podfile from pod 'LibTorch', '~>1.7.0' to pod 'LibTorch', '~>1.8.0', but the failure persists.

Model Not Found Issue With Custom Model

Hi All,

When using a custom model, I get the "Can't find the model file!" error... I am using LibTorch 1.4.0 and can see that my model.pt file has the appropriate target membership. Based on my initial googling/research, the issue might be with my actual PyTorch model (a ResNet18 model). Thank you very much for your help.

private lazy var module: TorchModule = {
    if let filePath = Bundle.main.path(forResource: "model", ofType: "pt"),
        let module = TorchModule(fileAtPath: filePath) {
        return module
    } else {
        fatalError("Can't find the model file!")
    }
}()

Both apps crash in release mode!

PyTorch: 1.4.0
LibTorch: 1.4.0
iOS 13.5
XCode 11.5

Trying to deploy a PyTorch model in an iOS app. It works okay when I run it in Debug mode and the model shows results. On changing to Release mode, the app crashes.
Even running both tutorial apps, "HelloWorld" and "PyTorchDemo", crashes in Release mode.

#11 0x00000001035f95f4 in torch::jit::script::Module::forward(std::__1::vector<c10::IValue, std::__1::allocator<c10::IValue> >) /ios-demo-app-master/PyTorchDemo/Pods/LibTorch/install/include/torch/csrc/jit/script/module.h:113

Here is the function in module.h:

 IValue forward(std::vector<IValue> inputs) {
    return get_method("forward")(std::move(inputs)); //breaks here
  }

Add example of how to optimize model for mobile inference

This demo is great and works fine, although it would be great to have an example of how to prepare a model for mobile inference, since it's non-trivial. For example, you could add the recipe for how you prepared mobilenet_quantized.pt (a hedged sketch follows below).
(Personally, I tried converting my model to float16, which didn't work: the model didn't load on mobile. I also tried torch.quantization.quantize, and that didn't work either.)
Thanks!
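
In the meantime, here is a hedged sketch of one common recipe (not necessarily how mobilenet_quantized.pt was actually produced): start from torchvision's pre-quantized MobileNet v2, trace it, and run the mobile optimizer.

import torch
import torchvision
from torch.utils.mobile_optimizer import optimize_for_mobile

# torchvision ships a MobileNet v2 that is already int8-quantized.
model = torchvision.models.quantization.mobilenet_v2(
    pretrained=True, quantize=True)
model.eval()

# Trace with a dummy input, optimize for mobile, and save.
example = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(model, example)
optimized = optimize_for_mobile(traced)
optimized.save("mobilenet_quantized.pt")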

Problem raised when I use a width of 512 and a height of 256

Hi, thanks for your nice demo repo. I'm just wondering whether you're familiar with this problem: for the ImageSegmentation demo, I set the input width to 512 and the height to 256 (the training size) and hit the error below (the demo works at 512x512).

Out:
Thread 1: EXC_BAD_ACCESS (code=1, address=0x0)

ImageSegmentation[78619:5076149] torch.cat(): Sizes of tensors must match except in dimension 1. Got 16 and 12 in dimension 3 (The offending index is 3)
  
  debug_handle:-1
  
Exception raised from check_cat_shape_except_dim at /Users/distiller/project/aten/src/ATen/native/TensorShape.cpp:114 (most recent call first):
frame #0: _ZN3c105ErrorC1ENS_14SourceLocationENSt3__112basic_stringIcNS2_11char_traitsIcEENS2_9allocatorIcEEEE + 75 (0x10a5479f1 in ImageSegmentation)
frame #1: _ZN3c106detail14torchCheckFailEPKcS2_jRKNSt3__112basic_stringIcNS3_11char_traitsIcEENS3_9allocatorIcEEEE + 106 (0x10a5465cd in ImageSegmentation)
frame #2: _ZN2at6native12_cat_out_cpuEN3c108ArrayRefINS_6TensorEEExRS3_ + 3129 (0x10a28e803 in ImageSegmentation)
frame #3: _ZN2at6native8_cat_cpuEN3c108ArrayRefINS_6TensorEEEx + 162 (0x10a28ec54 in ImageSegmentation)
frame #4: _ZN2at12_GLOBAL__N_112_GLOBAL__N_112wrapper__catEN3c108ArrayRefINS_6TensorEEEx + 14 (0x1096cfe35 in ImageSegmentation)
frame #5: _ZN3c104impl28wrap_kernel_functor_unboxed_INS0_6detail31WrapFunctionIntoRuntimeFunctor_IPFN2at6TensorENS_8ArrayRefIS5_EExES5_NS_4guts8typelist8typelistIJS7_xEEEEES8_E4callEPNS_14OperatorKernelENS_14DispatchKeySetES7_x + 24 (0x10984f82c in ImageSegmentation)
frame #6: _ZNK3c1010Dispatcher4callIN2at6TensorEJNS_8ArrayRefIS3_EExEEET_RKNS_19TypedOperatorHandleIFS6_DpT0_EEES9_ + 148 (0x109494b5a in ImageSegmentation)
frame #7: _ZN2at4_catEN3c108ArrayRefINS_6TensorEEEx + 65 (0x109449cfe in ImageSegmentation)
frame #8: _ZN2at6native3catEN3c108ArrayRefINS_6TensorEEEx + 3155 (0x10a28fb8e in ImageSegmentation)
frame #9: _ZN2at12_GLOBAL__N_112_GLOBAL__N_111wrapper_catEN3c108ArrayRefINS_6TensorEEEx + 14 (0x10977be54 in ImageSegmentation)
frame #10: _ZN3c104impl34call_functor_with_args_from_stack_INS0_6detail31WrapFunctionIntoRuntimeFunctor_IPFN2at6TensorENS_8ArrayRefIS5_EExES5_NS_4guts8typelist8typelistIJS7_xEEEEELb0EJLm0ELm1EEJS7_xEEENSt3__15decayINSA_21infer_function_traitsIT_E4type11return_typeEE4typeEPNS_14OperatorKernelENS_14DispatchKeySetEPNSF_6vectorINS_6IValueENSF_9allocatorISR_EEEENSF_16integer_sequenceImJXspT1_EEEEPNSC_IJDpT2_EEE + 182 (0x1097a88fe in ImageSegmentation)
frame #11: _ZN3c104impl31make_boxed_from_unboxed_functorINS0_6detail31WrapFunctionIntoRuntimeFunctor_IPFN2at6TensorENS_8ArrayRefIS5_EExES5_NS_4guts8typelist8typelistIJS7_xEEEEELb0EE4callEPNS_14OperatorKernelERKNS_14OperatorHandleENS_14DispatchKeySetEPNSt3__16vectorINS_6IValueENSM_9allocatorISO_EEEE + 24 (0x1097284d0 in ImageSegmentation)
frame #12: _ZNK3c1010Dispatcher9callBoxedERKNS_14OperatorHandleEPNSt3__16vectorINS_6IValueENS4_9allocatorIS6_EEEE + 119 (0x10a438cc1 in ImageSegmentation)
frame #13: _ZN5torch3jit6mobile16InterpreterState3runERNSt3__16vectorIN3c106IValueENS3_9allocatorIS6_EEEE + 5049 (0x10a44592f in ImageSegmentation)
frame #14: _ZNK5torch3jit6mobile8Function3runERNSt3__16vectorIN3c106IValueENS3_9allocatorIS6_EEEE + 160 (0x10a4370dc in ImageSegmentation)
frame #15: _ZNK5torch3jit6mobile6Method3runERNSt3__16vectorIN3c106IValueENS3_9allocatorIS6_EEEE + 766 (0x10a4490b8 in ImageSegmentation)
frame #16: _ZNK5torch3jit6mobile6MethodclENSt3__16vectorIN3c106IValueENS3_9allocatorIS6_EEEE + 24 (0x10a4497aa in ImageSegmentation)
frame #17: _ZN5torch3jit6mobile6Module7forwardENSt3__16vectorIN3c106IValueENS3_9allocatorIS6_EEEE + 176 (0x10a4e9b70 in ImageSegmentation)
frame #18: -[TorchModule + + (0x10a4e8f9e in ImageSegmentation)
frame #19: $s17ImageSegmentation14ViewControllerC7doInferyyypFyycfU_ + 1505 (0x10a4f44e1 in ImageSegmentation)
frame #20: $sIeg_IeyB_TR + 48 (0x10a4f4fc0 in ImageSegmentation)
frame #21: _dispatch_call_block_and_release + 12 (0x10bc168ac in libdispatch.dylib)
frame #22: _dispatch_client_callout + 8 (0x10bc17a88 in libdispatch.dylib)
frame #23: _dispatch_queue_override_invoke + 1032 (0x10bc19f06 in libdispatch.dylib)
frame #24: _dispatch_root_queue_drain + 351 (0x10bc295b6 in libdispatch.dylib)
frame #25: _dispatch_worker_thread2 + 135 (0x10bc29f1b in libdispatch.dylib)
frame #26: _pthread_wqthread + 220 (0x7fff5dcd89f7 in libsystem_pthread.dylib)
frame #27: start_wqthread + 15 (0x7fff5dcd7b77 in libsystem_pthread.dylib)

I guess the problem might arise while iterating over the elements of the output (self.imageHelper.convertRGBBuffer).
Any ideas to solve it? Thanks!

Replace the .pt trained network

Hi,

I hope this is not too much off topic.

I have a separately trained PyTorch network that recognizes only pipes. I have replaced the mobilenet_quantized.pt file with my .pt network file, and words.txt with a file containing only one object: pipe.

However, I'm not sure what your score means; with my network it is negative most of the time. Could you please explain how I can get the pure probability of object recognition, and the object coordinates, from the inference? (See the note below.)

Thank you!
Paul
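
For reference, a classifier's raw outputs are unnormalized logits, which can be negative; applying a softmax turns them into probabilities. A minimal Python-side sketch (the same math applies to the scores the app computes):

import torch

logits = torch.tensor([-1.2, 3.4, 0.5])  # example raw scores from a model
probs = torch.softmax(logits, dim=0)     # each in [0, 1], summing to 1
print(probs)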
