
xmartlabs / bender

1.8K 57.0 95.0 71.03 MB

Easily craft fast Neural Networks on iOS! Use TensorFlow models. Metal under the hood.

Home Page: https://xmartlabs.github.io/Bender/

License: MIT License

Swift 89.40% Ruby 0.23% Metal 8.31% Objective-C 2.05%
machine-learning neural-networks metal apple iphone ios convolutional-neural-networks deep-learning swift deep-neural-networks

bender's People

Contributors

adamnemecek, bryant1410, chachamatcha, enrigalmig, fossabot, mats-claassen, mtnbarreto, noblakit01


bender's Issues

Issue with Carthage

Hi,
I just tried to install your component in a test project with Carthage.
My Cartfile:

github "xmartlabs/Bender"

Carthage output:

$ carthage update --platform iOS
*** Cloning Bender
*** Checking out Bender at "0.1.0"
*** xcodebuild output can be found in /var/folders/rw/gbr9wvs91cz91mcpgdxv4q340000gn/T/carthage-xcodebuild.XMMgOU.log
*** Building scheme "Palladium" in Bender.xcworkspace
** BUILD FAILED **


The following build commands failed:
	CompileSwift normal armv7
	CompileSwiftSources normal armv7 com.apple.xcode.tools.swift.compiler
(2 failures)
/Volumes/Perso/Temp/BenderTest/Carthage/Checkouts/Bender/Sources/Adapters/Tensorflow/protos/attr_value.pb.swift:10:8: error: no such module 'SwiftProtobuf'
A shell task (/usr/bin/xcrun xcodebuild -workspace /Volumes/Perso/Temp/BenderTest/Carthage/Checkouts/Bender/Bender.xcworkspace -scheme Palladium -configuration Release -sdk iphoneos ONLY_ACTIVE_ARCH=NO BITCODE_GENERATION_MODE=bitcode CODE_SIGNING_REQUIRED=NO CODE_SIGN_IDENTITY= CARTHAGE=YES clean build) failed with exit code 65:
** BUILD FAILED **


The following build commands failed:
	CompileSwift normal armv7
	CompileSwiftSources normal armv7 com.apple.xcode.tools.swift.compiler
(2 failures)

When I open the project I only see one iOS target, 'Palladium' (why Palladium and not MetalBender iOS 😦).
If it helps, here is a tutorial on creating a cross-platform framework compatible with macOS, iOS, tvOS, and watchOS, with clean Carthage support: Swift Cross Platform Framework.
I hope this helps 😉
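A possible workaround, if SwiftProtobuf is what Carthage fails to build first, is to list it explicitly in the Cartfile so it is checked out alongside Bender (a sketch; whether a pinned version is needed is an assumption):

```
github "xmartlabs/Bender"
github "apple/swift-protobuf"
```

With both entries present, carthage update may resolve and build SwiftProtobuf before compiling Bender's scheme.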

Converting a model from TensorFlow

Dear @bryant1410 ,

I want to:

  1. Train one more flower class using https://www.tensorflow.org/tutorials/image_retraining
  2. Then use your guide to import the model from step 1 into Bender, but I cannot convert and import it:
    https://github.com/xmartlabs/Bender/blob/master/Documentation/Importing.md
    ->
    benderthon tf-freeze checkpoint_path.ckpt graph_with_weights.pb Output_Node_Name
    What is checkpoint_path.ckpt and where can I get it? What is the output of the script?
  3. Please support if you have time.

Thanks so much
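For context, a .ckpt checkpoint is what a TensorFlow training run saves via tf.train.Saver (the image_retraining tutorial writes one during training), and tf-freeze's output is a single frozen GraphDef (.pb) with the variables baked in as constants. A hedged sketch of the invocation (the paths and the final_result node name are assumptions; the node name must match your graph's actual output op):

```
# checkpoint produced by the retraining run
benderthon tf-freeze /tmp/retrain/model.ckpt graph_with_weights.pb final_result
# -> writes graph_with_weights.pb, ready to import into Bender
```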

Can't build on Xcode 8 [Macpods]

Hey, I can't build the project in Xcode 8.
I can build in Xcode 9 beta, BUT when I try to deploy to an actual device I get the same error.
This is the error message:

Undefined symbols for architecture arm64:
"OBJC_CLASS$_MPSCNNSpatialNormalization", referenced from:
objc-class-ref in SpatialNorm.o
"OBJC_CLASS$_MPSCNNSoftMax", referenced from:
objc-class-ref in Softmax.o
"OBJC_CLASS$_MPSImageLanczosScale", referenced from:
objc-class-ref in Scale.o
objc-class-ref in Start.o
"OBJC_CLASS$_MPSCNNPoolingMax", referenced from:
objc-class-ref in Pooling.o
"OBJC_CLASS$_MPSCNNPoolingAverage", referenced from:
objc-class-ref in Pooling.o
"OBJC_CLASS$_MPSTemporaryImage", referenced from:
objc-class-ref in ConvTranspose.o
objc-class-ref in Start.o
"OBJC_CLASS$_MPSImageDescriptor", referenced from:
objc-class-ref in Add.o
objc-class-ref in BGRAtoRGBA.o
objc-class-ref in Concat-2F047EC0A4EC20B2.o
objc-class-ref in Convolution.o
objc-class-ref in ConvTranspose.o
objc-class-ref in Crop.o
objc-class-ref in FullyConnected.o
...
"OBJC_CLASS$_MPSCNNConvolution", referenced from:
objc-class-ref in Convolution.o
"OBJC_CLASS$_MPSImage", referenced from:
objc-class-ref in Add.o
objc-class-ref in BGRAtoRGBA.o
objc-class-ref in Concat-2F047EC0A4EC20B2.o
objc-class-ref in Convolution.o
objc-class-ref in ConvTranspose.o
objc-class-ref in Crop.o
objc-class-ref in FullyConnected.o
...
"OBJC_CLASS$_MPSCNNNeuronSigmoid", referenced from:
objc-class-ref in ActivationNeuronType.o
"OBJC_CLASS$_MPSCNNConvolutionDescriptor", referenced from:
objc-class-ref in Convolution.o
objc-class-ref in FullyConnected.o
"OBJC_CLASS$_MPSCNNNeuron", referenced from:
objc-class-ref in ActivationNeuronType.o
objc-class-ref in Neuron.o
"OBJC_CLASS$_MPSCNNFullyConnected", referenced from:
objc-class-ref in FullyConnected.o
"OBJC_CLASS$_MPSCNNNeuronReLU", referenced from:
objc-class-ref in ActivationNeuronType.o
"OBJC_CLASS$_MPSCNNNeuronLinear", referenced from:
objc-class-ref in ActivationNeuronType.o
"OBJC_CLASS$_MPSCNNPooling", referenced from:
objc-class-ref in Pooling.o
"OBJC_CLASS$_MPSCNNNeuronTanH", referenced from:
objc-class-ref in ActivationNeuronType.o
ld: symbol(s) not found for architecture arm64
clang: error: linker command failed with exit code 1 (use -v to see invocation)

No such module 'SwiftProtobuf'

I had to manually add the SwiftProtobuf dependency with Pods, which worked fine, but I can't get the example project to run as it looks like Bender is imported directly into the project.

macOS support

Seems like a great AI framework for iOS. I wish to use your framework on macOS.

My Podfile looks something like this:

swift_version = "5.0"
platform :osx, '10.14'
pod 'MetalBender', :git => 'https://github.com/xmartlabs/Bender.git', :commit => '512ea171950d1ab997b05cf469908fbfc48060e6'

Running pod install says that MetalBender is not available for macOS.

PROMPT> pod install
Analyzing dependencies
Pre-downloading: `MetalBender` from `https://github.com/xmartlabs/Bender.git`, commit `512ea171950d1ab997b05cf469908fbfc48060e6`
[!] The platform of the target `FreeHongKong` (macOS 10.14) is not compatible with `MetalBender (0.5.0)`, which does not support `macOS`.

Best of luck. Thank you for open sourcing your project.

Metal 2 Support

With the beta release of Metal 2 yesterday by Apple, is there a development timeline for supporting this new framework?

Cannot create ConcatV2. Missing or invalid attribute axis

I get the following error when converting a graph containing a concat node whose axis int_val is greater than 2.

fatal error: Cannot create ConcatV2. Missing or invalid attribute axis.: file <omited>/Pods/MetalBender/Sources/Adapters/Tensorflow/TFConverter+Mappers.swift, line 162

And the node in question from the .pbtxt:

node {
  name: "fire2/concat/concat/axis"
  op: "Const"
  attr {
    key: "dtype"
    value {
      type: DT_INT32
    }
  }
  attr {
    key: "value"
    value {
      tensor {
        dtype: DT_INT32
        tensor_shape {
        }
        int_val: 3
      }
    }
  }
}

And if I follow the code correctly, LayerConverter+Mappers.swift calls LayerSizeAxis.fromTF with the value 3, which is implemented as follows:

    static func fromTF(index: Int) -> LayerSizeAxis? {
        switch index {
        case 0:
            return .w
        case 1:
            return .h
        case 2:
            return .f
        default:
            return nil
        }
    }

This evaluates to nil and the error is thrown.

Is there an error in the graph I'm trying to convert, or is it a bug in Bender that it does not support concat indexes beyond axis=2 on a concat layer?

Environment:

  • Xcode 8.3.3,
  • iOS: 10.3.2,
  • SDK version: 10.3
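For what it's worth, one plausible reading is that the mapping above only covers 3-D tensors; in a 4-D NHWC TensorFlow graph the batch axis shifts everything by one, so a ConcatV2 axis of 3 is the feature axis. A hedged sketch of an extended mapping (fromTFNHWC is a hypothetical name; whether Bender should interpret the graph this way is an assumption):

```swift
enum LayerSizeAxis { case w, h, f }

// Hypothetical 4-D variant of fromTF: skip the batch axis (0) and
// shift the existing 3-D mapping (0 -> w, 1 -> h, 2 -> f) by one.
func fromTFNHWC(index: Int) -> LayerSizeAxis? {
    switch index {
    case 1: return .w
    case 2: return .h
    case 3: return .f   // ConcatV2 with axis 3 concatenates feature maps
    default: return nil // batch axis has no LayerSize counterpart
    }
}
```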

Example does not compile

Cloning the repo with tag 0.4.0 checked out, the Example has seven errors after pod installation (0.4.0).
The frameworks seem to be removed during the build: all three frameworks installed by CocoaPods are not found at compile time.

"PBXCp /Users/thibautnoah/Library/Developer/Xcode/DerivedData/Example-fheatxxixelckdgltxeulpnuyhru/Build/Products/Debug-iphoneos/MetalBender.framework /Users/thibautnoah/Library/Developer/Xcode/DerivedData/Example-fheatxxixelckdgltxeulpnuyhru/Build/Products/Debug-iphoneos/Example.app/Frameworks/MetalBender.framework
cd /Users/thibautnoah/testing/Bender/Example
export PATH="/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneOS.platform/Developer/usr/bin:/Applications/Xcode.app/Contents/Developer/usr/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin"
builtin-copy -exclude .DS_Store -exclude CVS -exclude .svn -exclude .git -exclude .hg -exclude Headers -exclude PrivateHeaders -exclude Modules -exclude *.tbd -bitcode-strip replace-with-marker -bitcode-strip-tool /Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/bitcode_strip -resolve-src-symlinks /Users/thibautnoah/Library/Developer/Xcode/DerivedData/Example-fheatxxixelckdgltxeulpnuyhru/Build/Products/Debug-iphoneos/MetalBender.framework /Users/thibautnoah/Library/Developer/Xcode/DerivedData/Example-fheatxxixelckdgltxeulpnuyhru/Build/Products/Debug-iphoneos/Example.app/Frameworks"

"error: /Users/thibautnoah/Library/Developer/Xcode/DerivedData/Example-fheatxxixelckdgltxeulpnuyhru/Build/Products/Debug-iphoneos/MetalBender.framework: No such file or directory"

Simple Non-Image Based Example

Hey @mats-claassen thank you for building this amazing library! This is exactly what I was looking for!!!

I noticed you have a lot of image-based examples, and I am struggling to get started with a simple non-image one. I have a one-layer TensorFlow model using sigmoid that takes in 20 features and outputs 1 label (binary classification). I am struggling to get my model to run successfully on Bender. Here are the errors I am seeing.

2017-06-29 17:48:17.286368-0700 bender_test[409:138318] libMobileGestalt MobileGestaltSupport.m:153: pid 409 (bender_test) does not have sandbox access for frZQaeyWLUvLjeuEK43hmg and IS NOT appropriately entitled
2017-06-29 17:48:17.286403-0700 bender_test[409:138318] libMobileGestalt MobileGestalt.c:550: no access to InverseDeviceID (see <rdar://problem/11744455>)
2017-06-29 17:48:17.391311-0700 bender_test[409:138318] /BuildRoot/Library/Caches/com.apple.xbs/Sources/MetalImage/MetalImage-61.5/MetalPerformanceShaders/Filters/MPSKernel.mm, line 370: error 'Source 0x101118830 texture type must be MTLTextureType2D
'
/BuildRoot/Library/Caches/com.apple.xbs/Sources/MetalImage/MetalImage-61.5/MetalPerformanceShaders/Filters/MPSKernel.mm:370: failed assertion `Source 0x101118830 texture type must be MTLTextureType2D'

Do you have an example I can look at where the input MPSImage is composed of a 20 feature tensor?
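One hedged observation: an MPSImage with more than 4 feature channels is backed by a texture2DArray rather than a plain 2-D texture, which may be what trips the 'must be MTLTextureType2D' assertion if an image-oriented kernel (such as a scaler on the Start node) touches it first. A minimal sketch of describing a 20-feature, 1x1 input (how Bender expects the data uploaded is an assumption):

```swift
import MetalPerformanceShaders

// A 20-feature vector as a 1x1 "image": ceil(20 / 4) = 5 texture slices,
// so the backing texture is a texture2DArray, not MTLTextureType2D.
let descriptor = MPSImageDescriptor(channelFormat: .float16,
                                    width: 1,
                                    height: 1,
                                    featureChannels: 20)
// Creating the image itself requires a Metal device:
// let input = MPSImage(device: device, imageDescriptor: descriptor)
```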

Batch inference API

Is there a batch inference API available? In our use case, we need to perform inference on several (up to 100+) cropped parts of an image. A batch inference API would make this operation efficient. Thanks.

Can't compile with Xcode 9.3, iOS 11.3

After installing from CocoaPods, I get this error on iOS 11.3, Xcode 9.3. Bender version 0.4.1.

Type of expression is ambiguous without more context

It appears in the Convolution layer, as well as in DepthwiseConvolution.

Convolution.swift: line 114

(screenshot)

and again in DepthwiseConvolution.swift: line 41

(screenshot)

Any ideas on how to get this working?

iPhone 6 Plus style transfer example not working on iOS 11

(SOLVED) Updating to the latest Xcode beta solved the issue.

(UPDATE) I just realised I was testing on iOS 11; the example runs fine on iOS 10 with the latest pull request branch (#39).

However, I'm still posting this as an issue for iOS 11.

Xcode version: 8.3.2

This is the console output:
ALL TEST PASSED "Unknown input: ^moments/sufficient_statistics/mean_ss" "Unknown input: ^moments/sufficient_statistics/var_ss" "Unknown input: ^moments_1/sufficient_statistics/mean_ss" "Unknown input: ^moments_1/sufficient_statistics/var_ss" "Unknown input: ^moments_2/sufficient_statistics/mean_ss" "Unknown input: ^moments_2/sufficient_statistics/var_ss" "Unknown input: ^moments_3/sufficient_statistics/mean_ss" "Unknown input: ^moments_3/sufficient_statistics/var_ss" "Unknown input: ^moments_4/sufficient_statistics/mean_ss" "Unknown input: ^moments_4/sufficient_statistics/var_ss" "Unknown input: ^moments_5/sufficient_statistics/mean_ss" "Unknown input: ^moments_5/sufficient_statistics/var_ss" "Unknown input: ^moments_6/sufficient_statistics/mean_ss" "Unknown input: ^moments_6/sufficient_statistics/var_ss" "Unknown input: ^moments_7/sufficient_statistics/mean_ss" "Unknown input: ^moments_7/sufficient_statistics/var_ss" "Unknown input: ^moments_8/sufficient_statistics/mean_ss" "Unknown input: ^moments_8/sufficient_statistics/var_ss" "Unknown input: ^moments_9/sufficient_statistics/mean_ss" "Unknown input: ^moments_9/sufficient_statistics/var_ss" "Unknown input: ^moments_10/sufficient_statistics/mean_ss" "Unknown input: ^moments_10/sufficient_statistics/var_ss" "Unknown input: ^moments_11/sufficient_statistics/mean_ss" "Unknown input: ^moments_11/sufficient_statistics/var_ss" "Unknown input: ^moments_12/sufficient_statistics/mean_ss" "Unknown input: ^moments_12/sufficient_statistics/var_ss" "Unknown input: ^moments_13/sufficient_statistics/mean_ss" "Unknown input: ^moments_13/sufficient_statistics/var_ss" "Set up takes:: 1.77025699615479 (0.564889731927128 per second)"

Screenshot of the example after clicking run of course:
http://imgur.com/a/WIUHE

Correct Installation?

Noob question: downloading the repo and installing the example project using the Cartfile works, but when incorporating Bender into my own project the import clearly fails. I added the lines

platform :ios, '10.0'
pod 'MetalBender', :git => 'https://github.com/xmartlabs/Bender.git'

to my Podfile, but importing Bender fails with a "No such module" error. Sorry if this is not Bender-specific; I don't really know Swift :)
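One thing worth checking (an assumption based on the pod name, not something the thread confirms): since the pod is published as MetalBender, the module may need to be imported under that name rather than Bender:

```swift
import MetalBender
```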

Getting the weights of each layer of the neural network

styleNet = Network(device: device, inputSize: inputSize, parameterLoader: loader)

styleNet.start
->> Convolution(size: ConvSize(outputChannels: 32, kernelSize: 9, stride: 1), id: "conv1")
->> Convolution(size: ConvSize(outputChannels: 64, kernelSize: 3, stride: 2), id: "conv2")
->> Convolution(size: ConvSize(outputChannels: 128, kernelSize: 3, stride: 2), id: "conv3")
->> Residual(size: ConvSize(outputChannels: 128, kernelSize: 3, stride: 1), id: "res_block1")
->> Residual(size: ConvSize(outputChannels: 128, kernelSize: 3, stride: 1), id: "res_block2")
->> Residual(size: ConvSize(outputChannels: 128, kernelSize: 3, stride: 1), id: "res_block3")
->> Residual(size: ConvSize(outputChannels: 128, kernelSize: 3, stride: 1), id: "res_block4")
->> ConvTranspose(size: ConvSize(outputChannels: 64, kernelSize: 3, stride: 2), id: "convt1")
->> ConvTranspose(size: ConvSize(outputChannels: 32, kernelSize: 3, stride: 2), id: "convt2")
->> Convolution(size: ConvSize(outputChannels: 3, kernelSize: 9, stride: 1), neuron: .tanh, id: "convFinal")

  1. I want to feed two pooling layers into an add operation. How can I get the weights of each layer of the neural network?
  2. In Model.run, self?.network.run returns a tensor that I have to handle myself. Can it return the label instead?

Add only works for two layers of the same size

Hi, I'm working on setting up a fast style transfer network in code instead of importing it from a .pb file, so I can start improving it.

However, I'm having some issues trying to set up a residual layer. Here's the network:

        styleNet.start
            ->> Convolution(convSize: ConvSize(outputChannels: 32, kernelSize: 9, stride: 1), neuronType: .none, id: "Variable_0")
            ->> InstanceNorm(shiftModifier: "1", scaleModifier: "2", id: "Variable_")
            ->> Neuron( type: .relu)
            ->> Convolution(convSize: ConvSize(outputChannels: 64, kernelSize: 3, stride: 2), neuronType: .none, id: "Variable_3")
            ->> InstanceNorm(shiftModifier: "4", scaleModifier: "5", id: "Variable_")
            ->> Neuron( type: .relu)
            ->> Convolution(convSize: ConvSize(outputChannels: 128, kernelSize: 3, stride: 2), neuronType: .none, id: "Variable_6")
            ->> InstanceNorm(shiftModifier: "7", scaleModifier: "8", id: "Variable_")
            ->> Neuron( type: .relu)
            ->> ResidualLayer(convSize: ConvSize(outputChannels: 128, kernelSize: 3, stride: 1), layers:
                Convolution(convSize: ConvSize(outputChannels: 128, kernelSize: 3, stride: 1), neuronType: .none, id: "Variable_9")
                    ->> InstanceNorm(shiftModifier: "10", scaleModifier: "11", id: "Variable_")
                    ->> Neuron( type: .relu)
                    ->> Convolution(convSize: ConvSize(outputChannels: 128, kernelSize: 3, stride: 1), neuronType: .none, id: "Variable_12")
                    ->> InstanceNorm(shiftModifier: "13", scaleModifier: "14", id: "Variable_"))
           

When initializing, the following error appears: assertion failed: Add works for two layers of the same size: file /Users/Andre/Downloads/Bender-fix-issue-38/Sources/Layers/Add.swift, line 23

I also printed the layers in the network:


"PRINTING LAYERS"
": Bender.Start"
": Bender.Convolution"
": Bender.InstanceNorm"
": Bender.Neuron"
": Bender.Convolution"
": Bender.InstanceNorm"
": Bender.Neuron"
": Bender.Convolution"
": Bender.InstanceNorm"
": Bender.Neuron"
": Bender.Dummy"
": Bender.Identity"
": Bender.Add"
assertion failed: Add works for two layers of the same size: file /Users/Andre/Downloads/Bender-fix-issue-38/Sources/Layers/Add.swift, line 23

If you're wondering what kind of network I'm trying to emulate, it's this one:
https://github.com/lengstrom/fast-style-transfer/blob/master/src/transform.py

def net(image):
    conv1 = _conv_layer(image, 32, 9, 1)
    conv2 = _conv_layer(conv1, 64, 3, 2)
    conv3 = _conv_layer(conv2, 128, 3, 2)
    resid1 = _residual_block(conv3, 3)
    resid2 = _residual_block(resid1, 3)
    resid3 = _residual_block(resid2, 3)
    resid4 = _residual_block(resid3, 3)
    resid5 = _residual_block(resid4, 3)
    conv_t1 = _conv_tranpose_layer(resid5, 64, 3, 2)
    conv_t2 = _conv_tranpose_layer(conv_t1, 32, 3, 2)
    conv_t3 = _conv_layer(conv_t2, 3, 9, 1, relu=False)
    preds = tf.nn.tanh(conv_t3) * 150 + 255./2
    return preds

Note: I have changed the InstanceNorm input to allow specific shift/scale modifiers; it's a temporary way to set up the weight ids.

Where were the style transfer example models taken from?

Hi, if possible, do you know which example the q_and_w.pb and g_and_w2.pb files were taken from? I'm trying to get similar style transfer models to work with Bender, but the list of unsupported operations varies widely.

If you trained them yourself, it would be helpful to know more about those models.

PS: I would have posted this on Stack Overflow, but I don't have the rep to create the bender tag.
Thanks in advance!

Convolution updateWeights function needs to be modified.

open func updateWeights(device: MTLDevice) {
        guard let network = network else {
            return
        }

        if #available(iOS 11.0, *) {
            if let weightsPointer = weightsPointer {
                dataSource = ConvolutionDataSource(cnnDescriptor: cnnDescriptor,
                                                   weights: UnsafeMutableRawPointer(mutating: weightsPointer.pointer()),
                                                   bias: UnsafeMutablePointer(mutating: biasPointer?.pointer() as UnsafePointer<Float>?))
            } else {
                dataSource = ConvolutionDataSource(cnnDescriptor: cnnDescriptor, parameterLoader: network.parameterLoader,
                                                   layerId: id, weightCount: getWeightsSize(), biasCount:  convSize.outputChannels)
            }
            makeConv(device: device, weights: nil, bias: nil)
        } else {
            let weights = weightsPointer?.pointer() ?? network.parameterLoader.loadWeights(for: id,
                                                                                           modifier: Convolution.weightModifier,
                                                                                           size: getWeightsSize())

            var bias: UnsafePointer<Float>? = nil
            if useBias {
                bias = biasPointer?.pointer() ?? network.parameterLoader.loadWeights(for: id,
                                                                                     modifier: Convolution.biasModifier,
                                                                                     size: convSize.outputChannels)
            }
            makeConv(device: device, weights: weights, bias: bias)
        }
    }

I think the ConvolutionDataSource code should be modified: it does not consider whether bias is used or not.
So I suggest the following code:

else {
    dataSource = ConvolutionDataSource(cnnDescriptor: cnnDescriptor, parameterLoader: network.parameterLoader,
                                       layerId: id, weightCount: getWeightsSize(),
                                       biasCount: useBias ? convSize.outputChannels : 0)
}

Thank you.

fatal error: Cannot create ConcatV2. Missing or invalid attribute axis.

Hello, I just updated to your new version. It seems to work, but I got this error when loading my .pb model.

guard let axisInt32 = axisNode.nodeDef.attr["value"]?.tensor.intVal.first, let axis = LayerSizeAxis.fromTF(index: Int(axisInt32)) else {
fatalError("Cannot create (Constants.Ops.Concat). Missing or invalid attribute axis.")
}

Here axisInt32 = 3, but ConcatV2 cannot be created at axis = LayerSizeAxis.fromTF(index: Int(axisInt32)).
Any idea?
I am basically trying to convert my .pb model, which is derived from https://github.com/Joker316701882/Salience-Object-Detection
and then frozen.
Also some warnings:
"Unknown input: pool4_1:1"
"Unknown input: pool4:1"
"Unknown input: pool3:1"
"Unknown input: pool2:1"
"Unknown input: pool1:1"
Thanks a lot

Clarification: TensorFlow + Magenta .mag file / cyclic redundancy problem

Awesome job on this project.

I'm looking to extend functionality over to support magenta.
Yes, granted current library isn't rigged up to do this.
https://github.com/tensorflow/magenta

I added a .mag file from google's tensorflow / magenta project.

They use a metagraph file, which contains a few extra protobuf types needed from TensorFlow: Tensorflow_MetaGraphDef, MetaInfoDef, and others.

(Side note: feel free to cherry-pick from this stash; my branch includes all the files:
https://github.com/johndpope/swift-grpc-tensorflow/tree/0.0.2
If you ever find yourself wanting to programmatically convert proto files, you may also want to check out https://github.com/nubbel/swift-tensorflow/blob/master/RUNME.sh)

So I rigged up a new wrapper to extract the graph def from a MetaGraphDef:
https://github.com/johndpope/Bender/

if let path = Bundle.main.path(forResource: "attention_rnn", ofType: "mag") {

    let model = try? Data(contentsOf: URL(fileURLWithPath: path))

    // pull apart the generator bundle
    let myGraphProto = try? Tensorflow_Magenta_GeneratorBundle(serializedData: model!)

    let metaGraphData = myGraphProto!.metagraphFile

    let graph1 = try? Tensorflow_MetaGraphDef(serializedData: metaGraphData)
    print("graph1:", graph1)
    // output: https://gist.github.com/johndpope/7be8b9b365050ad9615938ae15c36ac4

    // this errors.
    var graph = TFGraph(graphDef: graph1!.graphDef)
}

could there be a specific nip and tuck that could be done to get this to work?
https://gist.github.com/johndpope/7be8b9b365050ad9615938ae15c36ac4

related
magenta/magenta#710

Assertion error with Mobilenet on iOS 10 device

I am using the latest master branch code and testing the newly added MobileNet sample code in our own project.

  • Environment:
    Xcode 9.2, iOS 11.2 SDK, iPhone 7 running iOS 10.3.3.

  • Assertion error:
    Number of network inputs xx and input sizes xx are not equal. Line 109, Network.swift.

We also tested on iOS 11 without hitting the assertion error. Since this framework is supposed to work on iOS 10+, I treat this issue as a defect.

About depthwise_conv2d support

Thank you for this API for deep learning on iOS. depthwise_conv2d is currently absent from Supported Layers.md. It is a very important layer for MobileNet, which runs quickly on mobile. Do you have a plan to support it?

Example.ConcatTest: Compiler failed to build request fatal error: Unable to create pipeline state, check metal shaders: file

Environment: Xcode: Version 8.3.3 (8E3004b)
iOS: 10.3
Device: iPhone 6 Plus

Stacktrace:

2017-07-10 18:54:19.618344-0400 Example[1174:306198] [DYMTLInitPlatform] platform initialization successful
2017-07-10 18:54:19.753218-0400 Example[1174:305990] Metal GPU Frame Capture Enabled
2017-07-10 18:54:19.754074-0400 Example[1174:305990] Metal API Validation Enabled
2017-07-10 18:54:19.808660-0400 Example[1174:305990] libMobileGestalt MobileGestaltSupport.m:153: pid 1174 (Example) does not have sandbox access for frZQaeyWLUvLjeuEK43hmg and IS NOT appropriately entitled
2017-07-10 18:54:19.808771-0400 Example[1174:305990] libMobileGestalt MobileGestalt.c:550: no access to InverseDeviceID (see rdar://problem/11744455)
Running test: Example.TextureConversionTest
Running test: Example.LocalResponseNormTest
Running test: Example.InstanceNormTest
Running test: Example.ConcatTest
2017-07-10 18:54:20.203515-0400 Example[1174:306178] Compiler failed to build request
fatal error: Unable to create pipeline state, check metal shaders: file /LATEST_RESEARCH/LATEST/Bender/Sources/Core/MetalShaderManager.swift, line 61
2017-07-10 18:54:20.205110-0400 Example[1174:305990] fatal error: Unable to create pipeline state, check metal shaders: file /LATEST_RESEARCH/LATEST/Bender/Sources/Core/MetalShaderManager.swift, line 61

Tensorflow - operation coverage / code generation fyi

Not sure this helps, but I built out some Swift code that reads in the ops.pb file (which contains every TensorFlow operation) and stubs out Swift code using StencilKit:
https://github.com/johndpope/tensorflow/blob/master/tensorflow/swift/misc/OperationDefinitions.stencil
https://github.com/johndpope/tensorflow/tree/master/tensorflow/swift/Sources

(This was part of a wider attempt to port some Go TensorFlow server-side code to Swift:
tensorflow/tensorflow#19)

Why could this be useful?

It may help get wider coverage of operations not yet supported inside Bender.
It could point people to where further porting help is needed.
It may also clarify which ops are not needed or can't be supported.

Upsample layer?

Hello,
I need an upsample layer to implement U-Net for segmentation.
Is there one? Will it be implemented?

How to make the Bender data loader faster?

I ran into a problem using Bender to load model data from a training framework like PyTorch.
I need to convert it from float32 to float16 (half) in a Metal kernel for processing, but I found this is very slow. Is there any way to speed it up?
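One hedged suggestion: converting float32 to float16 in bulk on the CPU with Accelerate's vImage is usually much faster than converting element-wise inside a Metal kernel; the converted buffer can then be handed to Metal directly. A sketch (the buffer layout is simplified to a single planar row):

```swift
import Accelerate

// Bulk float32 -> float16 conversion via vImage. The result is raw
// IEEE 754 half-precision bit patterns stored as UInt16.
func toHalf(_ input: [Float]) -> [UInt16] {
    var output = [UInt16](repeating: 0, count: input.count)
    input.withUnsafeBufferPointer { src in
        output.withUnsafeMutableBufferPointer { dst in
            var srcBuffer = vImage_Buffer(
                data: UnsafeMutableRawPointer(mutating: src.baseAddress!),
                height: 1,
                width: vImagePixelCount(input.count),
                rowBytes: input.count * MemoryLayout<Float>.stride)
            var dstBuffer = vImage_Buffer(
                data: dst.baseAddress!,
                height: 1,
                width: vImagePixelCount(input.count),
                rowBytes: input.count * MemoryLayout<UInt16>.stride)
            vImageConvert_PlanarFtoPlanar16F(&srcBuffer, &dstBuffer, 0)
        }
    }
    return output
}
```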

How to use model with 2 input parameters?

I have a style transfer model that has 2 input parameters: an image and a style number. Where can I add the style number parameter? In .run there is only a place for the image and a completionHandler.

Unable to find a specification for `MetalPerformanceShadersProxy`

When running pod install with pod 'MetalBender', :git => 'https://github.com/xmartlabs/Bender.git' I get the following error in the terminal.

[!] Unable to find a specification for MetalPerformanceShadersProxy

Looking at the CocoaPods specs, it seems MetalPerformanceShadersProxy was removed a few hours ago. Did I miss something?

(screenshot)

CocoaPods 1.3.1, Xcode 8.3.3, iOS 10 on an iPhone 6

Tests fail in iOS 11.3.1 on iPhone 8+

In iOS 11.3.1, the tests for the Concat and LocalResponseNorm layers fail on iPhone 8 and iPhone X.
They work on iOS 11.3 (to my understanding) and also if run on an iPhone 7 or below.

I am not sure this is a Bender or an iOS issue.

Pod Issue

When I add the pod to an existing project, I am facing this issue:

/Pods/MetalBender/Sources/Graph/Weak.swift:11:26: 'class' must come first in the requirement list

Can't we add it as a pod?

Resize & Crop on Start node

Only when the image aspect ratio is not the expected one:

  1. The scaledH & scaledW seem to be calculated wrong.
  2. I've seen some race conditions between the resize encode and the crop encode, maybe related to the MPSTemporaryImage. The image after the Start node looks noisy.

Documentation for efficient storage of models

"Bender allows you to use efficient storage methods to keep the models up to 10x smaller in size. "
The bender website has this statement but I couldn't any documentation on this.
