neo-ai / tvm

This project forked from apache/tvm

Open deep learning compiler stack for CPUs, GPUs, and specialized accelerators

Home Page: https://tvm.ai

License: Apache License 2.0


tvm's Introduction

DLR

DLR is a compact, common runtime for deep learning models and decision tree models compiled by AWS SageMaker Neo, TVM, or Treelite. DLR uses the TVM runtime, Treelite runtime, NVIDIA TensorRT™, and can include other hardware-specific runtimes. DLR provides unified Python/C++ APIs for loading and running compiled models on various devices. DLR currently supports platforms from Intel, NVIDIA, and ARM, with support for Xilinx, Cadence, and Qualcomm coming soon.

Installation

On x86_64 CPU targets running Linux, you can install the latest release of the DLR package via

pip install dlr

For installation of DLR on GPU targets or non-x86 edge devices, please refer to Releases for prebuilt binaries, or Installing DLR for building DLR from source.

Usage

import dlr
import numpy as np

# Load model.
# /path/to/model is a directory containing the compiled model artifacts (.so, .params, .json)
model = dlr.DLRModel('/path/to/model', 'cpu', 0)

# Prepare some input data (the dtype and layout should match what the model was compiled for).
x = np.random.rand(1, 3, 224, 224).astype('float32')

# Run inference.
y = model.run(x)
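Compiled models are usually sensitive to input layout and dtype. A minimal preprocessing sketch (the helper name `to_nchw_batch` is ours, not part of the DLR API) for turning an HWC uint8 image into the NCHW float32 batch a 224x224 image model typically expects:

```python
import numpy as np

def to_nchw_batch(img):
    """Convert an HWC uint8 image to a 1xCxHxW float32 batch,
    scaled to [0, 1]. Mean/std normalization is model-specific
    and omitted here."""
    x = img.astype('float32') / 255.0      # uint8 -> float32 in [0, 1]
    x = np.transpose(x, (2, 0, 1))         # HWC -> CHW
    return x[np.newaxis, ...]              # add batch dimension

# Example: a random 224x224 RGB image
img = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)
batch = to_nchw_batch(img)
print(batch.shape)   # (1, 3, 224, 224)
```

The resulting array can be passed to `model.run` as in the snippet above.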

Release compatibility with different versions of TVM

Each release of DLR is capable of executing models compiled with the same corresponding release of neo-ai/tvm. For example, if you used the release-1.2.0 branch of neo-ai/tvm to compile your model, then you should use the release-1.2.0 branch of neo-ai/neo-ai-dlr to execute the compiled model. Please see DLR Releases for more information.

Documentation

For instructions on using DLR, please refer to Amazon SageMaker Neo – Train Your Machine Learning Models Once, Run Them Anywhere.

Also check out the API documentation.

Call Home Feature

You acknowledge and agree that DLR collects the following metrics to help improve its performance. By default, Amazon will collect and store the following information from your device:

record_type: <enum, internal record status, such as model_loaded, model_>, 
arch: <string, platform architecture, e.g. 64bit>, 
osname: <string, platform OS name, e.g. Linux>, 
uuid: <string, one-way non-identifiable hashed MAC address, e.g. 8fb35b79f7c7aa2f86afbcb231b1ba6e>, 
dist: <string, OS distribution, e.g. Ubuntu 16.04 xenial>, 
machine: <string, machine type, e.g. x86_64 or i386>, 
model: <string, one-way non-identifiable hashed model name, e.g. 36f613e00f707dbe53a64b1d9625ae7d> 

If you wish to opt-out of this data collection feature, please follow the steps below:

1. Disable through code
  ``` 
  from dlr.counter.phone_home import PhoneHome
  PhoneHome.disable_feature()
  ```
2. Or, create a config file named ccm_config.json inside your DLR target directory, i.e. python3.6/site-packages/dlr/counter/ccm_config.json, with the following content: ```{ "enable_phone_home" : false }```
3. Restart DLR application. 
4. Validate that the feature is disabled by verifying that the notification is no longer displayed, or programmatically with the following command: 
    ```
    from dlr.counter.phone_home import PhoneHome 
    PhoneHome.is_enabled() # false if disabled 
    ```
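The config-file route in step 2 can also be scripted. A minimal sketch, assuming you know your DLR counter directory; the `write_ccm_config` helper is hypothetical and not part of DLR, while the filename and key come from the steps above:

```python
import json
import os

def write_ccm_config(dlr_counter_dir):
    """Write a ccm_config.json that disables the call-home feature.
    dlr_counter_dir should be the .../site-packages/dlr/counter
    directory of your DLR installation."""
    path = os.path.join(dlr_counter_dir, "ccm_config.json")
    with open(path, "w") as f:
        json.dump({"enable_phone_home": False}, f)
    return path

# Example with a stand-in directory:
import tempfile
d = tempfile.mkdtemp()
p = write_ccm_config(d)
print(open(p).read())   # {"enable_phone_home": false}
```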

Examples

We have prepared several examples demonstrating how to use the DLR API on different platforms.

License

This library is licensed under the Apache License Version 2.0.


tvm's Issues

avg_pool2d count_include_pad flag not set correctly in TensorRT wrapper

In tensorrt_executor.cc, the AddPooling function does not call nvinfer1::IPoolingLayer::setAverageCountExcludesPadding, which is needed to honor the count_include_pad attribute of avg_pool2d. As a result, AverageCountExcludesPadding stays at its default of true. For models such as Inception V3 that use average pooling with count_include_pad set to true (across multiple frameworks, including MXNet and PyTorch), setAverageCountExcludesPadding should be called with false. The result is that Inception V3 currently produces incorrect outputs for multiple frameworks. @reminisce
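To illustrate what the flag changes, here is a small NumPy sketch of stride-1 average pooling with and without padded elements counted in the divisor. This models the count_include_pad semantics only; it is not the TensorRT code path:

```python
import numpy as np

def avg_pool2d(x, k, pad, count_include_pad):
    """Average pooling (stride 1) over a 2D array with zero padding.
    count_include_pad controls whether padded zeros count toward the
    divisor, mirroring the avg_pool2d attribute in question."""
    H, W = x.shape
    xp = np.pad(x, pad)
    out = np.empty((H + 2 * pad - k + 1, W + 2 * pad - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            window = xp[i:i + k, j:j + k]
            if count_include_pad:
                n = k * k                      # padded zeros count too
            else:
                # count only window positions overlapping the real image
                n = ((min(pad + H - i, k) - max(pad - i, 0)) *
                     (min(pad + W - j, k) - max(pad - j, 0)))
            out[i, j] = window.sum() / n
    return out

x = np.ones((3, 3))
# Corner outputs differ: padded zeros dilute the average when included.
print(avg_pool2d(x, k=3, pad=1, count_include_pad=True)[0, 0])   # 0.444...
print(avg_pool2d(x, k=3, pad=1, count_include_pad=False)[0, 0])  # 1.0
```

With AverageCountExcludesPadding left at true, TensorRT computes the second variant even when the model was trained with the first, which explains the output mismatch described above.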

ONNX Upsample model compile error

Hi!

I have a compilation problem with SageMaker Neo. May I report it here?
When I try to compile an ONNX graph like the one below, I get an error and the conversion fails.

graph(%input : Float(1:519168, 3:173056, 416:416, 416:1),
%basenet.slice1.0.weight : Float(64:27, 3:9, 3:3, 3:1),
%basenet.slice1.0.bias : Float(64:1),
%basenet.slice1.1.weight : Float(64:1),
%basenet.slice1.1.bias : Float(64:1),
%basenet.slice1.1.running_mean : Float(64:1),
%basenet.slice1.1.running_var : Float(64:1),
%basenet.slice1.3.weight : Float(64:576, 64:9, 3:3, 3:1),
%basenet.slice1.3.bias : Float(64:1),
%basenet.slice1.4.weight : Float(64:1),
%basenet.slice1.4.bias : Float(64:1),
%basenet.slice1.4.running_mean : Float(64:1),
%basenet.slice1.4.running_var : Float(64:1),
%basenet.slice1.7.weight : Float(128:576, 64:9, 3:3, 3:1),
%basenet.slice1.7.bias : Float(128:1),
%basenet.slice1.8.weight : Float(128:1),
%basenet.slice1.8.bias : Float(128:1),
%basenet.slice1.8.running_mean : Float(128:1),
%basenet.slice1.8.running_var : Float(128:1),
%basenet.slice1.10.weight : Float(128:1152, 128:9, 3:3, 3:1),
%basenet.slice1.10.bias : Float(128:1),
%basenet.slice1.11.weight : Float(128:1),
%basenet.slice1.11.bias : Float(128:1),
%basenet.slice1.11.running_mean : Float(128:1),
%basenet.slice1.11.running_var : Float(128:1),
%basenet.slice2.14.weight : Float(256:1152, 128:9, 3:3, 3:1),
%basenet.slice2.14.bias : Float(256:1),
%basenet.slice2.15.weight : Float(256:1),
%basenet.slice2.15.bias : Float(256:1),
%basenet.slice2.15.running_mean : Float(256:1),
%basenet.slice2.15.running_var : Float(256:1),
%basenet.slice2.17.weight : Float(256:2304, 256:9, 3:3, 3:1),
%basenet.slice2.17.bias : Float(256:1),
%basenet.slice2.18.weight : Float(256:1),
%basenet.slice2.18.bias : Float(256:1),
%basenet.slice2.18.running_mean : Float(256:1),
%basenet.slice2.18.running_var : Float(256:1),
%basenet.slice3.20.weight : Float(256:2304, 256:9, 3:3, 3:1),
%basenet.slice3.20.bias : Float(256:1),
%basenet.slice3.21.weight : Float(256:1),
%basenet.slice3.21.bias : Float(256:1),
%basenet.slice3.21.running_mean : Float(256:1),
%basenet.slice3.21.running_var : Float(256:1),
%basenet.slice3.24.weight : Float(512:2304, 256:9, 3:3, 3:1),
%basenet.slice3.24.bias : Float(512:1),
%basenet.slice3.25.weight : Float(512:1),
%basenet.slice3.25.bias : Float(512:1),
%basenet.slice3.25.running_mean : Float(512:1),
%basenet.slice3.25.running_var : Float(512:1),
%basenet.slice3.27.weight : Float(512:4608, 512:9, 3:3, 3:1),
%basenet.slice3.27.bias : Float(512:1),
%basenet.slice3.28.weight : Float(512:1),
%basenet.slice3.28.bias : Float(512:1),
%basenet.slice3.28.running_mean : Float(512:1),
%basenet.slice3.28.running_var : Float(512:1),
%basenet.slice4.30.weight : Float(512:4608, 512:9, 3:3, 3:1),
%basenet.slice4.30.bias : Float(512:1),
%basenet.slice4.31.weight : Float(512:1),
%basenet.slice4.31.bias : Float(512:1),
%basenet.slice4.31.running_mean : Float(512:1),
%basenet.slice4.31.running_var : Float(512:1),
%basenet.slice4.34.weight : Float(512:4608, 512:9, 3:3, 3:1),
%basenet.slice4.34.bias : Float(512:1),
%basenet.slice4.35.weight : Float(512:1),
%basenet.slice4.35.bias : Float(512:1),
%basenet.slice4.35.running_mean : Float(512:1),
%basenet.slice4.35.running_var : Float(512:1),
%basenet.slice4.37.weight : Float(512:4608, 512:9, 3:3, 3:1),
%basenet.slice4.37.bias : Float(512:1),
%basenet.slice4.38.weight : Float(512:1),
%basenet.slice4.38.bias : Float(512:1),
%basenet.slice4.38.running_mean : Float(512:1),
%basenet.slice4.38.running_var : Float(512:1),
%basenet.slice5.1.weight : Float(1024:4608, 512:9, 3:3, 3:1),
%basenet.slice5.1.bias : Float(1024:1),
%basenet.slice5.2.weight : Float(1024:1024, 1024:1, 1:1, 1:1),
%basenet.slice5.2.bias : Float(1024:1),
%upconv1.conv.0.weight : Float(512:1536, 1536:1, 1:1, 1:1),
%upconv1.conv.0.bias : Float(512:1),
%upconv1.conv.1.weight : Float(512:1),
%upconv1.conv.1.bias : Float(512:1),
%upconv1.conv.1.running_mean : Float(512:1),
%upconv1.conv.1.running_var : Float(512:1),
%upconv1.conv.3.weight : Float(256:4608, 512:9, 3:3, 3:1),
%upconv1.conv.3.bias : Float(256:1),
%upconv1.conv.4.weight : Float(256:1),
%upconv1.conv.4.bias : Float(256:1),
%upconv1.conv.4.running_mean : Float(256:1),
%upconv1.conv.4.running_var : Float(256:1)):
%103 : Float(1:11075584, 64:173056, 416:416, 416:1) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%input, %basenet.slice1.0.weight, %basenet.slice1.0.bias) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/modules/conv.py:416:0
%104 : Float(1:11075584, 64:173056, 416:416, 416:1) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%103, %basenet.slice1.1.weight, %basenet.slice1.1.bias, %basenet.slice1.1.running_mean, %basenet.slice1.1.running_var) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:2016:0
%105 : Float(1:11075584, 64:173056, 416:416, 416:1) = onnx::Relu(%104) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:1117:0
%106 : Float(1:11075584, 64:173056, 416:416, 416:1) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%105, %basenet.slice1.3.weight, %basenet.slice1.3.bias) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/modules/conv.py:416:0
%107 : Float(1:11075584, 64:173056, 416:416, 416:1) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%106, %basenet.slice1.4.weight, %basenet.slice1.4.bias, %basenet.slice1.4.running_mean, %basenet.slice1.4.running_var) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:2016:0
%108 : Float(1:11075584, 64:173056, 416:416, 416:1) = onnx::Relu(%107) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:1117:0
%109 : Float(1:2768896, 64:43264, 208:208, 208:1) = onnx::MaxPool[kernel_shape=[2, 2], pads=[0, 0, 0, 0], strides=[2, 2]] # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:576:0
%110 : Float(1:5537792, 128:43264, 208:208, 208:1) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%109, %basenet.slice1.7.weight, %basenet.slice1.7.bias) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/modules/conv.py:416:0
%111 : Float(1:5537792, 128:43264, 208:208, 208:1) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%110, %basenet.slice1.8.weight, %basenet.slice1.8.bias, %basenet.slice1.8.running_mean, %basenet.slice1.8.running_var) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:2016:0
%112 : Float(1:5537792, 128:43264, 208:208, 208:1) = onnx::Relu(%111) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:1117:0
%113 : Float(1:5537792, 128:43264, 208:208, 208:1) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%112, %basenet.slice1.10.weight, %basenet.slice1.10.bias) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/modules/conv.py:416:0
%114 : Float(1:5537792, 128:43264, 208:208, 208:1) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%113, %basenet.slice1.11.weight, %basenet.slice1.11.bias, %basenet.slice1.11.running_mean, %basenet.slice1.11.running_var) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:2016:0
%115 : Float(1:5537792, 128:43264, 208:208, 208:1) = onnx::Relu(%114) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:1117:0
%116 : Float(1:1384448, 128:10816, 104:104, 104:1) = onnx::MaxPool[kernel_shape=[2, 2], pads=[0, 0, 0, 0], strides=[2, 2]] # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:576:0
%117 : Float(1:2768896, 256:10816, 104:104, 104:1) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%116, %basenet.slice2.14.weight, %basenet.slice2.14.bias) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/modules/conv.py:416:0
%118 : Float(1:2768896, 256:10816, 104:104, 104:1) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%117, %basenet.slice2.15.weight, %basenet.slice2.15.bias, %basenet.slice2.15.running_mean, %basenet.slice2.15.running_var) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:2016:0
%119 : Float(1:2768896, 256:10816, 104:104, 104:1) = onnx::Relu(%118) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:1117:0
%120 : Float(1:2768896, 256:10816, 104:104, 104:1) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%119, %basenet.slice2.17.weight, %basenet.slice2.17.bias) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/modules/conv.py:416:0
%121 : Float(1:2768896, 256:10816, 104:104, 104:1) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%120, %basenet.slice2.18.weight, %basenet.slice2.18.bias, %basenet.slice2.18.running_mean, %basenet.slice2.18.running_var) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:2016:0
%122 : Float(1:2768896, 256:10816, 104:104, 104:1) = onnx::Relu(%121) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:1117:0
%123 : Float(1:2768896, 256:10816, 104:104, 104:1) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%122, %basenet.slice3.20.weight, %basenet.slice3.20.bias) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/modules/conv.py:416:0
%124 : Float(1:2768896, 256:10816, 104:104, 104:1) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%123, %basenet.slice3.21.weight, %basenet.slice3.21.bias, %basenet.slice3.21.running_mean, %basenet.slice3.21.running_var) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:2016:0
%125 : Float(1:2768896, 256:10816, 104:104, 104:1) = onnx::Relu(%124) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:1117:0
%126 : Float(1:692224, 256:2704, 52:52, 52:1) = onnx::MaxPool[kernel_shape=[2, 2], pads=[0, 0, 0, 0], strides=[2, 2]] # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:576:0
%127 : Float(1:1384448, 512:2704, 52:52, 52:1) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%126, %basenet.slice3.24.weight, %basenet.slice3.24.bias) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/modules/conv.py:416:0
%128 : Float(1:1384448, 512:2704, 52:52, 52:1) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%127, %basenet.slice3.25.weight, %basenet.slice3.25.bias, %basenet.slice3.25.running_mean, %basenet.slice3.25.running_var) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:2016:0
%129 : Float(1:1384448, 512:2704, 52:52, 52:1) = onnx::Relu(%128) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:1117:0
%130 : Float(1:1384448, 512:2704, 52:52, 52:1) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%129, %basenet.slice3.27.weight, %basenet.slice3.27.bias) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/modules/conv.py:416:0
%131 : Float(1:1384448, 512:2704, 52:52, 52:1) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%130, %basenet.slice3.28.weight, %basenet.slice3.28.bias, %basenet.slice3.28.running_mean, %basenet.slice3.28.running_var) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:2016:0
%132 : Float(1:1384448, 512:2704, 52:52, 52:1) = onnx::Relu(%131) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:1117:0
%133 : Float(1:1384448, 512:2704, 52:52, 52:1) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%132, %basenet.slice4.30.weight, %basenet.slice4.30.bias) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/modules/conv.py:416:0
%134 : Float(1:1384448, 512:2704, 52:52, 52:1) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%133, %basenet.slice4.31.weight, %basenet.slice4.31.bias, %basenet.slice4.31.running_mean, %basenet.slice4.31.running_var) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:2016:0
%135 : Float(1:1384448, 512:2704, 52:52, 52:1) = onnx::Relu(%134) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:1117:0
%136 : Float(1:346112, 512:676, 26:26, 26:1) = onnx::MaxPool[kernel_shape=[2, 2], pads=[0, 0, 0, 0], strides=[2, 2]] # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:576:0
%137 : Float(1:346112, 512:676, 26:26, 26:1) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%136, %basenet.slice4.34.weight, %basenet.slice4.34.bias) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/modules/conv.py:416:0
%138 : Float(1:346112, 512:676, 26:26, 26:1) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%137, %basenet.slice4.35.weight, %basenet.slice4.35.bias, %basenet.slice4.35.running_mean, %basenet.slice4.35.running_var) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:2016:0
%139 : Float(1:346112, 512:676, 26:26, 26:1) = onnx::Relu(%138) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:1117:0
%140 : Float(1:346112, 512:676, 26:26, 26:1) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%139, %basenet.slice4.37.weight, %basenet.slice4.37.bias) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/modules/conv.py:416:0
%141 : Float(1:346112, 512:676, 26:26, 26:1) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%140, %basenet.slice4.38.weight, %basenet.slice4.38.bias, %basenet.slice4.38.running_mean, %basenet.slice4.38.running_var) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:2016:0
%142 : Float(1:346112, 512:676, 26:26, 26:1) = onnx::MaxPool[kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]] # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:576:0
%143 : Float(1:692224, 1024:676, 26:26, 26:1) = onnx::Conv[dilations=[6, 6], group=1, kernel_shape=[3, 3], pads=[6, 6, 6, 6], strides=[1, 1]](%142, %basenet.slice5.1.weight, %basenet.slice5.1.bias) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/modules/conv.py:416:0
%144 : Float(1:692224, 1024:676, 26:26, 26:1) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%143, %basenet.slice5.2.weight, %basenet.slice5.2.bias) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/modules/conv.py:416:0
%145 : Float(1:1038336, 1536:676, 26:26, 26:1) = onnx::Concat[axis=1](%144, %141) # /home/ec2-user/SageMaker/CRAFT-pytorch/craft.py:63:0
%146 : Float(1:346112, 512:676, 26:26, 26:1) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[1, 1], pads=[0, 0, 0, 0], strides=[1, 1]](%145, %upconv1.conv.0.weight, %upconv1.conv.0.bias) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/modules/conv.py:416:0
%147 : Float(1:346112, 512:676, 26:26, 26:1) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%146, %upconv1.conv.1.weight, %upconv1.conv.1.bias, %upconv1.conv.1.running_mean, %upconv1.conv.1.running_var) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:2016:0
%148 : Float(1:346112, 512:676, 26:26, 26:1) = onnx::Relu(%147) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:1117:0
%149 : Float(1:173056, 256:676, 26:26, 26:1) = onnx::Conv[dilations=[1, 1], group=1, kernel_shape=[3, 3], pads=[1, 1, 1, 1], strides=[1, 1]](%148, %upconv1.conv.3.weight, %upconv1.conv.3.bias) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/modules/conv.py:416:0
%150 : Float(1:173056, 256:676, 26:26, 26:1) = onnx::BatchNormalization[epsilon=1.0000000000000001e-05, momentum=0.90000000000000002](%149, %upconv1.conv.4.weight, %upconv1.conv.4.bias, %upconv1.conv.4.running_mean, %upconv1.conv.4.running_var) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:2016:0
%151 : Float(1:173056, 256:676, 26:26, 26:1) = onnx::Relu(%150) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:1117:0
%152 : Tensor = onnx::Shape(%132)
%153 : Tensor = onnx::Constant[value={2}]
%154 : Long() = onnx::Gather[axis=0](%152, %153) # /home/ec2-user/SageMaker/CRAFT-pytorch/craft.py:66:0
%155 : Tensor = onnx::Shape(%132)
%156 : Tensor = onnx::Constant[value={3}]
%157 : Long() = onnx::Gather[axis=0](%155, %156) # /home/ec2-user/SageMaker/CRAFT-pytorch/craft.py:66:0
%158 : Tensor = onnx::Unsqueeze[axes=[0]]
%159 : Tensor = onnx::Unsqueeze[axes=[0]]
%160 : Tensor = onnx::Concat[axis=0](%158, %159)
%161 : Tensor = onnx::Constant[value= 1 1 [ CPUFloatType{2} ]]
%162 : Tensor = onnx::Cast[to=1]
%163 : Tensor = onnx::Shape(%151)
%164 : Tensor = onnx::Slice[axes=[0], ends=[9223372036854775807], starts=[2]]
%165 : Tensor = onnx::Cast[to=1]
%166 : Tensor = onnx::Div(%162, %165)
%167 : Tensor = onnx::Concat[axis=0](%161, %166)
%168 : Float(1:692224, 256:2704, 52:52, 52:1) = onnx::Upsample[mode="linear"](%151, %167) # /home/ec2-user/anaconda3/envs/pytorch_latest_p36/lib/python3.6/site-packages/torch/nn/functional.py:3163:0
return (%168)

I'm getting the following error on AWS

ClientError: InputConfiguration: TVM cannot convert ONNX model. Please make sure the framework you selected is correct. <class 'tvm.tir.expr.Any'> has no attribute value

When I tried it on Oct 9, 2020, there was no problem even with Upsample, but as of Nov 12, 2020, this error occurs.
The same thing happens with Resize.

Flaky CI in task_python_docs.sh


When attempting to merge #88, we noticed some flaky CI issues. The CI would hang while generating docs for the from_caffe2.py tutorial. When running the test script locally in the CI container (v0.56 for neoai/ci-gpu), it would pass but also occasionally produce other errors, which could indicate a Sphinx issue. Updating to v0.60 in the recent merge will likely resolve these issues.

CI build failure

https://neo-ai-ci.amazon-ml.com/blue/organizations/jenkins/tvm/detail/PR-3/12/pipeline

```
+ docker/bash.sh tvmai/ci-i386 ./tests/scripts/task_build.sh build -j2
WORKSPACE: /home/ubuntu/workspace/tvm/build-i386
DOCKER CONTAINER NAME: tvmai/ci-i386

Running './tests/scripts/task_build.sh build -j2' inside tvmai/ci-i386...
docker
mesg: ttyname failed: Inappropriate ioctl for device
Adding group `ubuntu' (GID 1000) ...
Done.
-- The C compiler identification is GNU 5.4.0
-- The CXX compiler identification is GNU 5.4.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
CMake Error at CMakeLists.txt:45 (getFromList):
  Unknown CMake command "getFromList".

-- Configuring incomplete, errors occurred!
See also "/workspace/build/CMakeFiles/CMakeOutput.log".
```

ANTLR cmake build failure

https://neo-ai-ci.amazon-ml.com/blue/organizations/jenkins/tvm/detail/PR-7/3/pipeline/36/

```
+ docker/bash.sh tvmai/ci-cpu ./tests/scripts/task_build.sh build -j2
WORKSPACE: /home/ubuntu/workspace/tvm/build-cpu
DOCKER CONTAINER NAME: tvmai/ci-cpu

Running './tests/scripts/task_build.sh build -j2' inside tvmai/ci-cpu...
docker
mesg: ttyname failed: Inappropriate ioctl for device
Adding group `ubuntu' (GID 1000) ...
Done.
-- The C compiler identification is GNU 5.4.0
-- The CXX compiler identification is GNU 5.4.0
-- Check for working C compiler: /usr/bin/cc
-- Check for working C compiler: /usr/bin/cc -- works
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Detecting C compile features
-- Detecting C compile features - done
-- Check for working CXX compiler: /usr/bin/c++
-- Check for working CXX compiler: /usr/bin/c++ -- works
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- Performing Test SUPPORT_CXX11
-- Performing Test SUPPORT_CXX11 - Success
-- Build with RPC support...
-- Build with Graph runtime support...
-- Build with Graph runtime debug support...
-- Build VTA runtime with target: sim
-- Use llvm-config=llvm-config-4.0
-- /usr/lib/llvm-4.0/include
-- Found LLVM_INCLUDE_DIRS=/usr/lib/llvm-4.0/include
-- Found LLVM_DEFINITIONS= -DNDEBUG -D_GNU_SOURCE -D__STDC_CONSTANT_MACROS -D__STDC_FORMAT_MACROS -D__STDC_LIMIT_MACROS
-- Found TVM_LLVM_VERSION=40
-- Build with LLVM
-- Set TVM_LLVM_VERSION=40
CMake Error at cmake/modules/ANTLR.cmake:12 (list):
  list GET given empty list
Call Stack (most recent call first):
  CMakeLists.txt:190 (include)

-- Build with contrib.sort
-- Build with contrib.hybriddump
-- Configuring incomplete, errors occurred!
See also "/workspace/build/CMakeFiles/CMakeOutput.log".
See also "/workspace/build/CMakeFiles/CMakeError.log".
```

task_python_vta.sh failed

https://neo-ai-ci.amazon-ml.com/blue/organizations/jenkins/tvm/detail/PR-7/7/pipeline/36

```
+ docker/bash.sh tvmai/ci-cpu ./tests/scripts/task_python_vta.sh
WORKSPACE: /home/ubuntu/workspace/tvm/build-cpu
DOCKER CONTAINER NAME: tvmai/ci-cpu

Running './tests/scripts/task_python_vta.sh' inside tvmai/ci-cpu...
docker
mesg: ttyname failed: Inappropriate ioctl for device
Adding group `ubuntu' (GID 1000) ...
Done.
cd python; python setup.py build_ext --inplace
/usr/local/lib/python2.7/dist-packages/Cython/Compiler/Main.py:367: FutureWarning: Cython directive 'language_level' not set, using 2 for now (Py2). This will change in a later release! File: /workspace/python/tvm/_ffi/_cython/core.pyx
  tree = Parsing.p_module(s, pxd, full_module_name)
Compiling tvm/_ffi/_cython/core.pyx because it changed.
[1/1] Cythonizing tvm/_ffi/_cython/core.pyx
/usr/lib/python2.7/dist-packages/setuptools/dist.py:285: UserWarning: Normalizing '0.5.dev' to '0.5.dev0'
  normalized_version,
running build_ext
building 'tvm._ffi._cy2.core' extension
creating build
creating build/temp.linux-x86_64-2.7
creating build/temp.linux-x86_64-2.7/tvm
creating build/temp.linux-x86_64-2.7/tvm/_ffi
creating build/temp.linux-x86_64-2.7/tvm/_ffi/_cython
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -fno-strict-aliasing -Wdate-time -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -fPIC -I../include/ -I../3rdparty/dmlc-core/include -I../3rdparty/dlpack/include -I/usr/include/python2.7 -c tvm/_ffi/_cython/core.cpp -o build/temp.linux-x86_64-2.7/tvm/_ffi/_cython/core.o
cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
creating build/lib.linux-x86_64-2.7
creating build/lib.linux-x86_64-2.7/tvm
creating build/lib.linux-x86_64-2.7/tvm/_ffi
creating build/lib.linux-x86_64-2.7/tvm/_ffi/_cy2
c++ -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -fno-strict-aliasing -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -Wdate-time -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security -Wl,-Bsymbolic-functions -Wl,-z,relro -Wdate-time -D_FORTIFY_SOURCE=2 -g -fstack-protector-strong -Wformat -Werror=format-security build/temp.linux-x86_64-2.7/tvm/_ffi/_cython/core.o -o build/lib.linux-x86_64-2.7/tvm/_ffi/_cy2/core.so
copying build/lib.linux-x86_64-2.7/tvm/_ffi/_cy2/core.so -> tvm/_ffi/_cy2
cd python; python3 setup.py build_ext --inplace
running build_ext
building 'tvm._ffi._cy3.core' extension
creating build/temp.linux-x86_64-3.6
creating build/temp.linux-x86_64-3.6/tvm
creating build/temp.linux-x86_64-3.6/tvm/_ffi
creating build/temp.linux-x86_64-3.6/tvm/_ffi/_cython
x86_64-linux-gnu-gcc -pthread -DNDEBUG -g -fwrapv -O2 -Wall -Wstrict-prototypes -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -I../include/ -I../3rdparty/dmlc-core/include -I../3rdparty/dlpack/include -I/usr/include/python3.6m -c tvm/_ffi/_cython/core.cpp -o build/temp.linux-x86_64-3.6/tvm/_ffi/_cython/core.o
cc1plus: warning: command line option '-Wstrict-prototypes' is valid for C/ObjC but not for C++
creating build/lib.linux-x86_64-3.6
creating build/lib.linux-x86_64-3.6/tvm
creating build/lib.linux-x86_64-3.6/tvm/_ffi
creating build/lib.linux-x86_64-3.6/tvm/_ffi/_cy3
x86_64-linux-gnu-g++ -pthread -shared -Wl,-O1 -Wl,-Bsymbolic-functions -Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,-Bsymbolic-functions -Wl,-z,relro -g -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 build/temp.linux-x86_64-3.6/tvm/_ffi/_cython/core.o -o build/lib.linux-x86_64-3.6/tvm/_ffi/_cy3/core.cpython-36m-x86_64-linux-gnu.so
copying build/lib.linux-x86_64-3.6/tvm/_ffi/_cy3/core.cpython-36m-x86_64-linux-gnu.so -> tvm/_ffi/_cy3
/usr/local/lib/python3.6/dist-packages/setuptools/dist.py:398: UserWarning: Normalizing '0.5.dev' to '0.5.dev0'
  normalized_version,
Running unittest...
Failure: NameError (name 'avg_pool2d_alter_layout' is not defined) ... ERROR
Failure: ImportError (cannot import name cpp) ... ERROR

======================================================================
ERROR: Failure: NameError (name 'avg_pool2d_alter_layout' is not defined)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/dist-packages/nose/loader.py", line 418, in loadTestsFromName
    addr.filename, addr.module)
  File "/usr/local/lib/python2.7/dist-packages/nose/importer.py", line 47, in importFromPath
    return self.importFromDir(dir_path, fqname)
  File "/usr/local/lib/python2.7/dist-packages/nose/importer.py", line 94, in importFromDir
```

    mod = load_module(part_fqname, fh, filename, desc)

  File "/workspace/vta/tests/python/unittest/test_environment.py", line 1, in <module>

    import vta

  File "/workspace/vta/python/vta/__init__.py", line 20, in <module>

    from . import top

  File "/workspace/vta/python/vta/top/__init__.py", line 3, in <module>

    from .vta_conv2d import packed_conv2d, schedule_packed_conv2d

  File "/workspace/vta/python/vta/top/vta_conv2d.py", line 8, in <module>

    import topi

  File "/workspace/topi/python/topi/__init__.py", line 26, in <module>

    from . import cuda

  File "/workspace/topi/python/topi/cuda/__init__.py", line 14, in <module>

    from .pooling import schedule_pool, schedule_global_pool

  File "/workspace/topi/python/topi/cuda/pooling.py", line 136, in <module>

    @avg_pool2d_alter_layout.register(["cuda"])

NameError: name 'avg_pool2d_alter_layout' is not defined



======================================================================

ERROR: Failure: ImportError (cannot import name cpp)

----------------------------------------------------------------------

Traceback (most recent call last):

  File "/usr/local/lib/python2.7/dist-packages/nose/loader.py", line 418, in loadTestsFromName

    addr.filename, addr.module)

  File "/usr/local/lib/python2.7/dist-packages/nose/importer.py", line 47, in importFromPath

    return self.importFromDir(dir_path, fqname)

  File "/usr/local/lib/python2.7/dist-packages/nose/importer.py", line 94, in importFromDir

    mod = load_module(part_fqname, fh, filename, desc)

  File "/workspace/vta/tests/python/unittest/test_vta_insn.py", line 4, in <module>

    import topi

  File "/workspace/topi/python/topi/__init__.py", line 16, in <module>

    from . import cpp

ImportError: cannot import name cpp



----------------------------------------------------------------------

Ran 2 tests in 0.237s



FAILED (errors=2)

Terminated

script returned exit code 255
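The first failure in the log comes from `@avg_pool2d_alter_layout.register(["cuda"])` being evaluated at import time before the name `avg_pool2d_alter_layout` exists in that module's namespace. As a minimal sketch (all names here are illustrative, not TOPI's actual implementation), the generic-dispatch decorator pattern behind that line looks like this, and only works when the generic function is defined or imported first:

```python
# Hypothetical sketch of a per-target dispatch decorator. The key point:
# @<generic>.register(...) runs when the module is imported, so the generic
# must already be defined -- otherwise you get exactly the NameError above.

def generic(fallback):
    """Turn `fallback` into a dispatchable function with per-target overrides."""
    registry = {}

    def dispatch(target, *args, **kwargs):
        impl = registry.get(target, fallback)
        return impl(*args, **kwargs)

    def register(targets):
        def _do_register(func):
            for t in targets:
                registry[t] = func
            return func
        return _do_register

    dispatch.register = register
    return dispatch

@generic
def avg_pool2d_alter_layout(data):
    return "generic:" + data  # default implementation

# Works because avg_pool2d_alter_layout is defined above. Placing this
# decorator before the definition (or forgetting the import that provides
# it) reproduces the NameError from the log.
@avg_pool2d_alter_layout.register(["cuda"])
def _avg_pool2d_alter_layout_cuda(data):
    return "cuda:" + data

print(avg_pool2d_alter_layout("cuda", "x"))  # cuda:x
print(avg_pool2d_alter_layout("llvm", "x"))  # generic:x
```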

Support ONNX Operators: ScatterND and Range for YOLOv5

Hi,

I exported YOLOv5 to ONNX and then tried to compile the ONNX model with SageMaker Neo. I got the following error message:

```
ClientError: OperatorNotImplemented:('The following operators are not supported for frontend ONNX: Range, ScatterND")
```

You can reproduce the error via this notebook link
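A quick local check can surface this class of error before submitting a Neo compilation job: enumerate the operator types in the exported graph and diff them against the ops the frontend handles. With the real `onnx` package the op set would come from `{node.op_type for node in onnx.load("yolov5.onnx").graph.node}`; in this sketch both the op list and `SUPPORTED_OPS` are illustrative stand-ins, not Neo's actual coverage list.

```python
# Hypothetical subset of frontend-supported ONNX ops (illustration only;
# the real list depends on the Neo/TVM frontend version).
SUPPORTED_OPS = {"Conv", "Relu", "MaxPool", "Concat", "Reshape", "Sigmoid"}

def unsupported_ops(model_op_types, supported=SUPPORTED_OPS):
    """Return, sorted, the operator types the frontend would reject."""
    return sorted(set(model_op_types) - supported)

# Op types a YOLOv5 ONNX export might contain (illustrative):
ops = ["Conv", "Sigmoid", "Concat", "Range", "ScatterND", "Reshape"]
print(unsupported_ops(ops))  # ['Range', 'ScatterND']
```

Running this before compilation tells you which operators need rewriting or a newer frontend, rather than discovering them one `OperatorNotImplemented` error at a time.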
