
OpenVINO™ integration with TensorFlow

This repository contains the source code of OpenVINO™ integration with TensorFlow, designed for TensorFlow* developers who want to get started with OpenVINO™ in their inference applications. By adding just two lines of code, developers can take advantage of OpenVINO™ toolkit optimizations for TensorFlow inference across a wide range of Intel® compute devices.

import openvino_tensorflow
openvino_tensorflow.set_backend('<backend_name>')
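
For illustration, here is a minimal sketch of what those two lines look like inside an ordinary TensorFlow inference script. The MobileNetV2 model, the random input batch, and the 'CPU' backend choice are placeholders for this sketch, not a documented example from the package.

    import numpy as np
    import tensorflow as tf

    import openvino_tensorflow
    openvino_tensorflow.set_backend('CPU')  # placeholder backend name

    # Any trained TensorFlow/Keras model works; MobileNetV2 is just an example.
    model = tf.keras.applications.MobileNetV2(weights='imagenet')

    # A dummy batch standing in for real, preprocessed images.
    images = np.random.rand(1, 224, 224, 3).astype(np.float32)
    predictions = model.predict(images)
    print(predictions.shape)  # (1, 1000) class scores

Aside from the two added lines, this is a standard TensorFlow inference script.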

This product delivers OpenVINO™ inline optimizations which enhance inferencing performance with minimal code modifications. OpenVINO™ integration with TensorFlow accelerates inference across many AI models on a variety of Intel® silicon such as:

  • Intel® CPUs
  • Intel® integrated GPUs
  • Intel® Movidius™ Vision Processing Units - referred to as VPU
  • Intel® Vision Accelerator Design with 8 Intel Movidius™ MyriadX VPUs - referred to as VAD-M or HDDL

[Note: For maximum performance, efficiency, tooling customization, and hardware control, we recommend that developers adopt native OpenVINO™ APIs and runtime.]

Installation

Prerequisites

  • Ubuntu 18.04 or 20.04, macOS 11.2.3, or Windows¹ 10 (64-bit)
  • Python* 3.7, 3.8 or 3.9
  • TensorFlow* v2.9.2

¹ The Windows package supports only Python 3.9.

Check our Interactive Installation Table for a menu of installation options. The table will help you configure the installation process.

The OpenVINO™ integration with TensorFlow package comes with pre-built libraries of OpenVINO™ version 2022.2.0, so users do not have to install OpenVINO™ separately. This package supports:

  • Intel® CPUs

  • Intel® integrated GPUs

  • Intel® Movidius™ Vision Processing Units (VPUs)

    pip3 install -U pip
    pip3 install tensorflow==2.9.2
    pip3 install openvino-tensorflow==2.2.0

For installation instructions on Windows, please refer to OpenVINO™ integration with TensorFlow for Windows.

To use Intel® integrated GPUs for inference, make sure to install the Intel® Graphics Compute Runtime for OpenCL™ drivers.

To leverage Intel® Vision Accelerator Design with Movidius™ (VAD-M) for inference, install OpenVINO™ integration with TensorFlow alongside the Intel® Distribution of OpenVINO™ Toolkit.

For more details on installation, please refer to INSTALL.md; for build-from-source options, please refer to BUILD.md.

Configuration

Once you've installed OpenVINO™ integration with TensorFlow, you can use TensorFlow* to run inference using a trained model.

To see if OpenVINO™ integration with TensorFlow is properly installed, run

python3 -c "import tensorflow as tf; print('TensorFlow version: ',tf.__version__);\
            import openvino_tensorflow; print(openvino_tensorflow.__version__)"

This should produce an output like:

    TensorFlow version:  2.9.2
    OpenVINO integration with TensorFlow version: b'2.2.0'
    OpenVINO version used for this build: b'2022.2.0'
    TensorFlow version used for this build: v2.9.2
    CXX11_ABI flag used for this build: 1

By default, Intel® CPU is used to run inference. However, you can change the default to an Intel® integrated GPU or Intel® VPU. Invoke the following function to change the hardware on which inference runs.

openvino_tensorflow.set_backend('<backend_name>')

Supported backends include 'CPU', 'GPU', 'GPU_FP16', 'MYRIAD', and 'VAD-M'.

To determine what processing units are available on your system for inference, use the following function:

openvino_tensorflow.list_backends()
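
As a small sketch of how these calls might be combined, assuming list_backends() returns the available backend names as a list of strings:

    import openvino_tensorflow

    preferred = 'GPU'  # illustrative preference
    backends = openvino_tensorflow.list_backends()
    print('Available backends:', backends)

    # Fall back to the default CPU backend if the preferred device is absent.
    openvino_tensorflow.set_backend(preferred if preferred in backends else 'CPU')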

For further performance improvements, it is advised to set the environment variable OPENVINO_TF_CONVERT_VARIABLES_TO_CONSTANTS=1. For more API calls and environment variables, see USAGE.md.
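
The variable can be exported in the shell before launching your application; the sketch below sets it from Python instead, on the assumption that it takes effect as long as it is set before openvino_tensorflow is imported.

    import os

    # Assumption: the flag is read when openvino_tensorflow initializes,
    # so it is set before the import. Exporting it in the shell works too.
    os.environ['OPENVINO_TF_CONVERT_VARIABLES_TO_CONSTANTS'] = '1'

    import openvino_tensorflow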

Examples

To see what you can do with OpenVINO™ integration with TensorFlow, explore the demos located in the examples directory.

Docker Support

Dockerfiles for Ubuntu* 18.04, Ubuntu* 20.04, and TensorFlow* Serving are provided; they can be used to build runtime Docker* images for OpenVINO™ integration with TensorFlow on CPU, GPU, VPU, and VAD-M. For more details, see the docker readme.

Prebuilt Images

Try it on Intel® DevCloud

Sample tutorials are also hosted on Intel® DevCloud. The demo applications are implemented using Jupyter Notebooks. You can interactively execute them on Intel® DevCloud nodes and compare the results of OpenVINO™ integration with TensorFlow, native TensorFlow, and OpenVINO™.

License

OpenVINO™ integration with TensorFlow is licensed under Apache License Version 2.0. By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.

Support

Submit your questions, feature requests and bug reports via GitHub issues.

How to Contribute

We welcome community contributions to OpenVINO™ integration with TensorFlow. If you have an idea for improvement, share it via GitHub issues or submit a pull request.

Before you make your contribution, make sure you can build OpenVINO™ integration with TensorFlow and run all the examples with your fix/patch. If you want to introduce a large feature, create test cases for it. We will review your contribution as soon as possible, and if any additional fixes or modifications are necessary, we will guide you and provide feedback. Once we verify that your pull request meets these requirements, we will merge it into the repository.


* Other names and brands may be claimed as the property of others.
