
SageMaker TensorFlow Serving Container

SageMaker TensorFlow Serving Container is an open source project that builds Docker images for running TensorFlow Serving on Amazon SageMaker.

This documentation covers building and testing these docker images.

For information about using TensorFlow Serving on SageMaker, see: Deploying to TensorFlow Serving Endpoints in the SageMaker Python SDK documentation.

For notebook examples, see: Amazon SageMaker Examples.

Table of Contents

  1. Getting Started
  2. Building your image
  3. Running the tests

Getting Started

Prerequisites

Make sure you have installed all of the following prerequisites on your development machine:

For testing, you will also need:

To test GPU images locally, you will also need:

Note: Some of the build and test scripts interact with resources in your AWS account. Be sure to set your default AWS credentials and region using aws configure before using these scripts.
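
For example, running aws configure prompts for each value interactively (the values shown here are placeholders, not real credentials):

aws configure
# AWS Access Key ID [None]: AKIA...
# AWS Secret Access Key [None]: ...
# Default region name [None]: us-west-2
# Default output format [None]: json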

Building your image

Amazon SageMaker uses Docker containers to run all training jobs and inference endpoints.

The Docker images are built from the Dockerfiles in docker/.

The Dockerfiles are grouped based on the version of TensorFlow Serving they support. Each supported processor type (e.g. "cpu", "gpu", "eia") has a different Dockerfile in each group.
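
As a hypothetical illustration (the exact layout may differ between releases), the grouping looks like this:

docker/
└── 1.11/
    ├── Dockerfile.cpu
    ├── Dockerfile.gpu
    └── Dockerfile.eia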

To build an image, run the ./scripts/build.sh script:

./scripts/build.sh --version 1.11 --arch cpu
./scripts/build.sh --version 1.11 --arch gpu
./scripts/build.sh --version 1.11 --arch eia
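
If the build succeeds, the image should appear in your local Docker image list. A quick way to check (the image name here is an assumption based on the repository name):

docker images | grep sagemaker-tensorflow-serving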

If you are testing locally, building the image is enough. But if you want to use your updated image in SageMaker, you need to publish it to an ECR repository in your account. The ./scripts/publish.sh script makes that easy:

./scripts/publish.sh --version 1.11 --arch cpu
./scripts/publish.sh --version 1.11 --arch gpu
./scripts/publish.sh --version 1.11 --arch eia

Note: this will publish to ECR in your default region. Use the --region argument to specify a different region.
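
For example, to publish the CPU image to us-west-2 (the region is just an illustration):

./scripts/publish.sh --version 1.11 --arch cpu --region us-west-2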

Running your image in local docker

You can also run your container locally in Docker to test different models and input inference requests by hand. Standard docker run commands (or nvidia-docker run for GPU images) will work for this, or you can use the provided start.sh and stop.sh scripts:

./scripts/start.sh [--version x.xx] [--arch cpu|gpu|eia|...]
./scripts/stop.sh [--version x.xx] [--arch cpu|gpu|eia|...]
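
For example, to start the 1.11 CPU image built above, send some requests, and then shut it down:

./scripts/start.sh --version 1.11 --arch cpu
# ... send test requests as shown below ...
./scripts/stop.sh --version 1.11 --arch cpu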

When the container is running, you can send test requests to it using any HTTP client. Here's an example using the curl command:

curl -X POST --data-binary @test/resources/inputs/test.json \
     -H 'Content-Type: application/json' \
     -H 'X-Amzn-SageMaker-Custom-Attributes: tfs-model-name=half_plus_three' \
     http://localhost:8080/invocations

Additional curl examples can be found in ./scripts/curl.sh.
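
You can also verify that the container is up using the standard SageMaker health check endpoint; a healthy container responds with HTTP 200:

curl http://localhost:8080/ping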

Running the tests

The package includes automated tests and code checks. The tests use Docker to run the container image locally, and do not access resources in AWS. You can run the tests and static code checkers using tox:

tox
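
You can also pass arguments through tox to pytest to run a subset of the tests. A minimal sketch, assuming a py36 environment is defined in tox.ini and the local integration test path below exists:

tox -e py36 -- test/integration/local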

To test against Elastic Inference with an accelerator, you will need an AWS account. Publish your built EIA image to an ECR repository in your account, then run the following command:

tox -e py36 -- test/integration/sagemaker/test_ei.py
    [--repo <ECR_repository_name>]
    [--instance-types <instance_type>,...]
    [--accelerator-type <accelerator_type>]
    [--versions <version>,...]

For example:

tox -e py36 -- test/integration/sagemaker/test_ei.py \
    --repo sagemaker-tensorflow-serving-eia \
    --instance-types ml.m5.xlarge \
    --accelerator-type ml.eia1.medium \
    --versions 1.12.0

Contributing

Please read CONTRIBUTING.md for details on our code of conduct, and the process for submitting pull requests to us.

License

This library is licensed under the Apache 2.0 License.
