
WALKTHROUGH-MULTI-DEVICE-PLUGIN-AND-THE-DEVCLOUD

This content was created for the Intel® Edge AI for IoT Developers Udacity Nanodegree.

This notebook is a demonstration showing you how to request an edge node with an Intel i5 CPU and load a model on the CPU, GPU, and VPU (Intel® Neural Compute Stick 2) at the same time using the Multi Device Plugin on Udacity's workspace integration with Intel's DevCloud.

Below are the six steps we'll walk through in this notebook:

  1. Creating a Python script to load the model
  2. Creating a job submission script
  3. Submitting a job using the qsub command
  4. Checking the job status using the liveQStat function
  5. Retrieving the output files using the getResults function
  6. Viewing the resulting output

IMPORTANT: Set up paths so we can run DevCloud utilities. You must run this every time you enter a Workspace session.


%env PATH=/opt/conda/bin:/opt/spark-2.4.3-bin-hadoop2.7/bin:/opt/conda/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/intel_devcloud_support
import os
import sys
sys.path.insert(0, os.path.abspath('/opt/intel_devcloud_support'))
sys.path.insert(0, os.path.abspath('/opt/intel'))

The Model

We will be using the vehicle-license-plate-detection-barrier-0106 model for this exercise.

Remember to use the appropriate model precisions for each device:

* IGPU - <code>FP16</code>
* VPU - <code>FP16</code>
* CPU - It is preferred to use <code>FP32</code>, but we have to use <code>FP16</code> since the <strong>GPU</strong> and <strong>VPU</strong> use <code>FP16</code>
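The precision rule above can be sketched as a small helper function (hypothetical, not part of the walkthrough's code): FP32 is preferred for a CPU-only deployment, but any GPU or VPU in the device list forces a common FP16 model.

```python
def pick_precision(devices):
    """Return the model precision to use for a list of target devices.

    FP32 is preferred on a CPU-only deployment, but any GPU or VPU in
    the MULTI list forces a common FP16 model for all devices.
    """
    return "FP32" if set(devices) == {"CPU"} else "FP16"

print(pick_precision(["CPU"]))                   # FP32
print(pick_precision(["MYRIAD", "GPU", "CPU"]))  # FP16
```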

The model has already been downloaded for you in the /data/models/intel directory on Intel's DevCloud.

We will be running inference on an image of a car. The path to the image is /data/resources/car.png.

Step 1: Creating a Python Script

The first step is to create a Python script that you can use to load the model and perform an inference. I have used the %%writefile magic command to create a Python file called load_model_to_device.py. This will create a new Python file in the working directory.

Note: The advantage of using the Multi device plugin is that it does not require us to change our application code. So we will be using the same Python script we used in the previous VPU walkthrough.
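To make the device argument concrete: the only thing that changes from the earlier single-device runs is the --device value, which names the MULTI plugin followed by a prioritized, comma-separated device list. A quick sketch of that format:

```python
device_arg = "MULTI:MYRIAD,GPU,CPU"

# The plugin name and its prioritized device list are separated by ':'
plugin_name, device_list = device_arg.split(":", 1)
priorities = device_list.split(",")

print(plugin_name)  # MULTI
print(priorities)   # ['MYRIAD', 'GPU', 'CPU']
```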


%%writefile load_model_to_device.py

import argparse
import time

from openvino.inference_engine import IENetwork, IEPlugin


def main(args):
    model = args.model_path
    model_weights = model + '.bin'
    model_structure = model + '.xml'

    start = time.time()
    model = IENetwork(model_structure, model_weights)

    plugin = IEPlugin(device=args.device)

    # Load the network onto the device(s) and report the load time
    net = plugin.load(network=model, num_requests=1)
    print(f"Time taken to load model = {time.time()-start} seconds")


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--model_path', required=True)
    parser.add_argument('--device', default=None)

    args = parser.parse_args()
    main(args)

Step 2: Creating a Job Submission Script

To submit a job to the DevCloud, we need to create a shell script. Similar to the Python script above, I have used the %%writefile magic command to create a shell script called load_multi_model_job.sh.

This script does a few things.

  1. Writes stdout and stderr to their respective .log files
  2. Creates the /output directory
  3. Creates DEVICE and MODELPATH variables and assigns their value as the first and second argument passed to the shell script
  4. Calls the Python script using the MODELPATH and DEVICE variable values as the command line arguments
  5. Changes to the /output directory
  6. Compresses the stdout.log and stderr.log files to output.tgz

Note: Just like our Python script, our job submission script also does not need to change when using the Multi device plugin. Step 3, where we submit our job to the DevCloud, is where we have to make a minor change.


%%writefile load_multi_model_job.sh

exec 1>/output/stdout.log 2>/output/stderr.log

mkdir -p /output

DEVICE=$1
MODELPATH=$2

# Run the load model python script
python3 load_model_to_device.py  --model_path ${MODELPATH} --device ${DEVICE}

cd /output

tar zcvf output.tgz stdout.log stderr.log

Step 3: Submitting a Job to Intel's DevCloud

The code below will submit a job to an IEI Tank-870 edge node with the following three devices:

  • Intel Core i5 6500TE
  • Intel HD Graphics 530
  • Intel Neural Compute Stick 2

Note: We'll pass in a device type argument of MULTI:MYRIAD,GPU,CPU to load our model on all three devices at the same time. We'll need to use FP16 as the model precision since we're loading our model on a GPU and VPU, even though the recommended model precision is FP32 for CPU.

The !qsub command takes a few command line arguments:

  1. The first argument is the shell script filename - load_multi_model_job.sh. This should always be the first argument.
  2. The -d flag designates the directory where we want to run our job. We'll be running it in the current directory, denoted by ..
  3. The -l flag designates the node and quantity we want to request. The default quantity is 1, so the 1 after nodes is optional.
  4. The -F flag lets us pass in a string with all the command line arguments we want to pass to our Python script.
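As an illustration of how the -F string reaches the job script: the quoted string is split on whitespace, so its first token becomes $1 (DEVICE) and the remainder becomes $2 (MODELPATH) in load_multi_model_job.sh. A sketch of that split in Python:

```python
# The quoted string passed to qsub via -F
f_string = ("MULTI:MYRIAD,GPU,CPU "
            "/data/models/intel/vehicle-license-plate-detection-barrier-0106"
            "/FP16/vehicle-license-plate-detection-barrier-0106")

# The job script receives these as $1 and $2
device, model_path = f_string.split(" ", 1)

print(device)                          # MULTI:MYRIAD,GPU,CPU
print(model_path.rsplit("/", 1)[-1])   # vehicle-license-plate-detection-barrier-0106
```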

Note: There is an optional flag, -N, you may see in a few exercises. This is an argument that only works on Intel's DevCloud that allows you to name your job submission. This argument doesn't work in Udacity's workspace integration with Intel's DevCloud.

In the cell below, we assign the returned value of the !qsub command to a variable job_id_core. This value is a list with a single string.

Once the cell is run, this queues up a job on Intel's DevCloud and prints out the first value of this array below the cell, which is the job id.


job_id_core = !qsub load_multi_model_job.sh -d . -l nodes=1:tank-870:i5-6500te:intel-hd-530:intel-ncs2 -F "MULTI:MYRIAD,GPU,CPU /data/models/intel/vehicle-license-plate-detection-barrier-0106/FP16/vehicle-license-plate-detection-barrier-0106" -N store_core
print(job_id_core[0])

Step 4: Running liveQStat

Running the liveQStat function, we can see the live status of our job. Running this function will lock the cell and poll the job status 10 times.

  • Q status means our job is currently awaiting an available node
  • R status means our job is currently running on the requested node

Note: In the demonstration, it is pointed out that W status means your job is done. This is no longer accurate. Once a job has finished running, it will no longer show in the list when running the liveQStat function.


import liveQStat
liveQStat.liveQStat()

Step 5: Retrieving Output Files

In this step, we'll be using the getResults function to retrieve our job's results. This function takes a few arguments.

  1. job id - This value is stored in the job_id_core variable we created during Step 3. Remember that this value is a list with a single string, so we access the string value using job_id_core[0].
  2. filename - This value should match the filename of the compressed file we have in our load_multi_model_job.sh shell script. In this example, filename should be set to output.tgz.
  3. blocking - This is an optional argument and is set to False by default. If this is set to True, the cell is locked while waiting for the results to come back. There is a status indicator showing the cell is waiting on results.

Note: The getResults function is unique to Udacity's workspace integration with Intel's DevCloud. When working on Intel's DevCloud environment, your job's results are automatically retrieved and placed in your working directory.


import get_results
get_results.getResults(job_id_core[0], filename="output.tgz", blocking=True)

Step 6: Viewing the Outputs

In this step, we unpack the compressed file using !tar zxf and read the contents of the log files by using the !cat command.

stdout.log should contain the printout of the print statement in our Python script.

!tar zxf output.tgz
!cat stdout.log
!cat stderr.log

Adaptation as a Repository: Andrés R. Bucheli.
