yolov8-with-azureml's Introduction

Train YOLOv8 with AzureML

This repository provides an example showing how to train the YOLOv8 model with the az cli or the Python SDK.

az cli

You can find detailed instructions for training the YOLOv8 model with the az cli in this document.

Azure Machine Learning Python SDK

Here is a notebook showing how to train the YOLOv8 model with the Python SDK.

Deploy model for inference

Register the model from the workspace UI

You can register the model produced by a training job. Go to your job's Overview and select "Register model". Choose a model of type "Unspecified type", enable "Show all default outputs", and select best.pt. (Note that your training environment needs azureml-mlflow==1.52.0 and mlflow==2.4.2 to enable MLflow logging and make the model retrievable.)
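The same registration can also be done from the CLI instead of the workspace UI. This is a sketch, assuming your training job's name and that the model file was logged as best.pt; the model name is an arbitrary choice:

```shell
# Sketch: register best.pt from a completed job's outputs via the az cli.
# JOB_NAME and the model name are assumptions -- use your own values.
JOB_NAME=<your-training-job-name>
az ml model create --name yolov8-model --version 1 --type custom_model \
    --path "azureml://jobs/$JOB_NAME/outputs/artifacts/paths/best.pt"
```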

Create the deployment

In azureml/deployment.yaml, specify your model.

You can either specify a registered model:

model: azureml:<your-model-name>:<version>

Or specify the relative path of a local .pt file:

model:
  path: <model-relative-path-to-azureml-folder>

Note that you might need to increase request_timeout_ms in deployment.yaml if your inference takes a long time.
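Putting these pieces together, a deployment.yaml might look like the following. This is an illustrative sketch, not the repository's exact file: the deployment name, environment, instance size, and timeout value are all assumptions.

```yaml
# Illustrative sketch of azureml/deployment.yaml -- names, sizes, and the
# timeout value are assumptions, not the repository's exact settings.
$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineDeployment.schema.json
name: blue
endpoint_name: <your-endpoint-name>
model:
  path: <model-relative-path-to-azureml-folder>
environment: azureml:<your-environment-name>:<version>
instance_type: Standard_DS3_v2
instance_count: 1
request_settings:
  request_timeout_ms: 60000   # increase if inference takes a long time
```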

Deploy your model for inference

To deploy your endpoint in your azureml workspace:

Configure your default resource group and AzureML workspace:

az configure --defaults group=$YOUR_RESOURCE_GROUP workspace=$YOUR_AZ_ML_WORKSPACE
./deploy-endpoint.sh
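For orientation, a deploy script like deploy-endpoint.sh typically creates the endpoint and then the deployment from their YAML definitions. This is a hypothetical sketch; the actual script and file names in the repository may differ:

```shell
#!/usr/bin/env bash
# Hypothetical sketch of what deploy-endpoint.sh does -- the endpoint
# name and YAML file paths are assumptions.
set -euo pipefail

ENDPOINT_NAME=yolov8-endpoint

# Create the endpoint, then the "blue" deployment defined in deployment.yaml
az ml online-endpoint create --name "$ENDPOINT_NAME" -f azureml/endpoint.yaml
az ml online-deployment create --name blue --endpoint-name "$ENDPOINT_NAME" \
    -f azureml/deployment.yaml
```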

Note your endpoint name and scoring URI (you can retrieve them from the Azure workspace).

Test the endpoint and allocate traffic

To invoke the endpoint with an HTTP client, you need to allocate traffic to it. (For more information, see this doc.) Check the current traffic allocation:

az ml online-endpoint show -n $ENDPOINT_NAME --query traffic

You can see that 0% is allocated to the blue deployment, so let's allocate 100% of the traffic to our single blue deployment:

az ml online-endpoint update --name $ENDPOINT_NAME --traffic "blue=100"

Now you should be able to call your endpoint with curl. Retrieve your endpoint key from the Azure ML workspace under Endpoints > Consume > Basic consumption info.

ENDPOINT_KEY=$YOUR_ENDPOINT_KEY
curl --request POST "$SCORING_URI" --header "Authorization: Bearer $ENDPOINT_KEY" --header 'Content-Type: application/json' --data '{"image_url": "https://ultralytics.com/images/bus.jpg"}'

yolov8-with-azureml's People

Contributors

ouphi, skslalom


yolov8-with-azureml's Issues

How can I use your work for yolov5 model deployment

Hi,

Thanks for your work; it looks really simple to understand and well-managed.
I am trying to deploy my custom-trained yolov5 model on Azure, and I ran into many issues with the different methods I found online. Can you guide me on how I can use your work to deploy my yolov5 model, and what changes I need to make? Also, in the deployment.yaml file (line 5), should I write it as (azureml: model_name:1) or (azureml:model_name:1)?

Thanks a lot for your work

Where can I find the YOLOv8 prediction result?

Thank you so much for your work. It is extremely useful for me.

I am a newbie to Azure ML, and I have been learning step by step from your tutorial. I have trained a model, and now I want to use it to predict on some images. I want to know how to obtain the image with predictions (i.e., the image marked with bounding boxes) using the Python SDK or the command line. I tried to run predictions with the code below, but I can't find the saved image. I sincerely appreciate your input.

from azure.ai.ml import command
from azure.ai.ml import Input

job3 = command(
    code="training-code",
    command="""
        python ts.py 
    """,
    environment="azureml:yolov8-environment:1",
    compute="CPU-mini",
    display_name="yolov8-predict",
    experiment_name="yolov8-experiment"
)
ts.py:

from ultralytics import YOLO

# Load a model
model = YOLO('best.pt')  # pretrained YOLOv8n model

# Run batched inference on a list of images
results = model(['test.png'],save=True,project="xxx", name="xxx")  # return a list of Results objects

Here is the output and log:

0: 352x640 2 poles, 1 vegetation_encroachment, 73.5ms
Speed: 5.0ms preprocess, 73.5ms inference, 1.6ms postprocess per image at shape (1, 3, 352, 640)
Results saved to /xxx/xxx
