
Workshop Custom Vision AI - Predict what kind of tools you have in your toolkit with Custom Vision AI

License: MIT License


Workshop: Azure Custom Vision - Find the Right Tool

Session Information

Session Title: Creating applications that can see, hear, speak or understand - using Microsoft Cognitive Services

Session Abstract: In this workshop you will be introduced to Microsoft Azure Cognitive Services, a range of offerings you can use to infuse intelligence and machine learning into your applications without needing to build the code from scratch. We will cover pre-trained AI APIs, such as Computer Vision, that are accessed via REST. Next we will dive into custom AI that uses transfer learning - Microsoft Azure Custom Vision - which enables you to provide a small amount of your own data to train an image classification model. We wrap up the workshop by building our custom-trained AI into an application using Logic Apps, a technology that is ideal for building data pipeline processes that work with your machine learning models.

Pre-requisites for your machine

  • Clone this repository to your local machine to get the images and code samples you need for the demos: git clone https://github.com/GlobalAICommunity/Workshop-CustomVisionAITools.git, or choose the green 'Clone or Download' button and then 'Download ZIP'
  • Azure Pass or Microsoft Azure Subscription
  • Laptop with a modern web browser (Google Chrome, Microsoft Edge)
  • Postman, API Development Environment - available on Windows, Linux and macOS

All demos and content have been tested on a Windows PC; however, everything should run on macOS and Linux machines as well. Please provide feedback on other operating systems via an issue or pull request.

Sections

  • Task 0: Microsoft Azure Cognitive Services - Computer Vision Go to Section
  • Task 1: Microsoft Azure Cognitive Services - Custom Vision Go to Section
  • Task 2: Build Custom AI into an Application - Azure Logic Apps Go to Section

Task 0: Microsoft Azure Cognitive Services - Computer Vision

Microsoft Azure Cognitive Services contain some pre-built models for the most typical tasks, such as object detection in pictures, speech recognition and synthesis, sentiment analysis and so on. Let us test the Computer Vision API service to see if it can recognize some specific objects in one particular problem domain: construction.

Suppose we need to create an application that recognizes 5 types of tools:

  • Drills
  • Hammers
  • Hard hats
  • Pliers
  • Screwdrivers

We will use tool images sourced from Wikimedia Commons that were used for the Ignite Tours.

In this repository, we have provided two sets of images for each of the 5 classes above:

  • training files that we will use for training our own custom model later
  • test images, which we will use to evaluate the model

Let us start by looking at how the pre-trained Computer Vision service sees our images:

  • Go to the home page of Computer Vision Service
  • Scroll down to See it in action section
  • Upload one of the gear pictures from our dataset by clicking the Browse button, or provide the URL of a picture directly from GitHub
  • Observe how the image has been classified:

Computer Vision Results

While some of the objects (such as helmets or drills) can be recognized by the pre-trained model, more specialized objects (like pliers, or even hammers) are not identified correctly.
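If you prefer to call the service programmatically rather than through the web demo, the sketch below shows roughly what the same request looks like against the Computer Vision Analyze REST API. It is a minimal sketch only: it assumes you have created a Computer Vision resource of your own, and the endpoint, key, and image path are placeholders you would need to replace.

```python
# Minimal sketch of an Analyze Image call against the Computer Vision REST API.
# The endpoint, key and image URL below are placeholders, not values from this workshop.
import requests

endpoint = "https://<your-region>.api.cognitive.microsoft.com"   # placeholder endpoint
subscription_key = "<your-computer-vision-key>"                  # placeholder key

analyze_url = f"{endpoint}/vision/v3.2/analyze"
headers = {"Ocp-Apim-Subscription-Key": subscription_key,
           "Content-Type": "application/json"}
params = {"visualFeatures": "Tags,Description"}
# Point at one of the tool images, e.g. a raw-file URL from this repository.
body = {"url": "https://raw.githubusercontent.com/GlobalAICommunity/Workshop-CustomVisionAITools/master/<path-to-a-test-image>.jpg"}

response = requests.post(analyze_url, headers=headers, params=params, json=body)
response.raise_for_status()
analysis = response.json()

print(analysis["description"]["captions"])          # best-guess caption for the image
print([tag["name"] for tag in analysis["tags"]])    # generic tags the pre-trained model detected
```

As with the web demo, you should see generic tags rather than the specific tool classes we care about, which is exactly the gap Custom Vision fills in the next task.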

Task 1: Microsoft Azure Cognitive Services - Custom Vision

Using the Microsoft Azure Custom Vision service you can build your own personalised image classification and object detection models with very little code. In this exercise we will create a tool classification model using the Custom Vision service.

Create Resource Group

First create a Resource Group.

  • Go to the Azure Portal main dashboard.
  • Click 'Create a Resource' in the top left
  • Search for 'Resource group'
  • Enter details to create:
    • A name for the resource group
    • Select the location
    • Click Create

Resource Group Details

Create Custom Vision instance

Now create a Custom Vision instance in your Azure account.

  • Go to your created Resource group
  • Click +Add
  • Search for Custom Vision
  • Click Create
  • Enter details to create:
    • A name for the service
    • Select your subscription
    • Select the data centre location (in this example West Europe, but you can select your own region)
    • Choose the S0 tier for both the 'Prediction pricing tier' and the 'Training pricing tier'. F0 is possible, but it causes an error in the Logic App part of the workshop.
    • Select your created Resource group and make sure it is in the same data centre location (in this case 'globalaibootcamp' in West Europe)
    • Click Create

Build Classifier

Now we can build our classifier. Navigate to https://www.customvision.ai and choose Sign in, then sign in with your Azure credentials.

Accept the terms and conditions box to continue.

Create Project

Once loaded choose 'New Project' which opens a window to enter details:

  • Name: choose a suitable name
  • Description: add a description of the classifier (example shown in image below)
  • Resource Group: choose the resource group you created your custom vision service in (example: workshop[SO])
  • Project Types: Classification
  • Classification Types: Multiclass (Single tag per image)
  • Domains: Retail (compact)
  • Export Capabilities: Basic platforms

Create Custom Vision Project

Click on 'Create Project' and you will land on an empty workspace.

Add Images

Now you can start adding images and assigning them tags to create our image classifier.

  • In the top left, select 'Add images', browse for the first folder of images from the training data - Drills - and select all the images in the folder.

  • Add the tag 'drills' to the drills images and select 'Upload files'

Once successful, you receive a confirmation message and you should see that your images are now available in the workspace.

Upload images of drills

Now complete the same steps of uploading and tagging images for the other 4 tool categories in the folder. For each type of tool:

  • Click 'Add images'
  • Select all the tool images
  • Add the class label (hard hat, pliers, etc.)
  • Choose upload
  • Confirm images uploaded into the workspace

Now you should have all categories uploaded. On the left-hand side you can see your tool classes, and you can filter the workspace by type of tool.
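The portal is the simplest route for this workshop, but the same tagging and uploading can also be scripted. The sketch below uses the Custom Vision training SDK for Python as an illustrative alternative; the endpoint, training key, project ID and local folder layout are assumptions you would need to adapt to your own setup.

```python
# Hypothetical sketch: creating tags and uploading training images with the
# Custom Vision training SDK (pip install azure-cognitiveservices-vision-customvision)
# instead of the portal. Endpoint, key, project ID and folder layout are placeholders.
import os
from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient
from azure.cognitiveservices.vision.customvision.training.models import (
    ImageFileCreateBatch, ImageFileCreateEntry)
from msrest.authentication import ApiKeyCredentials

endpoint = "https://westeurope.api.cognitive.microsoft.com"   # your training endpoint
training_key = "<your-training-key>"                          # from the Custom Vision resource
project_id = "<your-project-id>"                              # from the project settings page

trainer = CustomVisionTrainingClient(
    endpoint, ApiKeyCredentials(in_headers={"Training-key": training_key}))

# Assumes one local folder of training images per class, e.g. training/drills, training/hammers, ...
for class_name in ["drills", "hammers", "hard hats", "pliers", "screwdrivers"]:
    tag = trainer.create_tag(project_id, class_name)
    folder = os.path.join("training", class_name)
    entries = []
    for file_name in os.listdir(folder):
        with open(os.path.join(folder, file_name), "rb") as image:
            entries.append(ImageFileCreateEntry(
                name=file_name, contents=image.read(), tag_ids=[tag.id]))
    # Upload in batches of at most 64 images, the API's per-call limit.
    for i in range(0, len(entries), 64):
        trainer.create_images_from_files(
            project_id, ImageFileCreateBatch(images=entries[i:i + 64]))
```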

Train Model

Now you are ready to train your algorithm on the tool image data you have uploaded. Select the green 'Train' button in the top right corner. For this demo, you can use the "Fast Training" option.

Once the training process is complete it will take you to the Performance tab. Here you will receive machine learning evaluation metrics for your model. You also get information regarding class imbalance, as some tools had fewer images than others.

Evaluation Metrics

Test Model

Now that you have a model, you need to test it. Choose the 'Quick Test' button in the top right (next to the Train button); this will open a window where you can browse for a local image or enter a web URL.

Browse for an image in the test folder (images the model has not been trained on) and upload it. The image will be analysed and a result returned showing what tool the model thinks it is (prediction tag) and the model's confidence in its result (prediction probability).

Quick Test

Repeat this process for other images in the test folder, or search online for other images to see how the model performs.

Retrain Model

If you click on the 'Predictions' tab on the top toolbar, you should see all the test images you have submitted. This section is for re-training: as you get new data, you can add it to your model to improve its performance. The images are ordered by importance - the image which, if classified correctly, will add the most new information to the model is listed first, whereas the last image might be very similar to images the model has already learnt from, so it is less important to classify correctly.

To add these images to the model - select the first image, review the results the model provided and then in the 'My Tags' box enter the correct tag and click 'save and close'.

Add Re-training Tag

This image will disappear from your predictions workspace and be added to the training images workspace. Once you add a few new images and tags you can re-train the model to see if there are improvements.

Publish Model

To use this model within applications you need the prediction details. Go to the Performance tab in the top bar and click the Publish button.

Prediction model

Provide a name for your model, select the Prediction resource, and click Publish. Take note of your prediction resource, which you will need in the second task.

Prediction model resource

You can now select the Prediction URL button to get all the information you need to create a Postman call to your API.

Prediction model URL

Use Model with Postman

Open Postman and create a new collection. Postman Collection

Now create a new request. Postman Request

You can use the prior info to set the URL, the Header and the Body (using either an image file or an image URL):

Postman Header

Postman Body

Now click on Send... What kind of tool did you upload?
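If you would rather script the call than use Postman, here is a minimal Python sketch of the same prediction request. The endpoint, project ID, published iteration name, prediction key and image path are placeholders - copy the real values from the Prediction URL dialog.

```python
# Hypothetical sketch of the Custom Vision prediction call made from Python instead of Postman.
# All identifiers below are placeholders to replace with your own values.
import requests

prediction_endpoint = "https://<your-region>.api.cognitive.microsoft.com"
project_id = "<your-project-id>"
published_name = "<your-published-iteration-name>"
prediction_key = "<your-prediction-key>"

url = (f"{prediction_endpoint}/customvision/v3.0/Prediction/"
       f"{project_id}/classify/iterations/{published_name}/image")
headers = {"Prediction-Key": prediction_key,
           "Content-Type": "application/octet-stream"}

with open("test/hammer1.jpg", "rb") as image:   # placeholder path: any image from the test folder
    response = requests.post(url, headers=headers, data=image.read())
response.raise_for_status()

# Print each predicted tag with its probability.
for prediction in response.json()["predictions"]:
    print(f"{prediction['tagName']}: {prediction['probability']:.2f}")
```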

Great work! You have created your specialised tool classification model using the Azure Custom Vision service.

Task 2: Build Custom AI into an Application - Azure Logic Apps

In this section you will build an Azure Logic App to consume your Custom Vision AI tool classification application.

First you need to create two Azure Storage Accounts.

Create Storage accounts

Go to the previously created Resource group on the Azure portal and click "Add" in the top left corner to create a new resource. Select the Storage section and choose the first option, Storage Account.

Azure Storage Account

We are going to create two storage accounts:

  • one into which images will be dropped for processing (called globalaistor)
  • another to which the results will be uploaded after processing (called globalairesult)

Complete the process below twice so you have two storage accounts in total

On the storage account creation page enter options to setup your storage accounts:

  • Subscription: choose your subscription
  • Resource Group: choose the resource group you have been using for this workshop (e.g. globalaibootcamp)
  • Storage Account Name: (must be unique) enter an all-lowercase storage account name, such as globalaistor(yourname) or globalairesult(yourname) - append your name to the end of the storage account name so you know it's unique (remove the brackets)
  • Location: your closest data center (in this case West Europe)
  • Performance: Standard
  • Account Kind: Blob Storage
  • Replication: Locally-redundant storage (LRS)
  • Access Tier: Hot

Select Review + create, confirm validation is passed and then select Create

Create Azure Storage Account

Once your deployment is complete, go to the resource and review the account settings. Select Containers to review your empty blob storage account.

Select Blob Storage Account

We need to add a container to the storage account to store our images and results.

Select the + Container button and enter a name for the container:

  • an example for the globalaistor account would be images
  • an example for the globalairesult account would be results

For the public access level setting, select Container (anonymous read access for containers and blobs).

Create a container

Complete the above for the image storage account and the results storage account with the same settings.

Create Logic App

Now we will create a Logic App - this will connect your image storage account to your AI classification service and put the results in your results storage account

Head to the Azure Portal Homepage. You are going to use Event Grid, a service that detects triggers in an Azure subscription (in our case, when a new blob is created in your Azure Storage account). Before you build with this - you must register it.

Navigate to Subscriptions, select your subscription and find Resource Providers in the left pane. If it is not in the left panel, select "All services" and find it there. Once the resource providers are listed, search for "event" and select Microsoft.EventGrid.

If the status is not already Registered, select Register from the toolbar.

Register Event Grid selection

Once registered with a green tick - go back to your Resource Group. Select Add. Type Logic App and select the service.

Create the logic app by entering some setup detail like below:

  • Name: suitable name for the tool classification application
  • Subscription: Choose your subscription
  • Resource Group: (use existing, e.g. globalaibootcamp) select the resource group you have been working in for the whole workshop
  • Location: choose the data center closest to you
  • Log Analytics: off

Choose Create

Logic App

Once created, go to the resource. From here we can create our logic process. Select Logic app designer from the left menu and then the When an Event Grid resource event occurs option.

Logic App Trigger

Connect to Azure event grid by signing in using your Azure credentials.

Event Grid Sign In

Once connected and you see a green tick, select continue.

Select the options below:

  • Subscription: your subscription
  • Resource Type: Microsoft.Storage.StorageAccounts
  • Resource Name: choose your image storage account (e.g. globalaistor)
  • Event Type Item - 1: Microsoft.Storage.BlobCreated

Event Grid Options

Then choose + New step. Type Parse JSON and select the Parse JSON operator from the Data Operations category.

  • Content: select the box and from the Dynamic Content box on the right, select Body
  • Schema: select this box and enter the JSON schema provided in the logic-app-schema file, created by Amy Boyd.

Parse JSON

Then choose + New step. Type Custom Vision and select Classify an image URL (preview), as shown below.

Custom vision - classify image url

First, you have to create the Custom Vision Connection.

  • Connection Name: Give your connection a name
  • Prediction Key: Use the prediction key from your model (you can find this information under your model settings of the Custom Vision webpage) NOTE: make sure you use the right key
  • URL: Endpoint of the prediction service

Custom vision - create connection

Custom vision - create connection

Now you need to fill in the details of the Custom Vision process:

  • Project ID: Find the project ID from the settings logo in the top right of the Custom Vision webpage
  • Published Name: You can find the published name from the performance tab in the Custom Vision service, under "Published as"
  • Image URL: select the input box and on the right side select URL from Parse JSON outputs

Get URL for image

Click on +New step.

Type For each and select the grey control step called "For each". Once added, select the output from previous steps box and, from Dynamic content, select Predictions from the Parse JSON step.

For each prediction

Choose Add an action. Search for Control, select the Control icon and then, from the results, select Condition.

If Statement

In the Condition box, select choose a value. From Dynamic content find 'Classify an image url' and then Prediction Probability

Set the condition to be Prediction Probability greater than 0.7 (as shown below), as we only want to save results with a probability of 0.7 or higher.

Condition value
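For reference, this Condition step performs the same filtering you could express in a few lines of code over the prediction response: it keeps only predictions whose probability exceeds 0.7. The values below are illustrative only.

```python
# Illustration only: the probability filter the Logic App condition applies,
# expressed over a Custom Vision prediction response (shape as returned by the prediction API).
prediction_response = {
    "predictions": [
        {"tagName": "hammers", "probability": 0.93},   # example values, not real results
        {"tagName": "pliers", "probability": 0.04},
    ]
}

confident = [p for p in prediction_response["predictions"] if p["probability"] > 0.7]
for p in confident:
    print(f"{p['tagName']}:{p['probability']}")   # same "tagName:probability" format the result blob will contain
```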

In the If True box select Add an action.

Now you want to store that value in the blob storage you created earlier. Search for Azure Blob Storage and select the icon for Create Blob. In connection name enter results, select your results blob storage account name from the listed options and select Create.

Connect to Result Blob Storage

In the Folder path field, select the folder icon on the far right and choose the results container you created; the name will be populated for you.

Select the Blob name field and enter: result-(then from the Dynamic content box select id under Classify an image url)

Under Blob Content, select the field and in the Dynamic Content box on the right, select see more under Classify an image url. Then select tagName, enter a colon ":" and then select probability

Azure Blob Storage results options

Finally you are ready to save the Logic App in the top action bar. Once saved, let's test the app for the desired outcome. Select Run from the top action bar.

Test your Logic App

Now navigate to your images storage account (easy to find from the resource group section). Choose Containers and select the images container. In there you should see an Upload button. Upload one of the images from the test set folder of the tools dataset.

Upload Blob
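If you prefer to upload from code rather than the portal, a minimal sketch with the azure-storage-blob Python package is shown below. The connection string, container name and file name are placeholders; any image from the test folder will do, and creating the blob fires the same BlobCreated event that triggers the Logic App.

```python
# Hypothetical alternative to the portal upload: drop a test image into the "images"
# container with the azure-storage-blob package (pip install azure-storage-blob).
# The connection string (from the storage account's Access keys blade) and paths are placeholders.
from azure.storage.blob import BlobServiceClient

connection_string = "<your-images-storage-account-connection-string>"
service = BlobServiceClient.from_connection_string(connection_string)
blob = service.get_blob_client(container="images", blob="hammer-test.jpg")

with open("test/hammer-test.jpg", "rb") as image:     # placeholder path to a test image
    blob.upload_blob(image, overwrite=True)           # creates the blob, firing the BlobCreated event
```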

Once uploaded, navigate back to your Logic App main page and review the runs history section at the bottom of the page. Select the successful run and review the inputs and outputs.

Run History

All sections should have a green tick and you can select each one to view the input and output between the layers (this is also a great way to debug if it doesn't run as expected).

Logic app run successful

Finally navigate to your results blob storage account, select Containers, open the results container and review the file now created there. The contents of the file should look similar to the below - given the tool image input, the predicted class of the tool and also a confidence score:

Result

Clean up resources

Finally, if you don't expect to need these resources in the future, you can delete them by deleting the resource group. To do so, select the resource group for this workshop, select Delete, then confirm the name of the resource group to delete.

Contributors

  • mdragt
  • amynic
  • shwars
  • hnky
