
GitHub Action that allows you to deploy machine learning models in Azure Machine Learning.

License: MIT License

azure-machine-learning aml azure mlops machine-learning data-science

aml-deploy's Introduction


GitHub Action for deploying Machine Learning Models to Azure

Deprecation notice

This Action is deprecated. Instead, consider using the CLI (v2) to manage and interact with Azure Machine Learning endpoints and deployments in GitHub Actions.

Important: The CLI (v2) is not recommended for production use while in preview.

Usage

The Deploy Machine Learning Models to Azure action will deploy your model on Azure Machine Learning using GitHub Actions.

Get started today with a free Azure account!

This repository contains a GitHub Action for deploying machine learning models to Azure Machine Learning and creating a real-time endpoint for the model so it can be integrated into other systems. The endpoint can be hosted either on Azure Container Instances or on Azure Kubernetes Service.

This GitHub Action also allows you to provide a Python script that executes tests against the webservice endpoint after the model deployment has completed successfully. You can enable tests by setting the parameter test_enabled to true. In addition, you have to provide a Python script (default code/test/test.py) that includes a function (default def main(webservice):) containing the tests you want to execute against the service object. The Python script gets the webservice object injected. The action fails if the test script fails.
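As an illustration, a minimal test script of the shape described above might look like this (the payload and the assertion are placeholders for your own checks):

```python
# code/test/test.py -- minimal sketch of a test script for test_enabled.
# The payload and the expected response are placeholders for your model.
import json


def main(webservice):
    """Run smoke tests against the deployed endpoint.

    The action injects the webservice object, so you can call .run()
    directly; raise (or fail an assert) to fail the deployment.
    """
    payload = json.dumps({"data": [[1.0, 2.0, 3.0]]})
    response = webservice.run(payload)
    assert response is not None, "service returned no response"
```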

Dependencies on other GitHub Actions

  • Checkout Checks out your Git repository content into the GitHub Actions agent.
  • aml-workspace This action requires an Azure Machine Learning workspace to be present. You can either create a new one or reuse an existing one using the action.
  • aml-registermodel Before deploying, the model must be registered with Azure Machine Learning. If it is not already registered, you can use this action and pass its output to the deploy action.
  • aml-compute You don't need this if you want to host your endpoint on an ACI instance, but if you want to host your endpoint on an AKS cluster, you can manage the AKS cluster via the action.

Create an Azure Machine Learning workspace and deploy a machine learning model using GitHub Actions

This action is one in a series of actions that can be used to set up an ML Ops process. We suggest getting started with one of our template repositories, which will allow you to create an ML Ops process in less than 5 minutes.

  1. Simple template repository: ml-template-azure

    Go to this template and follow the getting started guide to set up an ML Ops process within minutes and learn how to use the Azure Machine Learning GitHub Actions in combination. This template demonstrates a very simple process for training and deploying machine learning models.

  2. Advanced template repository: mlops-enterprise-template

    This template demonstrates how approval processes can be included in the process and how training and deployment workflows can be split. It also shows how workflows (e.g. deployment) can be triggered from pull requests. More enhancements will be added to this template in the future to make it more enterprise-ready.

Example workflow

name: My Workflow
on: [push, pull_request]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
    # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
    - name: Check Out Repository
      id: checkout_repository
      uses: actions/checkout@v2

    # AML Workspace Action
    - uses: Azure/aml-workspace@v1
      id: aml_workspace
      with:
        azure_credentials: ${{ secrets.AZURE_CREDENTIALS }}
    
    # AML Register Model Action
    - uses: Azure/aml-registermodel@v1
      id: aml_registermodel
      with:
        azure_credentials: ${{ secrets.AZURE_CREDENTIALS }}
        run_id:  "<your-run-id>"
        experiment_name: "<your-experiment-name>"
    
    # Deploy model in Azure Machine Learning to ACI
    - name: Deploy model
      id: aml_deploy
      uses: Azure/aml-deploy@v1
      with:
        # required inputs
        azure_credentials: ${{ secrets.AZURE_CREDENTIALS }}
        model_name:  ${{ steps.aml_registermodel.outputs.model_name }}
        model_version: ${{ steps.aml_registermodel.outputs.model_version }}
        # optional inputs
        parameters_file: "deploy.json"

Inputs

Input Required Default Description
azure_credentials x - Output of az ad sp create-for-rbac --name <your-sp-name> --role contributor --scopes /subscriptions/<your-subscriptionId>/resourceGroups/<your-rg> --sdk-auth. This output should be stored as a secret in your repository.
model_name x - Name of the model that will be deployed. You will get it as an output of the register model action, as in the example workflow above.
model_version x - Version of the model that will be deployed. You will get it as an output of the register model action, as in the example workflow above.
parameters_file "deploy.json" We expect a JSON file in the .cloud/.azure folder in the root of your repository specifying your model deployment details. If you want to provide these details in a file other than "deploy.json", you need to provide this input in the action.

azure_credentials ( Azure Credentials )

Azure credentials are required to connect to your Azure Machine Learning Workspace. They may already have been created for an action you are using in your repository; if so, you can skip the steps below.

Install the Azure CLI on your computer or use the Cloud CLI and execute the following command to generate the required credentials:

# Replace {service-principal-name} with any name for your service principal, and {subscription-id} and {resource-group} with your Azure subscription ID and resource group name
az ad sp create-for-rbac --name {service-principal-name} \
                         --role contributor \
                         --scopes /subscriptions/{subscription-id}/resourceGroups/{resource-group} \
                         --sdk-auth

This will generate the following JSON output:

{
  "clientId": "<GUID>",
  "clientSecret": "<GUID>",
  "subscriptionId": "<GUID>",
  "tenantId": "<GUID>",
  (...)
}

Add this JSON output as a secret with the name AZURE_CREDENTIALS in your GitHub repository.

parameters_file (Parameters File)

The action tries to load a JSON file in the .cloud/.azure folder in your repository, which specifies details for the model deployment to your Azure Machine Learning Workspace. By default, the action expects a file with the name deploy.json. If your JSON file has a different name, you can specify it with this parameter. Note that none of these values are required; in their absence, defaults will be generated from a combination of the repository name and branch name.

A sample file can be found in this repository in the folder .cloud/.azure. There are separate parameters that are used for the ACI deployment, the AKS deployment and some that are common for both deployment options.
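As an illustration only (the values below are placeholders, not recommendations; the parameters themselves are documented in the tables that follow), a minimal deploy.json for an ACI deployment might look like:

```json
{
    "name": "mymodel-aci",
    "inference_source_directory": "code/deploy/",
    "inference_entry_script": "score.py",
    "conda_file": "environment.yml",
    "cpu_cores": 0.5,
    "memory_gb": 1.0,
    "test_enabled": true
}
```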

Common parameters
Parameter Required Allowed Values Default Description
name str <REPOSITORY_NAME>-<BRANCH_NAME> The name to give the deployed service. Must be unique to the workspace, only consist of lowercase letters, numbers, or dashes, start with a letter, and be between 3 and 32 characters long.
deployment_compute_target (for AKS deployment) str null Name of the compute target to deploy the webservice to. As Azure Container Instances has no associated ComputeTarget, leave this parameter as null to deploy to Azure Container Instances.
inference_source_directory str "code/deploy/" The path to the folder that contains all files to create the image.
inference_entry_script str "score.py" The path to a local file in your repository that contains the code to run for the image and score the data. This path is relative to the specified source directory. The python script has to define an init and a run function. A sample can be found in the template repositories.
conda_file str "environment.yml" The path to a local file in your repository containing a conda environment definition to use for the image. This path is relative to the specified source directory.
extra_docker_file_steps str null The path to a local file in your repository containing additional Docker steps to run when setting up image. This path is relative to the specified source directory.
enable_gpu bool false Indicates whether to enable GPU support in the image. The GPU image must be used on Microsoft Azure Services such as Azure Container Instances, Azure Machine Learning Compute, Azure Virtual Machines, and Azure Kubernetes Service.
cuda_version str "9.1" if enable_gpu is set to true The Version of CUDA to install for images that need GPU support. The GPU image must be used on Microsoft Azure Services such as Azure Container Instances, Azure Machine Learning Compute, Azure Virtual Machines, and Azure Kubernetes Service. Supported versions are 9.0, 9.1, and 10.0.
runtime str: "python" or "spark-py" "python" The runtime to use for the image.
custom_base_image str null A custom Docker image to be used as the base image. If no base image is given, the base image will be chosen based on the given runtime parameter.
model_data_collection_enabled bool false Whether or not to enable model data collection for this Webservice.
authentication_enabled bool false for ACI, true for AKS Whether or not to enable key auth for this Webservice.
app_insights_enabled bool false Whether or not to enable Application Insights logging for this Webservice.
cpu_cores float: ]0.0, inf[ 0.1 The number of CPU cores to allocate for this Webservice. Can be a decimal.
memory_gb float: ]0.0, inf[ 0.5 The amount of memory (in GB) to allocate for this Webservice. Can be a decimal.
delete_service_after_deployment bool false Indicates whether the service gets deleted after the deployment has completed successfully.
tags dict: {"": "", ...} null Dictionary of key value tags to give this Webservice.
properties dict: {"": "", ...} Dictionary of key value properties to give this Webservice. These properties cannot be changed after deployment, however new key value pairs can be added.
description str null A description to give this Webservice and image.
test_enabled bool false Whether to run tests for this model deployment and the created real-time endpoint.
test_file_path str "code/test/test.py" Path to the python script in your repository in which you define your own tests that you want to run against the webservice endpoint. The GitHub Action fails, if your script fails.
test_file_function_name str "main" Name of the function in your python script in your repository in which you define your own tests that you want to run against the webservice endpoint. The function gets the webservice object injected and allows you to run tests against the scoring uri. The GitHub Action fails, if your script fails.
profiling_enabled bool false Whether or not to profile this model for an optimal combination of cpu and memory. To use this functionality, you also have to provide a model profile dataset (profiling_dataset). If the parameter is not specified, the Action will try to use the sample input dataset that the model was registered with. Please note that profiling is a long-running operation and can take up to 25 minutes depending on the size of the dataset. More details can be found here.
profiling_dataset str null Name of the dataset that should be used for model profiling.
skip_deployment bool false Indicates whether the deployment to ACI or AKS should be skipped. This can be used in combination with create_image to only create a Docker image that can be used for further deployment.
create_image str: "docker", "function_blob", "function_http" or "function_service_bus_queue" null Indicates whether a Docker image should be created which can be used for further deployment.

Please visit this website and this website for more details.

ACI specific parameters

ACI is the default deployment resource. A sample file for an ACI deployment can be found in the .cloud/.azure folder.

Parameter Required Allowed Values Default Description
location str: supported region workspace location The Azure region to deploy this Webservice to.
ssl_enabled bool false Whether or not to enable SSL for this Webservice.
ssl_cert_pem_file str null A file path to a file containing cert information for SSL validation. Must provide all three CName, cert file, and key file to enable SSL validation.
ssl_key_pem_file str null A file path to a file containing key information for SSL validation. Must provide all three CName, cert file, and key file to enable SSL validation.
ssl_cname str null A CName to use if enabling SSL validation on the cluster. Must provide all three CName, cert file, and key file to enable SSL validation.
dns_name_label str null The DNS name label for the scoring endpoint. If not specified a unique DNS name label will be generated for the scoring endpoint.

Please visit this website for more details.

AKS Deployment

For the deployment of the model to AKS, you must configure an AKS resource and specify the name of the AKS cluster with the deployment_compute_target parameter. Additional parameters allow you to fine-tune your deployment on AKS with options like autoscaling and liveness probe requirements. These will be set to default values if not provided.

Parameter Required Allowed Values Default Description
gpu_cores int: [0, inf[ 1 The number of GPU cores to allocate for this Webservice.
autoscale_enabled bool true if num_replicas is null Whether to enable autoscale for this Webservice.
autoscale_min_replicas int: [1, inf[ 1 The minimum number of containers to use when autoscaling this Webservice.
autoscale_max_replicas int: [1, inf[ 10 The maximum number of containers to use when autoscaling this Webservice.
autoscale_refresh_seconds int: [1, inf[ 1 How often the autoscaler should attempt to scale this Webservice (in seconds).
autoscale_target_utilization int: [1, 100] 70 The target utilization (in percent out of 100) the autoscaler should attempt to maintain for this Webservice.
scoring_timeout_ms int: [1, inf[ 60000 A timeout in ms to enforce for scoring calls to this Webservice.
replica_max_concurrent_requests int: [1, inf[ 1 The number of maximum concurrent requests per replica to allow for this Webservice. Do not change this setting from the default value of 1 unless instructed by Microsoft Technical Support or a member of Azure Machine Learning team.
max_request_wait_time int: [0, inf[ 500 The maximum amount of time a request will stay in the queue (in milliseconds) before returning a 503 error.
num_replicas int null The number of containers to allocate for this Webservice. No default; if this parameter is not set, the autoscaler is enabled by default.
period_seconds int: [1, inf[ 10 How often (in seconds) to perform the liveness probe.
initial_delay_seconds int: [1, inf[ 310 The number of seconds after the container has started before liveness probes are initiated.
timeout_seconds int: [1, inf[ 1 The number of seconds after which the liveness probe times out.
success_threshold int: [1, inf[ 1 The minimum consecutive successes for the liveness probe to be considered successful after having failed.
failure_threshold int: [1, inf[ 3 When a Pod starts and the liveness probe fails, Kubernetes will try failureThreshold times before giving up.
namespace str null The Kubernetes namespace in which to deploy this Webservice: up to 63 lowercase alphanumeric ('a'-'z', '0'-'9') and hyphen ('-') characters. The first and last characters cannot be hyphens.
token_auth_enabled bool false Whether to enable Token authentication for this Webservice. If this is enabled, users can access this Webservice by fetching an access token using their Azure Active Directory credentials.

Please visit this website for more details. More information on autoscaling parameters can be found here and for the liveness probe here.
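As an illustration (the cluster name and values are placeholders), an AKS deployment might combine the common and AKS-specific parameters in deploy.json like this:

```json
{
    "name": "mymodel-aks",
    "deployment_compute_target": "my-aks-cluster",
    "autoscale_enabled": true,
    "autoscale_min_replicas": 1,
    "autoscale_max_replicas": 4,
    "scoring_timeout_ms": 60000,
    "token_auth_enabled": true
}
```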

Outputs

Output Description
service_scoring_uri Scoring URI of the webservice that was created (only provided if delete_service_after_deployment is set to False).
service_swagger_uri Swagger Uri of the webservice that was created (only provided if delete_service_after_deployment is set to False).
acr_address The DNS name or IP address (e.g. myacr.azurecr.io) of the Azure Container Registry (ACR) (only provided if create_image is not None).
acr_username The username for ACR (only provided if create_image is not None).
acr_password The password for ACR (only provided if create_image is not None).
package_location Full URI of the docker image (e.g. myacr.azurecr.io/azureml/azureml_*) (only provided if create_image is not None).
profiling_details Dictionary of details of the model profiling result. This will only be provided, if the model profiling method is used and successfully executed.

Environment variables

Certain parameters are considered secrets and should therefore be passed as environment variables from your secrets, if you want to use custom values.

Environment variable Required Allowed Values Default Description
CONTAINER_REGISTRY_ADRESS str null The DNS name or IP address of the Azure Container Registry (ACR). Required, if you specified a custom_base_image that is only available in your ACR.
CONTAINER_REGISTRY_USERNAME str null The username for ACR. Required, if you specified a custom_base_image that is only available in your ACR.
CONTAINER_REGISTRY_PASSWORD str null The password for ACR. Required, if you specified a custom_base_image that is only available in your ACR.
PRIMARY_KEY str null A primary auth key to use for this Webservice. If not specified, Azure will automatically assign a key.
SECONDARY_KEY str null A secondary auth key to use for this Webservice. If not specified, Azure will automatically assign a key.
CMK_VAULT_BASE_URL str null Customer managed Key Vault base url. This value is ACI specific.
CMK_KEY_NAME str null Customer managed key name. This value is ACI specific.
CMK_KEY_VERSION str null Customer managed key version. This value is ACI specific.

Other Azure Machine Learning Actions

  • aml-workspace - Connects to or creates a new workspace
  • aml-compute - Connects to or creates a new compute target in Azure Machine Learning
  • aml-run - Submits a ScriptRun, an Estimator or a Pipeline to Azure Machine Learning
  • aml-registermodel - Registers a model to Azure Machine Learning
  • aml-deploy - Deploys a model and creates an endpoint for the model

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

aml-deploy's People

Contributors

ashishonce, awmatheson, lostmygithubaccount, marvinbuss, microsoftopensource, pulkitaggarwl, vivishno


aml-deploy's Issues

Response Code: 502 - Content: b"'str' object has no attribute 'tolist'"

When I tried running this code I got this error; kindly assist:
import json

input_payload = json.dumps({
'data': [[34.927778, 0.24, 7.3899, 83, 16.1000, 1016.51, 1]],
'method': 'predict' # If you have a classification model, you can get probabilities by changing this to 'predict_proba'.
})

output = service.run(input_payload)

print(output)

error :

Received bad response from service. More information can be found by calling .get_logs() on the webservice object.
Response Code: 502
Headers: {'Connection': 'keep-alive', 'Content-Length': '38', 'Content-Type': 'text/html; charset=utf-8', 'Date': 'Tue, 26 Apr 2022 05:35:50 GMT', 'Server': 'nginx/1.10.3 (Ubuntu)', 'X-Ms-Request-Id': 'd84f9266-f41c-4587-a3e9-56b482cccc80', 'X-Ms-Run-Function-Failed': 'True'}
Content: b"'str' object has no attribute 'tolist'"


WebserviceException Traceback (most recent call last)
in
7 })
8
----> 9 output = service.run(input_payload)
10
11 print(output)

/anaconda/envs/azureml_py38/lib/python3.8/site-packages/azureml/core/webservice/aci.py in run(self, input_data)
398 return resp.json()
399 else:
--> 400 raise WebserviceException('Received bad response from service. More information can be found by calling '
401 '.get_logs() on the webservice object.\n'
402 'Response Code: {}\n'

WebserviceException: WebserviceException:
Message: Received bad response from service. More information can be found by calling .get_logs() on the webservice object.
Response Code: 502
Headers: {'Connection': 'keep-alive', 'Content-Length': '38', 'Content-Type': 'text/html; charset=utf-8', 'Date': 'Tue, 26 Apr 2022 05:35:50 GMT', 'Server': 'nginx/1.10.3 (Ubuntu)', 'X-Ms-Request-Id': 'd84f9266-f41c-4587-a3e9-56b482cccc80', 'X-Ms-Run-Function-Failed': 'True'}
Content: b"'str' object has no attribute 'tolist'"
InnerException None
ErrorResponse
{
"error": {
"message": "Received bad response from service. More information can be found by calling .get_logs() on the webservice object.\nResponse Code: 502\nHeaders: {'Connection': 'keep-alive', 'Content-Length': '38', 'Content-Type': 'text/html; charset=utf-8', 'Date': 'Tue, 26 Apr 2022 05:35:50 GMT', 'Server': 'nginx/1.10.3 (Ubuntu)', 'X-Ms-Request-Id': 'd84f9266-f41c-4587-a3e9-56b482cccc80', 'X-Ms-Run-Function-Failed': 'True'}\nContent: b"'str' object has no attribute 'tolist'""
}
}
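For what it's worth, this error usually indicates that the scoring script's run() received the raw JSON string and passed it straight to code expecting an array. A minimal sketch (an assumption about the scoring script, which is not shown in the issue) of a run() that parses the request first:

```python
# Sketch: parse the JSON request before using it. Passing raw_data
# (a str) straight to numpy/sklearn code is what produces
# "'str' object has no attribute 'tolist'".
import json


def run(raw_data):
    payload = json.loads(raw_data)  # str -> dict; the missing step
    data = payload["data"]          # now a list of rows, not a string
    # model.predict(data) would go here; return the row count instead
    return {"rows": len(data)}
```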

Incomplete sentence in patterns

@marvinbuss the syntax {"IS_SENT_START": False, "OP": "*"} works only for words that come before the word in the pattern, right? What about sentences where we can't make any assumptions about how the sentence will end, but still want to match as soon as the sentence starts with a certain set of words? One example: "I thought I share our newest features with you". The words we match are "I thought" and "our". Do I need to add something to make sure it matches even though there are words coming after the words in pattern3?

def _you_centered21(
    self,
    rule_id="R0061",
    description="Put your recipient in the center of your sentence. Simply replace the I focused sentence with a You focused one. That's how you catch their attention.",
    info="Try to be very precise on what you want and use the present form if possible.",
    example_pos="You can find a guide on how you can benefit from the most recent product update attached.",
    example_neg="I thought I share our newest features with you"
) -> None:
    """Check for negative and positive tone.

    rule_id (str): The ID of the rule.
    description (str): The description of the rule.
    info (str): Additional info about the rule.
    example_pos (str): Positive example for this rule.
    example_neg (str): Negative example for this rule.
    """
    # Define matcher
    matcher = Matcher(self.nlp.vocab)

    # Define patterns to match
    pattern1 = [
        {"LOWER": "I"}
    ]

    pattern2 = [
        {"LOWER": "thought"}
    ]

    pattern3 = [
        {"LOWER": "our"}, {"IS_SENT_START": False, "OP": "*"}
    ]

    # Add the patterns to the matcher and apply the matcher to the doc
    matcher.add("positivetone_pattern", [pattern1, pattern2, pattern3])
    matches = matcher(self.doc)

    # Merge the matched intervals
    intervals = [(start, end) for match_id, start, end in matches]
    merged = merge_intervals(intervals)

    # Add recommendations
    if len(merged) > 0:
        self._add_recommendation(
            rule_id=rule_id,
            description=description,
            info=info,
            example_pos=example_pos,
            example_neg=example_neg,
            token_intervals=merged
        )

Bug in AML Python SDK leads to problems in creating images in AML related ACR

This is in relation to #6 & #8.
When creating an image in the AML workspace's related ACR, multiple repos per image are created per aml-deploy call.
It seems that calling Model.package(...) is the core reason (create_image related).
Maybe you could state this somewhere in the documentation, or implement a workaround if the effort is low.
Here is an example of my ACR repo list:
azureml-repos-failure

Can not load the model in score.py

Hi,

I have a registered model that I want to deploy, but when trying to load it with:

in score.py:

from tensorflow.keras.models import load_model
model_path = os.path.join(os.getenv("AZUREML_MODEL_DIR"), 
                            "outputs", "best_model")
model = load_model(model_path)

I get an error that saved_model.pb is not found in the folder, although if you look in the registered model's Artifacts/ folder, the file saved_model.pb exists:

raise IOError("SavedModel file does not exist at: %s/{%s|%s}" %
OSError: SavedModel file does not exist at: /var/azureml-app/azureml-models/p8_model/27/outputs/best_model/{saved_model.pbtxt|saved_model.pb}

What can be the issue here?

I just realized that azureml copies the model folder hierarchy inconsistently from one deploy to another. How can I know which folder the final model will be in? (For one deploy it was outputs/best_model; for another it was without the outputs folder.)

Regards,
Voka

docker image created is not directly consumable

The Docker image creation option is available, but once the image is created and pushed into the container registry, deploying it does not work. The reason is that the image is just the model package image and does not contain the web server code, which comes from the mcr.microsoft.com/azureml/aci-init-container and mcr.microsoft.com/azureml-aci-frontend containers. These containers are deployed along with the model package by Azure ML when ACI deployment is selected. It also requires a lot of environment parameters written by Azure ML.

Right now, users cannot use the currently available actions to deploy this image, because all those details are hidden.

Can we provide a mechanism for users to deploy these images?

Handling service name exception internally

If a user gives a service name that does not conform to Azure's naming restrictions, should we make the whole name lowercase?

##[error]Model deployment failed with exception: WebserviceException:
Message: Error, provided service name is invalid. It must only consist of lowercase letters, numbers, or dashes, start with a
letter, end with a letter or number, and be between 3 and 32 characters long.
InnerException None
ErrorResponse


"error": ***
    "message": "Error, provided service name is invalid. It must only consist of lowercase letters, numbers, or dashes, start with a letter, end with a letter or number, and be between 3 and 32 characters long."
***

input given : "testACIService"
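A possible approach, sketched here as an illustrative helper (sanitize_service_name is not part of the action), is to normalize the name before deployment:

```python
# Sketch: normalize a user-supplied service name to Azure's rules
# (lowercase letters, numbers, dashes; start with a letter; end with a
# letter or number; 3-32 characters).
import re


def sanitize_service_name(name):
    name = re.sub(r"[^a-z0-9-]", "-", name.lower())  # lowercase, allowed chars only
    name = name.strip("-")                           # no leading/trailing dashes
    name = re.sub(r"^[^a-z]+", "", name)             # must start with a letter
    return name[:32].rstrip("-")                     # at most 32 chars
```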

SSLError: HTTPSConnectionPool when registering ML model on Azure

Hi Everyone,

Please I have my Databricks instance in a VNET. I’m trying to deploy my Machine learning model using the Azure ML workspace on Azure Container Instance (ACI).

I’m able to create an ML workspace. I get an SSLError when I try to register the model using Model.register().

Using this code -

from azureml.core import Workspace
from azureml.core.model import Model

ws = Workspace.create(name='myworkspace',
                      subscription_id='mysub_id',
                      resource_group='myresourcegroup',
                      location='eastus')

model_reg = Model.register(model_path="./model_dir",
                           model_name="ModelX",
                           workspace=ws)

Find below the error when I try to deploy my model.

SSL Error:
SSLError: HTTPSConnectionPool(host='eastus.experiments.azureml.net', port=443): Max retries exceeded with url: /discovery (Caused by SSLError(SSLError("bad handshake: SysCallError(104, 'ECONNRESET')")))
Please note only Azure Databricks is in a VNET on Azure. How do I resolve this and deploy my model as a webservice on ACI?

Thank you.

Could not load model with provided details

Hi,

I am encountering the below error when I am trying to deploy a model in ACI.

Traceback (most recent call last):
  File "/code/main.py", line 318, in <module>
    main()
  File "/code/main.py", line 129, in main
    raise AMLConfigurationException(f"Could not load model with provided details: ***exception***")
utils.AMLConfigurationException: Could not load model with provided details: WebserviceException:
	Message: ModelNotFound: Model with name porto_seguro_safe_driver_model, version 14 not found in provided workspace
	InnerException None
	ErrorResponse 
***
    "error": ***
        "message": "ModelNotFound: Model with name porto_seguro_safe_driver_model, version 14 not found in provided workspace"
    ***
***

Here is my GitHub Action yaml.

      - name: Deploy Model to ACI
        id: deploy_aci
        uses: Azure/aml-deploy@v1
        with:
          azure_credentials: ${{ secrets.AZURE_CREDENTIALS }}
          model_name:  "${{ env.MODEL_NAME }}"
          model_version: "${{ env.MODEL_VERSION }}"
          parameters_file: "deploy_aci.json"

I verified that I have these models in my AML.

ka@Azure:~$ az ml model list -o table
Name                            Version    Framework    CreatedTime
------------------------------  ---------  -----------  -------------------
porto_seguro_safe_driver_model  14         Custom       2020-11-20T15:16:19
porto_seguro_safe_driver_model  13         Custom       2020-11-19T17:25:36
porto_seguro_safe_driver_model  12         Custom       2020-11-19T16:39:02
porto_seguro_safe_driver_model  11         Custom       2020-11-19T16:15:37
porto_seguro_safe_driver_model  10         Custom       2020-09-18T17:25:55
porto_seguro_safe_driver_model  9          Custom       2020-09-04T15:55:04
porto_seguro_safe_driver_model  8          Custom       2020-09-03T22:40:08
porto_seguro_safe_driver_model  7          Custom       2020-09-03T21:47:30
porto_seguro_safe_driver_model  6          Custom       2020-09-03T16:11:49
porto_seguro_safe_driver_model  5          Custom       2020-09-03T14:08:11
porto_seguro_safe_driver_model  4          Custom       2020-08-28T21:22:35
porto_seguro_safe_driver_model  3          Custom       2020-08-28T20:38:17
driver_model.pkl                2          Custom       2020-08-28T20:17:31
porto_seguro_safe_driver_model  2          Custom       2020-08-25T02:49:24
porto_seguro_safe_driver_model  1          Custom       2020-08-25T02:21:00
driver_model.pkl                1          Custom       2020-08-19T18:04:23

Any suggestion on how I should go about this error?

Add model profiling as feature

Model profiling would be a nice feature to add. Alternatively, we could create a new Action for this feature to simplify the Action.

Single word functions valid?

Hi @marvinbuss, I noticed there are functions with single words, ergo only one pattern. Will the model detect sentences containing the word, or does the function only work for sentences with one word? I inserted a function for you to check.

def _positive_word08(
    self,
    rule_id="R0079",
    description="The word \"But\" implies a conflict. It will subconsciously affect your recipient. To make sure you keep it positive, use a stronger, more confident word if possible.",
    info="Try to be very precise on what you want and use the present form if possible.",
    example_pos="And",
    example_neg="But"
) -> None:
    """Check for negative and positive tone.

    rule_id (str): The ID of the rule.
    description (str): The description of the rule.
    info (str): Additional info about the rule.
    example_pos (str): Positive example for this rule.
    example_neg (str): Negative example for this rule.
    """
    # Define matcher
    matcher = Matcher(self.nlp.vocab)

    # Define pattern to match
    pattern1 = [
        {"LOWER": "But"}
    ]

    # Add the pattern to the matcher and apply the matcher to the doc
    matcher.add("positivetone_pattern", [pattern1])
    matches = matcher(self.doc)

    # Merge the matched intervals
    intervals = [(start, end) for match_id, start, end in matches]
    merged = merge_intervals(intervals)

    # Add recommendations
    if len(merged) > 0:
        self._add_recommendation(
            rule_id=rule_id,
            description=description,
            info=info,
            example_pos=example_pos,
            example_neg=example_neg,
            token_intervals=merged
        )

Pass list as input to action

The user must be able to pass a list for the following input values:

  • model_name
  • model_version
    This is required to deploy multiple models to one container.
