
ai-predictivemaintenance's Introduction

Predictive Maintenance with AI

Deploy to Azure

This open-source solution template showcases a complete Azure infrastructure capable of supporting Predictive Maintenance scenarios in the context of IoT remote monitoring. This repo provides reusable and customizable building blocks to enable Azure customers to solve Predictive Maintenance problems using Azure's cloud AI services.

Main features

Requirements

You will need an Azure subscription to get started.

Deploying the solution will create a resource group in your subscription and populate it with the following resources:

Reporting Issues and Feedback

Issues

If you discover any bugs, please file an issue here, making sure to fill out the provided template with the appropriate information.

Feedback

To share your feedback, ideas or feature requests, please contact [email protected].

Learn More


This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

ai-predictivemaintenance's People

Contributors

jadesai, jqhuangonearth, laramume, microsoft-github-policy-service[bot], microsoftopensource, miwelsh, msftgits, naveenvig, ramkumarkrishnan, t-prshor, tjacobhi, wdecay


ai-predictivemaintenance's Issues

Data Generator HTTP Error

When I run the data generator, I see the following HTTP response errors.

Any idea what's going wrong here?

xxxxxxx@yyvm:~/cb/AI-PredictiveMaintenance/src$ python3 WebApp/App_Data/jobs/continuous/Simulator/simulator.py
/usr/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: to-Python converter for IoTHubMapError already registered; second conversion method ignored.
return f(*args, **kwds)
/usr/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: to-Python converter for IoTHubMessageError already registered; second conversion method ignored.
return f(*args, **kwds)
/usr/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: to-Python converter for MAP_RESULT_TAG already registered; second conversion method ignored.
return f(*args, **kwds)
/usr/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: to-Python converter for IOTHUB_MESSAGE_RESULT_TAG already registered; second conversion method ignored.
return f(*args, **kwds)
/usr/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: to-Python converter for IOTHUBMESSAGE_DISPOSITION_RESULT_TAG already registered; second conversion method ignored.
return f(*args, **kwds)
/usr/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: to-Python converter for IOTHUBMESSAGE_CONTENT_TYPE_TAG already registered; second conversion method ignored.
return f(*args, **kwds)
/usr/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: to-Python converter for IoTHubMap already registered; second conversion method ignored.
return f(*args, **kwds)
/usr/lib/python3.5/importlib/_bootstrap.py:222: RuntimeWarning: to-Python converter for IoTHubMessage already registered; second conversion method ignored.
return f(*args, **kwds)
Error: Time:Fri Nov 9 06:43:56 2018 File:/usr/sdk/src/c/c-utility/adapters/httpapi_curl.c Func:HTTPAPI_ExecuteRequest Line:624 Failure in HTTP communication: server reply code is 429
Info: HTTP Response:{"Message":"ErrorCode:ThrottlingBacklogTimeout;The request has been throttled. Wait 10 seconds and try again. Operation type: CRUD","ExceptionMessage":"Tracking ID:2369eb7fa0f74c89b34a0107bd6252b2-G:6-TimeStamp:11/09/2018 06:43:56"}
Error: Time:Fri Nov 9 06:43:56 2018 File:/usr/sdk/src/c/iothub_service_client/src/iothub_registrymanager.c Func:sendHttpRequestCRUD Line:1004 Http Failure status code 429.
Error: Time:Fri Nov 9 06:43:57 2018 File:/usr/sdk/src/c/c-utility/adapters/httpapi_curl.c Func:HTTPAPI_ExecuteRequest Line:624 Failure in HTTP communication: server reply code is 429
Info: HTTP Response:{"Message":"ErrorCode:ThrottlingBacklogTimeout;The request has been throttled. Wait 10 seconds and try again. Operation type: CRUD","ExceptionMessage":"Tracking ID:2d75e0b482914ad49654bd07c4e5d86b-G:6-TimeStamp:11/09/2018 06:43:57"}
Error: Time:Fri Nov 9 06:43:57 2018 File:/usr/sdk/src/c/iothub_service_client/src/iothub_registrymanager.c Func:sendHttpRequestCRUD Line:1004 Http Failure status code 429.
Error: Time:Fri Nov 9 06:44:56 2018 File:/usr/sdk/src/c/c-utility/adapters/httpapi_curl.c Func:HTTPAPI_ExecuteRequest Line:624 Failure in HTTP communication: server reply code is 429
Info: HTTP Response:{"Message":"ErrorCode:ThrottlingBacklogTimeout;The request has been throttled. Wait 10 seconds and try again. Operation type: CRUD","ExceptionMessage":"Tracking ID:8f6206def6dd476bb55416bec191a7fd-G:2-TimeStamp:11/09/2018 06:44:56"}
Error: Time:Fri Nov 9 06:44:56 2018 File:/usr/sdk/src/c/iothub_service_client/src/iothub_registrymanager.c Func:sendHttpRequestCRUD Line:1004 Http Failure status code 429.
Error: Time:Fri Nov 9 06:45:32 2018 File:/usr/sdk/src/c/c-utility/adapters/httpapi_curl.c Func:HTTPAPI_ExecuteRequest Line:624 Failure in HTTP communication: server reply code is 429
Info: HTTP Response:{"Message":"ErrorCode:ThrottlingBacklogTimeout;The request has been throttled. Wait 10 seconds and try again. Operation type: CRUD","ExceptionMessage":"Tracking ID:9ff6340532394064843cc103f45cdb6d-G:4-TimeStamp:11/09/2018 06:45:32"}
Error: Time:Fri Nov 9 06:45:32 2018 File:/usr/sdk/src/c/iothub_service_client/src/iothub_registrymanager.c Func:sendHttpRequestCRUD Line:1004 Http Failure status code 429.
Error: Time:Fri Nov 9 06:45:36 2018 File:/usr/sdk/src/c/c-utility/adapters/httpapi_curl.c Func:HTTPAPI_ExecuteRequest Line:624 Failure in HTTP communication: server reply code is 429
Info: HTTP Response:{"Message":"ErrorCode:ThrottlingBacklogTimeout;The request has been throttled. Wait 10 seconds and try again. Operation type: CRUD","ExceptionMessage":"Tracking ID:2ee21f96c78843129b06fa3a538ffe9c-G:4-TimeStamp:11/09/2018 06:45:36"}
Error: Time:Fri Nov 9 06:45:36 2018 File:/usr/sdk/src/c/iothub_service_client/src/iothub_registrymanager.c Func:sendHttpRequestCRUD Line:1004 Http Failure status code 429.
Error: Time:Fri Nov 9 06:46:32 2018 File:/usr/sdk/src/c/c-utility/adapters/httpapi_curl.c Func:HTTPAPI_ExecuteRequest Line:624 Failure in HTTP communication: server reply code is 429
Info: HTTP Response:{"Message":"ErrorCode:ThrottlingBacklogTimeout;The request has been throttled. Wait 10 seconds and try again. Operation type: CRUD","ExceptionMessage":"Tracking ID:912af35f2cd640049cbe58ec9a243a2e-G:4-TimeStamp:11/09/2018 06:46:32"}
Error: Time:Fri Nov 9 06:46:32 2018 File:/usr/sdk/src/c/iothub_service_client/src/iothub_registrymanager.c Func:sendHttpRequestCRUD Line:1004 Http Failure status code 429.
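The log shows IoT Hub throttling the registry CRUD calls (HTTP 429, ErrorCode:ThrottlingBacklogTimeout) and asking the caller to wait and try again. A minimal, hypothetical sketch of working around this by retrying the throttled call with exponential backoff; this is not the simulator's actual code, and registry_call stands in for whatever device create/update operation is being throttled:

import time

def call_with_backoff(registry_call, max_attempts=5, initial_delay=10):
    # Retry the throttled operation, doubling the wait after each failure,
    # since the 429 response asks the caller to wait and try again.
    delay = initial_delay
    for attempt in range(1, max_attempts + 1):
        try:
            return registry_call()
        except Exception as error:  # ideally narrow this to the SDK's throttling error
            if attempt == max_attempts:
                raise
            print('Attempt {0} failed ({1}); retrying in {2}s'.format(attempt, error, delay))
            time.sleep(delay)
            delay *= 2

Usage would be something like call_with_backoff(lambda: create_device(device_id)), where create_device is the existing registration helper.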

Issue accessing Linux DSVM

I was able to deploy all the resources successfully, but when I go into the App Service and try to access the Linux DSVM by clicking on the Jupyter Notebook link, I type the username and password, click Sign In, and then I get the following error:

(screenshot not included)

I also noticed that two of the WebJobs are always in the Pending Restart status.

(screenshot not included)

If I try to start them manually it doesn't work, and I get this in the logs:

[10/15/2020 14:36:33 > 437cbc: SYS INFO] WebJob singleton lock is released
[10/15/2020 14:36:33 > 437cbc: SYS INFO] WebJob singleton lock is acquired
[10/15/2020 14:36:33 > 437cbc: SYS INFO] Run script 'run.cmd' with script host - 'WindowsScriptHost'
[10/15/2020 14:36:33 > 437cbc: SYS INFO] Status changed to Running
[10/15/2020 14:36:33 > 437cbc: INFO] 
[10/15/2020 14:36:33 > 437cbc: INFO] D:\local\Temp\jobs\continuous\PythonAndStorageSetup\v5pntngq.qlw>set READY_FILE=D:\home\site\READY 
[10/15/2020 14:36:33 > 437cbc: INFO] 
[10/15/2020 14:36:33 > 437cbc: INFO] D:\local\Temp\jobs\continuous\PythonAndStorageSetup\v5pntngq.qlw>set PYTHON_DIR=D:\home\python364x64\python.exe 
[10/15/2020 14:36:33 > 437cbc: INFO] 
[10/15/2020 14:36:33 > 437cbc: INFO] D:\local\Temp\jobs\continuous\PythonAndStorageSetup\v5pntngq.qlw>IF NOT EXIST D:\home\python364x64\python.exe EXIT
[10/15/2020 14:36:33 > 437cbc: INFO] 
[10/15/2020 14:36:33 > 437cbc: INFO] D:\local\Temp\jobs\continuous\PythonAndStorageSetup\v5pntngq.qlw>IF EXIST D:\home\site\READY EXIT
[10/15/2020 14:36:33 > 437cbc: SYS INFO] Status changed to Success
[10/15/2020 14:36:33 > 437cbc: SYS INFO] Process went down, waiting for 60 seconds
[10/15/2020 14:36:33 > 437cbc: SYS INFO] Status changed to PendingRestart

What am I doing wrong?

Thank you in advance.

Regards

How does this AI-Predictive Maintenance workflow work?

Hi,

Is there documentation on how this predictive maintenance solution works? I mean: who starts the data generator? How does the data flow into Azure Blob Storage (ABS)? Who starts the Spark job that reads the data from ABS? What processing is being done? Where and how is the data stored?

The documentation on GitHub shows the overall component interaction and the flow, but it doesn't explain how things start off, where the data moves, how it is synchronized, or where the data generator and offline maintenance events are used.

Would it be possible to recreate this workflow manually from the GitHub src?

Regards,
/Girish BK

Can't build data-generator

Hi,

I'm trying to use the predictive maintenance data generator (https://github.com/Azure/AI-PredictiveMaintenance/tree/master/src/WebApp/App_Data/jobs/continuous/Simulator)
as a module in Azure IoT Edge (https://github.com/Azure/iotedge).

In its current form, the data generator creates Azure IoT Hub devices, then creates the simulated devices and starts sending the data to IoT Hub. What I would like to do is remove the IoT Hub interface from the data generator and just use the simulated devices to create the messages and publish them to the IoT Edge runtime broker.

I tried to remove the IoT Hub interfaces to build the module, but I am faced with some basic import issues like the one below.


xxxx@VVVvm:/cb/AI-PredictiveMaintenance/src$ python WebApp/App_Data/jobs/continuous/Simulator/simulator.py
Traceback (most recent call last):
  File "WebApp/App_Data/jobs/continuous/Simulator/simulator.py", line 16, in <module>
    from devices import SimulatorFactory
  File "/home/netiotroot/cb/AI-PredictiveMaintenance/src/WebApp/App_Data/jobs/continuous/Simulator/devices/__init__.py", line 1, in <module>
    from devices.simulated_device import SimulatorFactory
  File "/home/netiotroot/cb/AI-PredictiveMaintenance/src/WebApp/App_Data/jobs/continuous/Simulator/devices/simulated_device.py", line 3, in <module>
    from abc import ABC, abstractmethod
ImportError: cannot import name ABC
xxxx@VVVvm:/cb/AI-PredictiveMaintenance/src$
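The ImportError suggests the script is being run with Python 2 (note the plain python in the prompt): abc.ABC only exists on Python 3.4 and later, so running the simulator with python3 avoids the error. If the module really has to run on both major versions, a version-agnostic base class can be built from ABCMeta. A sketch, where SimulatedDevice and generate_telemetry are hypothetical stand-ins for the names defined in simulated_device.py:

from abc import ABCMeta, abstractmethod

# ABCMeta('ABC', (object,), {}) builds an abstract base class that works on
# both Python 2 and Python 3, unlike abc.ABC which is Python 3.4+ only.
ABC = ABCMeta('ABC', (object,), {})

class SimulatedDevice(ABC):
    @abstractmethod
    def generate_telemetry(self):  # hypothetical method name
        raise NotImplementedError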


Questions:
1) Can we have instructions on how to build the WebApp jobs on Linux systems?
2) Would it be possible for the authors of the predictive maintenance solution to list the steps needed to convert the data generator into an IoT Edge module that sends messages to the local pub/sub broker rather than to IoT Hub?

The predictive maintenance data generator is a very good use case for working on the end-to-end flow and customizing it for our needs.

Hope to hear about this soon.

Regards,
/Girish BK

Deployment failed - Databricks featurization_task

I am unable to deploy my solution. The deployment fails on the "Resource provisioning" step with the following output:

Creating Databricks cluster and starting featurization streaming job | Failed |
Traceback (most recent call last): File "run.py", line 117, in <module>

VM Provisioning - ScriptExtension failed

The final step of the deployment, "Provisioning Linux DSVM", failed with the error below. I was able to get the deployment to complete by uninstalling the ScriptExtension from the VM and then clicking "retry" on the deployment page.

CIQS Error:
The resource operation completed with terminal provisioning state 'Failed'. (Code: ResourceDeploymentFailure, ResourceType: Microsoft.Compute/virtualMachines/extensions,ResourceName: pmmm32mckpuafww/pmmm32mckpuafww) - VM has reported a failure when processing extension 'pmmm32mckpuafww'. Error message: "Malformed status file [ExtensionError] Invalid status/status: failed". (Code: VMExtensionProvisioningError)

Error in Azure Portal:
Type: Microsoft.OSTCExtensions.CustomScriptForLinux
Version: 1.5.2.2
Status: Provisioning failed
Status level: Error
Status message: Malformed status file [ExtensionError] Invalid status/status: failed
Handler status: Ready
Handler status level: Info

Azure Error: InvalidTemplate

Hi,

I see this error while deploying the solution via the pdm-arm.json ARM template:

girishkb-mac-0:ARMTemplates girishkb$ az group deployment create -n example2-deployment -g example2 --template-file pdm-arm.json
Please provide string value for 'databricksWorkspaceUrl' (? for help): https://eastus2.azuredatabricks.net
Please provide string value for 'databricksToken' (? for help): dapif7xxxxxxxxxxxxxxxxxxxxxx4
Please provide string value for 'dsvmUsername' (? for help): xxxxxxxxxx
Please provide securestring value for 'dsvmPassword' (? for help):
Azure Error: InvalidTemplate
Message: Deployment template validation failed: 'The template resource 'linuxDsvmTemplate' at line '1' and column '7537' is not valid: Unable to parse language expression 'concat(variables('gitHubBaseUrl'),'src/CustomScripts/setup.sh ', concat(variables('gitHubBaseUrl'), '/binaries/Notebooks.zip ', concat(variables('gitHubBaseUrl'), '/binaries/spark-avro_2.11-4.0.0.jar')': expected token 'RightParenthesis' and actual 'EndOfData'.. Please see https://aka.ms/arm-template-expressions for usage details.'.
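The expression quoted in the error opens three concat( calls but closes only one, so the template parser runs out of input before it finds the matching parentheses. One plausible fix, assuming the intent is simply to join the base URL with each artifact path, is to balance the parentheses or flatten the whole thing into a single concat, e.g.:

[concat(variables('gitHubBaseUrl'), 'src/CustomScripts/setup.sh ', variables('gitHubBaseUrl'), '/binaries/Notebooks.zip ', variables('gitHubBaseUrl'), '/binaries/spark-avro_2.11-4.0.0.jar')]

This is a sketch of the corrected expression, not necessarily the exact string the template authors intended.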

linuxDsvmTemplate deployment failed

Provisioning error:

{
  "code": "DeploymentFailed",
  "message": "At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/arm-debug for usage details.",
  "details": [
    {
      "code": "Conflict",
      "message": "{\r\n  \"status\": \"Failed\",\r\n  \"error\": {\r\n    \"code\": \"ResourceDeploymentFailure\",\r\n    \"message\": \"The resource operation completed with terminal provisioning state 'Failed'.\",\r\n    \"details\": [\r\n      {\r\n        \"code\": \"DeploymentFailed\",\r\n        \"message\": \"At least one resource deployment operation failed. Please list deployment operations for details. Please see https://aka.ms/arm-debug for usage details.\",\r\n        \"details\": [\r\n          {\r\n            \"code\": \"Conflict\",\r\n            \"message\": \"{\\r\\n  \\\"status\\\": \\\"Failed\\\",\\r\\n  \\\"error\\\": {\\r\\n    \\\"code\\\": \\\"ResourceDeploymentFailure\\\",\\r\\n    \\\"message\\\": \\\"The resource operation completed with terminal provisioning state 'Failed'.\\\",\\r\\n    \\\"details\\\": [\\r\\n      {\\r\\n        \\\"code\\\": \\\"VMExtensionProvisioningError\\\",\\r\\n        \\\"message\\\": \\\"VM has reported a failure when processing extension 'pmmgzcvfoydwmm2'. Error message: \\\\\\\"Launch command failed: [Errno 2] No such file or directory: '/var/lib/waagent/Microsoft.OSTCExtensions.CustomScriptForLinux-1.5.4/download/0/stdout'\\\\\\\".\\\"\\r\\n      }\\r\\n    ]\\r\\n  }\\r\\n}\"\r\n          }\r\n        ]\r\n      }\r\n    ]\r\n  }\r\n}"
    }
  ]
}


Unable to create Workspace

Hi,
I was running your Operationalization notebook code.
After running this code:

ws = Workspace.create(name = workspace_name, subscription_id = subscription_id,
                      resource_group = resource_group, location = workspace_region)

I was getting this exception:

WorkspaceException: Unable to create the workspace.
Deployment failed. Correlation ID: c32e0bbb-5e25-4a1b-a387-ca466e1ab111. {
"error": {
"code": "InvalidResourceType",
"message": "The resource type could not be found in the namespace 'Microsoft.MachineLearningServices' for api version '2018-03-01-preview'."
}
}
I have provided all the details regarding the Azure subscription.
Can you help me out with this exception?
Thanks,
Nikhil
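The InvalidResourceType message suggests the preview API version the SDK is calling (2018-03-01-preview) is not available for the Microsoft.MachineLearningServices namespace, which can happen with an outdated azureml-sdk build or a subscription where the provider is not yet set up. Ensuring the resource provider is registered, for example with az provider register --namespace Microsoft.MachineLearningServices, and upgrading the azureml SDK are both worth trying. These are guesses based on the error text, not steps from the solution's documentation.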

Model Registration Issue

Hi,
Can you help me out with this?
In the operationalisation.py notebook, this code:
model = Model.register(model_path = "model.pkl",
                       model_name = "model.pkl",
                       tags = ["pdm"],
                       description = "Predictive Maintenance multi-class classifier",
                       workspace = ws)

gives this exception:
Exception: Received bad response from Model Management Service:
Response Code: 400
Headers: {'Date': 'Fri, 21 Sep 2018 10:32:51 GMT', 'Connection': 'keep-alive', 'api-supported-versions': '2018-03-01-preview', 'Content-Type': 'application/json', 'x-ms-client-session-id': '', 'Strict-Transport-Security': 'max-age=15724800; includeSubDomains; preload', 'x-ms-client-request-id': '79ccef073517408a915e1f260c9b3667', 'Transfer-Encoding': 'chunked'}
Content: b'{"code":"BadRequest","statusCode":400,"message":"The request is invalid","details":[{"code":"EmptyOrInvalidPayload","message":"The request payload was either empty or invalid. Try again with a well-formed payload."}]}'

Thanks,
Nikhil

import error

When deploying the web app with Flask, I get the following error:

 * Environment: production
   WARNING: This is a development server. Do not use it in a production deployment.
   Use a production WSGI server instead.
 * Debug mode: off
Usage: flask run [OPTIONS]
Try 'flask run --help' for help.

Error: While importing 'app', an ImportError was raised.

Does anyone know how to resolve this?

Here are the imported libraries:

import numpy as np
import sys, os, time, glob
import requests
import json
import uuid
import json
import random
import markdown
import jwt
import io
import csv
import collections
from urllib.parse import urlparse
from datetime import datetime, timedelta
from functools import wraps
from flask import Flask, render_template, Response, request, redirect, url_for
from threading import Thread
from azure.storage.blob import BlockBlobService
from azure.storage.file import FileService
from azure.storage.file.models import FilePermissions
from azure.storage.blob.models import BlobPermissions
from azure.storage.table import TableService, Entity, TablePermissions
from flask_breadcrumbs import Breadcrumbs, register_breadcrumb
from iot_hub_helpers import IoTHub
from http import HTTPStatus
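flask run only reports that importing 'app' raised an ImportError; the underlying traceback, which names the import that actually failed, is hidden. A quick way to surface it is to import the application module directly. A sketch that assumes 'app' is the module name FLASK_APP points at:

# Run this from the web app directory; it prints the full traceback of the
# ImportError that `flask run` only summarizes. 'app' is an assumed module name.
import importlib
import traceback

try:
    importlib.import_module('app')
except ImportError:
    traceback.print_exc()

Once the failing import is visible, it is usually a package missing from that environment (for example flask_breadcrumbs, PyJWT, markdown, or the legacy azure.storage libraries used above).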

Error in Model Training

Hi,

I get this error in the model training notebook.

Area Under the Curve (AUC)
AUC is the area under the receiver operating characteristic curve (ROC curve), which is 1.0 for ideal classifiers and 0.5 for those that do no better than random guessing. Let's compare the AUC score of the trained model with that of the dummy classifier.

roc_auc_score expects binarized labels

binarizer = LabelBinarizer()
binarizer.fit(Y_train_res)
Y_test_binarized = binarizer.transform(Y_test)

def auc_score(y_true, y_pred):
    return roc_auc_score(binarizer.transform(y_true), binarizer.transform(y_pred), average='macro')

print('ROC AUC scores')
print('Trained model: {0}\nDummy classifier: {1}'.format(auc_score(Y_test, Y_predictions),
                                                         auc_score(Y_test, Y_dummy)))
ROC AUC scores

ValueError Traceback (most recent call last)
in ()
8
9 print('ROC AUC scores')
---> 10 print('Trained model: {0}\nDummy classifier: {1}'.format(auc_score(Y_test, Y_predictions),
11 auc_score(Y_test, Y_dummy)))

in auc_score(y_true, y_pred)
5
6 def auc_score(y_true, y_pred):
----> 7 return roc_auc_score(binarizer.transform(y_true), binarizer.transform(y_pred), average='macro')
8
9 print('ROC AUC scores')

/anaconda/envs/py35/lib/python3.5/site-packages/sklearn/metrics/ranking.py in roc_auc_score(y_true, y_score, average, sample_weight)
275 return _average_binary_score(
276 _binary_roc_auc_score, y_true, y_score, average,
--> 277 sample_weight=sample_weight)
278
279

/anaconda/envs/py35/lib/python3.5/site-packages/sklearn/metrics/base.py in _average_binary_score(binary_metric, y_true, y_score, average, sample_weight)
116 y_score_c = y_score.take([c], axis=not_average_axis).ravel()
117 score[c] = binary_metric(y_true_c, y_score_c,
--> 118 sample_weight=score_weight)
119
120 # Average the results

/anaconda/envs/py35/lib/python3.5/site-packages/sklearn/metrics/ranking.py in _binary_roc_auc_score(y_true, y_score, sample_weight)
266 def _binary_roc_auc_score(y_true, y_score, sample_weight=None):
267 if len(np.unique(y_true)) != 2:
--> 268 raise ValueError("Only one class present in y_true. ROC AUC score "
269 "is not defined in that case.")
270

ValueError: Only one class present in y_true. ROC AUC score is not defined in that case.

The ROC AUC score would be a good candidate when a single, sensitive model evaluation measure is needed.

Any idea what's going wrong here?

Regards,
/Girish BK
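The ValueError comes from the macro average: after binarization, at least one class column of Y_test contains only a single value (a class that appears in the resampled training labels but not in the test split), and ROC AUC is undefined for such a column. One way around it, sketched below with the variable names from the cell above, is to restrict the average to classes that actually occur in the test labels; this is a workaround sketch, not the notebook's official fix:

import numpy as np
from sklearn.metrics import roc_auc_score

def auc_score(y_true, y_pred):
    # Keep only the label columns where y_true contains both 0s and 1s;
    # roc_auc_score raises for a column with a single class.
    yt = binarizer.transform(y_true)
    yp = binarizer.transform(y_pred)
    usable = [c for c in range(yt.shape[1]) if len(np.unique(yt[:, c])) == 2]
    return roc_auc_score(yt[:, usable], yp[:, usable], average='macro')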

During deployment, Databricks URL is incorrect

This issue occurred on the "Databricks authorization" deployment page. The "here" link in the text "If a new Databricks workspace was created by the solution, you can access it here." is broken: the URL starts with "https://https" (https://https//westus2.azuredatabricks.net/aad/auth?has=&Workspace=/subscriptions/...). Once I removed the second "https//", the link worked as intended.

Additional notes: please clean up the text to tell the user they must click "here" before following the instructions. If the user doesn't click "here" first, they cannot log in to Databricks because the AD account has not been properly set up yet.

Error when creating simulated devices

I am unable to get the web app functioning properly. Both the device setup WebJob (DatabricksAndSimulatedDevicesSetup) and the web app backend API throw an error when creating new devices. The problem lies in the update_twin method of the iot_hub_helpers.py module. The IoT Hub API call returns response code 400 (Bad Request) with a very unhelpful error message:

{"Message":"ErrorCode:GenericBadRequest;BadRequest","ExceptionMessage":"Tracking ID:365b1586b0f64c61adda7ad80147c06a-G:6-TimeStamp:08/21/2019 09:37:24"}

Here is an example of the request content:

{"tags": {"simulated": true, "simulator": "devices.engines.Engine", "h1": 0.8996036339025263, "h2": 0.8212598701560975}, "desiredProperties": {}}

The devices are actually created in the IoT Hub and they are visible in the UI, but their status is always "Disconnected" and they don't produce any data.
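One thing that may be worth comparing, though it is only a guess at the cause: the device twin JSON documented for the IoT Hub REST API nests desired properties under properties.desired rather than a top-level desiredProperties field, so a well-formed twin patch would look roughly like the following. Whether iot_hub_helpers.update_twin already translates between the two shapes would need to be checked against the module itself.

{"tags": {"simulated": true, "simulator": "devices.engines.Engine", "h1": 0.8996036339025263, "h2": 0.8212598701560975}, "properties": {"desired": {}}}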

Data generator error from device: TableNotFound

File "D:\home\python364x64\lib\site-packages\azure\storage\table\tableservice.py", line 1096, in _perform_request
return super(TableService, self)._perform_request(request, parser, parser_args, operation_context)
File "D:\home\python364x64\lib\site-packages\azure\storage\storageclient.py", line 280, in _perform_request
raise ex
File "D:\home\python364x64\lib\site-packages\azure\storage\storageclient.py", line 248, in _perform_request
raise ex
File "D:\home\python364x64\lib\site-packages\azure\storage\storageclient.py", line 235, in _perform_request
_http_error_handler(HTTPError(response.status, response.message, response.headers, response.body))
File "D:\home\python364x64\lib\site-packages\azure\storage_error.py", line 114, in _http_error_handler
raise AzureHttpError(message, http_error.status)
azure.common.AzureMissingResourceHttpError: Not Found
{"odata.error":{"code":"TableNotFound","message":{"lang":"en-US","value":"The table specified does not exist.\nRequestId:87821d3e-a002-0030-37d4-0f46c2000000\nTime:2019-05-21T12:52:29.5613381Z"}}}


This happened as soon as the deployment finished successfully; there were no errors at any of the steps.
The demo did not work: we could see the web app, but the simulated devices were not producing any data, so the ML part never got a chance to produce results.
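The TableNotFound error means the simulator is reading from or writing to an Azure Table that was never created in the storage account. A minimal sketch of creating the missing table up front with the same legacy azure.storage.table SDK the traceback shows; the table name 'equipment' is a hypothetical placeholder, so use whichever table the simulator actually expects:

from azure.storage.table import TableService

# Credentials for the storage account provisioned by the solution.
table_service = TableService(account_name='<storage account>', account_key='<account key>')

# create_table is a no-op if the table already exists when fail_on_exist=False.
table_service.create_table('equipment', fail_on_exist=False)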
