azure-samples / azure-intelligent-edge-patterns

Samples for Intelligent Edge Patterns

License: MIT License


azure-intelligent-edge-patterns's Introduction

Notice

Thank you for your interest in the samples contained here! This repository is archived and no longer actively maintained. If you are looking for project Vision on Edge (VoE), please refer to project KubeAI Application Nucleus for edge (KAN) instead. For more details, please refer to this Tech Community blog.

Azure Intelligent Edge Patterns

These samples demonstrate how to quickly get started developing for the Azure Intelligent Edge, using Azure Stack Edge, Azure Stack HCI, and Azure Stack Hub. Each sample is self-contained and may require extra hardware.

Resources

azure-intelligent-edge-patterns's People

Contributors

andykao1213, anishekkamal, anjayajodha, borisneal, c-bowers-neal, chenpeirupenny, dependabot[bot], ewebster-fractal, garvitar, goatwu1993, hughku, initmahesh, kaka-lin, lancehsu, lisongshan007, michaeltse1, microsoftopensource, neilbird, panchul, penorouzi, qscgyujm, reenadk, rlfmendes, ronpai, rtibi, sijuman, tommywu052, waitingkuo, wendylee20, ylolinker


azure-intelligent-edge-patterns's Issues

CSI-2 based cameras

Please provide us with the following information:

This issue is for a: (mark with an x)

- [ ] bug report -> please search issues before submitting
- [x] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

Attach a CSI-2 based camera to the solution.

Any log messages given by the failure

Expected/desired behavior

OS and Version?

Ubuntu 18.04

Versions

Mention any other details that might be useful


Thanks! We'll be in touch soon.

factory-ai-vision: InferenceModule failing with libGL.so.1: No such file or directory with "deployment.cpu.template.json"

Please provide us with the following information:

This issue is for a: (mark with an x)

- [x] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

After installing all Azure prerequisites for running factory-ai-vision, I started following the manual deployment steps
from: https://github.com/Azure-Samples/azure-intelligent-edge-patterns/tree/master/factory-ai-vision
Step 1: "Option 2: Manual installation building a docker container and deploy by Visual Studio Code"
Filled in all the fields in:
env-template (FYI: I had to rename this to .env before building the container image in VS Code)

Step 2: Under EdgeSolutions I used "deployment.cpu.template.json" to build and push the container image

Any log messages given by the failure

InferenceModule failed with "backoff"


Expected/desired behavior

The InferenceModule edge module should start without going into backoff

OS and Version?

Windows 7, 8 or 10. Linux (which distribution). macOS (Yosemite? El Capitan? Sierra?)
Ubuntu 16.04
CPU: Intel Core i7

Versions

Mention any other details that might be useful
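One common fix for this class of error, sketched here as an assumption (this repo's actual Dockerfiles were not checked): `libGL.so.1` is typically dlopened by `opencv-python` at import time, and slim Debian/Ubuntu base images do not ship it. Installing the runtime libraries in the module's Dockerfile usually resolves the backoff:

```dockerfile
# Install the OpenGL/GLib runtime libraries that cv2 loads at import time.
# Assumes a Debian/Ubuntu base image; package names differ on other distros.
RUN apt-get update && \
    apt-get install -y --no-install-recommends libgl1 libglib2.0-0 && \
    rm -rf /var/lib/apt/lists/*
```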


Thanks! We'll be in touch soon.

Factory AI Vision Jetson templates have merge conflict markers in them

Please provide us with the following information:

This issue is for a: (mark with an x)

- [ ] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [X] regression (a behavior that used to work and stopped in a new release)

Both https://github.com/Azure-Samples/azure-intelligent-edge-patterns/blob/master/factory-ai-vision/EdgeSolution/deployment.jetson.template.json and https://github.com/Azure-Samples/azure-intelligent-edge-patterns/blob/master/factory-ai-vision/EdgeSolution/deployment.jetson.opencv.template.json have merge conflict markers on the master branch.

I would offer a PR to resolve this; however, the kakalin/ovms-server docker image the conflict references has no container for the ARMv8 architecture used by the Jetson, so even if this change were merged successfully, the deployment would still fail. Therefore I'm not sure what the desired resolution is.
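As a quick sanity check before deploying, the templates can be scanned for leftover conflict markers with a few lines of stdlib Python (a hypothetical helper, not part of this repo):

```python
import re

# Git conflict markers always start a line: "<<<<<<< ", "=======", ">>>>>>> ".
_CONFLICT = re.compile(r"^(<<<<<<< |=======\s*$|>>>>>>> )", re.MULTILINE)

def has_merge_markers(text: str) -> bool:
    """Return True if the text still contains unresolved Git conflict markers."""
    return bool(_CONFLICT.search(text))
```

Running this over a deployment template before `iotedgedev`/VS Code picks it up would have flagged both files.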

Minimal steps to reproduce

Attempt a deployment using the deployment.jetson.template.json or deployment.jetson.opencv.template.json template

Any log messages given by the failure

  • Leaving the deployment as it is in the repository results in a complaint about an invalid deployment .json
  • Fixing the merge conflict attempts to use an amd64 container on the Jetson, causing manifest for kakalin/ovms-app:latest not found: manifest unknown: manifest unknown

Expected/desired behavior

Deployment to the Jetson works

OS and Version?

Tegra 4 Linux (Ubuntu 18.04) on Jetson NX

Need more detail in instructions

@ewebster-neal I am trying to follow the instructions for running the FHIR Server on Kubernetes on Azure Stack Hub.

In particular, I need more information on how to establish the credentials and the meaning of some of the instructions in the Environmental Definition parameters:

  • FHIR_VERSION: this is clear.
  • FHIRServer__Security__Authentication__Audience: I am not sure how to specify the audience. The service doesn't contain an audience value. Is this the application ID, object ID, or other value?
  • FHIRServer__Security__Authentication__Authority: This value is also obscure. Is this the secret/key for the SPN?
  • SAPASSWORD: How do I specify the SQL DB connection string? Is this the connection string?
  • ApplicationInsights__InstrumentationKey: Is this required?

In addition, can you point me to recommended instructions or how I might get the cert for Kestrel?

Please include release notes with each new release

Please provide us with the following information:

This issue is for a: (mark with an x)

- [ ] bug report -> please search issues before submitting
- [ ] feature request
- [x] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

Please include a couple of bullets with each new release clarifying what's in it. There's usually only a short note that doesn't really help me understand whether I should update or not.

Any log messages given by the failure

Expected/desired behavior

OS and Version?

Windows 7, 8 or 10. Linux (which distribution). macOS (Yosemite? El Capitan? Sierra?)

Versions

Mention any other details that might be useful


Thanks! We'll be in touch soon.

Retraining only works once; after that, images will not be captured for retraining

Please provide us with the following information:

This issue is for a: (mark with an x)

- [x] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

Retraining only works once (it works the first time with manual identification). After that, images will not be captured for retraining.

Any log messages given by the failure

Errors in InferenceModule:
ERROR:main:Exception on /update_model [GET]
Traceback (most recent call last):
File "/opt/miniconda/lib/python3.6/urllib/request.py", line 1318, in do_open
encode_chunked=req.has_header('Transfer-encoding'))
File "/opt/miniconda/lib/python3.6/http/client.py", line 1254, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/opt/miniconda/lib/python3.6/http/client.py", line 1300, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/opt/miniconda/lib/python3.6/http/client.py", line 1249, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/opt/miniconda/lib/python3.6/http/client.py", line 1036, in _send_output
self.send(msg)
File "/opt/miniconda/lib/python3.6/http/client.py", line 974, in send
self.connect()
File "/opt/miniconda/lib/python3.6/http/client.py", line 1407, in connect
super().connect()
File "/opt/miniconda/lib/python3.6/http/client.py", line 946, in connect
(self.host,self.port), self.timeout, self.source_address)
File "/opt/miniconda/lib/python3.6/socket.py", line 704, in create_connection
for res in getaddrinfo(host, port, 0, SOCK_STREAM):
File "/opt/miniconda/lib/python3.6/socket.py", line 745, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno -3] Temporary failure in name resolution

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/opt/miniconda/lib/python3.6/site-packages/flask/app.py", line 2447, in wsgi_app
response = self.full_dispatch_request()
File "/opt/miniconda/lib/python3.6/site-packages/flask/app.py", line 1952, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/opt/miniconda/lib/python3.6/site-packages/flask/app.py", line 1821, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/opt/miniconda/lib/python3.6/site-packages/flask/_compat.py", line 39, in reraise
raise value
File "/opt/miniconda/lib/python3.6/site-packages/flask/app.py", line 1950, in full_dispatch_request
rv = self.dispatch_request()
File "/opt/miniconda/lib/python3.6/site-packages/flask/app.py", line 1936, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "main.py", line 429, in update_model
get_file_zip(model_uri, MODEL_DIR)
File "/app/utility.py", line 87, in get_file_zip
remotefile = urlopen(url)
File "/opt/miniconda/lib/python3.6/urllib/request.py", line 223, in urlopen
return opener.open(url, data, timeout)
File "/opt/miniconda/lib/python3.6/urllib/request.py", line 526, in open
response = self._open(req, data)
File "/opt/miniconda/lib/python3.6/urllib/request.py", line 544, in _open
'_open', req)
File "/opt/miniconda/lib/python3.6/urllib/request.py", line 504, in _call_chain
result = func(*args)
File "/opt/miniconda/lib/python3.6/urllib/request.py", line 1361, in https_open
context=self._context, check_hostname=self._check_hostname)
File "/opt/miniconda/lib/python3.6/urllib/request.py", line 1320, in do_open
raise URLError(err)
urllib.error.URLError: <urlopen error [Errno -3] Temporary failure in name resolution>
INFO:werkzeug:172.18.0.3 - - [13/Jul/2020 06:11:37] "GET /update_model?model_uri=https%3A%2F%2Firisscuprodstore.blob.core.windows.net%2Fm-2ab61ffd8f654cbb9fd769c94433de56%2Ff53171adc8b04247937fcb00abc5f255.ONNX.zip%3Fsv%3D2017-04-17%26sr%3Db%26sig%3DP7tiBoN5bhkyp03mf0SyAwJZ869QaO4Z5p4THpu3QEk%253D%26se%3D2020-07-14T06%253A02%253A52Z%26sp%3Dr HTTP/1.1" 500 -
INFO:werkzeug:172.18.0.3 -
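The endpoint fails with a 500 on the first transient DNS hiccup because `get_file_zip` calls `urlopen` exactly once. A minimal retry wrapper would make the model download resilient; the function name and parameters below are illustrative, not the repo's actual code:

```python
import socket
import time
import urllib.error
import urllib.request

def fetch_with_retry(url, attempts=3, delay=5, timeout=30):
    """Download `url`, retrying on transient failures such as
    'Temporary failure in name resolution' instead of surfacing a 500."""
    last_err = None
    for attempt in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except (urllib.error.URLError, socket.gaierror, socket.timeout) as err:
            last_err = err
            if attempt < attempts - 1:
                time.sleep(delay)
    raise last_err
```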


Expected/desired behavior

Images should always be captured for retraining when image accuracy falls between the configured min and max.

OS and Version?

Windows 7, 8 or 10. Linux (which distribution). macOS (Yosemite? El Capitan? Sierra?)
Ubuntu

Versions

Mention any other details that might be useful


Thanks! We'll be in touch soon.

Ability to use static files, both recorded video and static images

Please provide us with the following information:

This issue is for a: (mark with an x)

- [ ] bug report -> please search issues before submitting
- [x] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

Where customers have static video files recorded on other equipment, it is hard to convert these files to a stream. If we could ingest this video without having to convert it to an RTSP stream, and simply replay the video for inferencing, it would help with these types of deployments. Customers are looking to play back video files (AVI, MKV, MPEG4) as well as static images (JPG, PNG, etc.).

Any log messages given by the failure

Expected/desired behavior

Be able to replay recorded video and static image files for inferencing.

OS and Version?

Windows 7, 8 or 10. Linux (which distribution). macOS (Yosemite? El Capitan? Sierra?)

Versions

Mention any other details that might be useful


Thanks! We'll be in touch soon.

Missing JSON file per instructions https://github.com/Azure-Samples/azure-intelligent-edge-patterns/blob/master/edge-ai-void-detection/azure-stack.md#deploy-to-azure-stack

  1. Select the deployment.iotedgevm.amd64.json file in the config folder and then click Select Edge Deployment Manifest. Do not use the deployment.iotedgevm.template.json file.

Please provide us with the following information:

This issue is for a: (mark with an x)

- [ ] bug report -> please search issues before submitting
- [ ] feature request
- [x] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

The deployment.iotedgevm.amd64.json file doesn't exist in this repository

UI Feature Request: for Factory Vision AI end-to-end tool being built by Mahesh Yadav

Please provide us with the following information:

This issue is for a: (mark with an x)

  • bug report -> please search issues before submitting
  • feature request
  • documentation issue or request
  • regression (a behavior that used to work and stopped in a new release)
Mention any other details that might be useful
Given that the value proposition is an end-to-end no-code/minimal-code experience, maybe we can add an ML end-to-end pipeline and anchor users to a specific phase within it. This anchoring to different ML stages will also improve users' ML understanding, leading to more ML use-case prototyping. You are democratizing AI.

This will be very helpful for industrial customers who are not as conversant on ML techniques.

Thanks! We'll be in touch soon.

Multi-label Tagging of Images in the UI

Please provide us with the following information:

This issue is for a: (mark with an x)

- [x] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

  1. Capture pictures from video feed
  2. Tag the images
  3. You will not be able to apply more than one tag to an image

Any log messages given by the failure

No log messages

Expected/desired behavior

You should be able to have more than one tag per image for training purposes.

OS and Version?

Windows 7, 8 or 10. Linux (which distribution). macOS (Yosemite? El Capitan? Sierra?)

OS: Windows 10, Browser: Microsoft Edge Version 87.0.664.60 (Official build) (64-bit)

Versions

Mention any other details that might be useful


Thanks! We'll be in touch soon.

On start, if we do not choose any project and run the demo model with the demo video, the module fails

How to repro
Start a new deployment with no camera and CV keys only

  • Do not select any project on settings page
  • Go to demo and try running the demo

Expected
Demos should work

Actual
Demo gets stuck and we get this error in the web module log

HTTP GET /api/projects/2/export 200 [0.03, 73.59.105.21:60681]
[22-Jul-2020 22:44:24] INFO django.channels.server : HTTP GET /api/projects/2/export 200 [0.03, 73.59.105.21:60681]
[22-Jul-2020 22:44:27] INFO vision_on_edge.azure_training.api.views : exporting project. Project Id: {6}
Internal Server Error: /api/projects/6/export
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/django/core/handlers/exception.py", line 34, in inner
response = get_response(request)
File "/usr/local/lib/python3.8/site-packages/django/core/handlers/base.py", line 115, in _get_response
response = self.process_exception_by_middleware(e, request)
File "/usr/local/lib/python3.8/site-packages/django/core/handlers/base.py", line 113, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/usr/local/lib/python3.8/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view
return view_func(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/django/views/generic/base.py", line 71, in view
return self.dispatch(request, *args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/rest_framework/views.py", line 505, in dispatch
response = self.handle_exception(exc)
File "/usr/local/lib/python3.8/site-packages/rest_framework/views.py", line 465, in handle_exception
self.raise_uncaught_exception(exc)
File "/usr/local/lib/python3.8/site-packages/rest_framework/views.py", line 476, in raise_uncaught_exception
raise exc
File "/usr/local/lib/python3.8/site-packages/rest_framework/views.py", line 502, in dispatch
response = handler(request, *args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/rest_framework/decorators.py", line 50, in handler
return func(*args, **kwargs)
File "/app/vision_on_edge/azure_training/api/views.py", line 111, in export
project_obj = Project.objects.get(pk=project_id)
File "/usr/local/lib/python3.8/site-packages/django/db/models/manager.py", line 82, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/django/db/models/query.py", line 415, in get
raise self.model.DoesNotExist(
vision_on_edge.azure_training.models.Project.DoesNotExist: Project matching query does not exist.

And this in the Inference sample module

Traceback (most recent call last):
File "/opt/miniconda/lib/python3.6/site-packages/azure/iot/device/common/pipeline/pipeline_stages_mqtt.py", line 115, in _run_op
self.transport.connect(password=self.sas_token)
File "/opt/miniconda/lib/python3.6/site-packages/azure/iot/device/common/mqtt_transport.py", line 345, in connect
raise exceptions.ConnectionFailedError(cause=e)
azure.iot.device.common.transport_exceptions.ConnectionFailedError: None caused by [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:852)

ERROR:azure.iot.device.common.pipeline.pipeline_ops_base:ConnectOperation: completing with error None caused by [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:852)
WARNING:azure.iot.device.common.pipeline.pipeline_stages_base:RetryStage(ConnectOperation): Op needs retry with interval 20 because of None caused by [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:852). Setting timer.
INFO:werkzeug:172.18.0.2 - - [22/Jul/2020 22:48:10] "GET /metrics HTTP/1.1" 200 -
INFO:werkzeug:172.18.0.2 - - [22/Jul/2020 22:48:15] "GET /metrics HTTP/1.1" 200 -
INFO:werkzeug:172.18.0.2 - - [22/Jul/2020 22:48:20] "GET /metrics HTTP/1.1" 200 -
INFO:werkzeug:172.18.0.2 - - [22/Jul/2020 22:48:25] "GET /metrics HTTP/1.1" 200 -
INFO:azure.iot.device.common.pipeline.pipeline_stages_base:RetryStage(ConnectOperation): retrying
INFO:azure.iot.device.common.pipeline.pipeline_stages_mqtt:MQTTTransportStage(ConnectOperation): connecting
INFO:azure.iot.device.common.mqtt_transport:connecting to mqtt broker
INFO:azure.iot.device.common.mqtt_transport:Connect using port 8883 (TCP)
ERROR:azure.iot.device.common.pipeline.pipeline_stages_mqtt:transport.connect raised error
ERROR:azure.iot.device.common.pipeline.pipeline_stages_mqtt:Traceback (most recent call last):
File "/opt/miniconda/lib/python3.6/site-packages/azure/iot/device/common/mqtt_transport.py", line 340, in connect
host=self._hostname, port=8883, keepalive=DEFAULT_KEEPALIVE
File "/opt/miniconda/lib/python3.6/site-packages/paho/mqtt/client.py", line 937, in connect
return self.reconnect()
File "/opt/miniconda/lib/python3.6/site-packages/paho/mqtt/client.py", line 1100, in reconnect
sock.do_handshake()
File "/opt/miniconda/lib/python3.6/ssl.py", line 1077, in do_handshake
self._sslobj.do_handshake()
File "/opt/miniconda/lib/python3.6/ssl.py", line 689, in do_handshake
self._sslobj.do_handshake()
ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:852)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/opt/miniconda/lib/python3.6/site-packages/azure/iot/device/common/pipeline/pipeline_stages_mqtt.py", line 115, in _run_op
self.transport.connect(password=self.sas_token)
File "/opt/miniconda/lib/python3.6/site-packages/azure/iot/device/common/mqtt_transport.py", line 345, in connect
raise exceptions.ConnectionFailedError(cause=e)
azure.iot.device.common.transport_exceptions.ConnectionFailedError: None caused by [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:852)

ERROR:azure.iot.device.common.pipeline.pipeline_ops_base:ConnectOperation: completing with error None caused by [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:852)
WARNING:azure.iot.device.common.pipeline.pipeline_stages_base:RetryStage(ConnectOperation): Op needs retry with interval 20 because of None caused by [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:852). Setting timer.
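The web module's 500 comes from `Project.objects.get(pk=project_id)` raising `DoesNotExist` when no project has ever been configured. The fix pattern is to catch the missing-project case and return an explicit error payload; the sketch below is framework-free, with an in-memory `PROJECTS` dict standing in for the Django ORM (all names here are hypothetical, not the repo's actual code):

```python
PROJECTS = {2: {"name": "demo"}}  # stand-in for Project.objects; project 6 was never created

def export_project(project_id):
    """Return an error payload for a missing project instead of letting
    a DoesNotExist exception bubble up as an Internal Server Error."""
    project = PROJECTS.get(project_id)
    if project is None:
        return {"status": 404, "log": f"project {project_id} not found"}
    return {"status": 200, "project": project["name"]}
```

In the real Django view the same shape would be a try/except around the `.get()` call returning a 404 response.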

Please provide us with the following information:

This issue is for a: (mark with an x)

- [ ] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

Any log messages given by the failure

Expected/desired behavior

OS and Version?

Windows 7, 8 or 10. Linux (which distribution). macOS (Yosemite? El Capitan? Sierra?)

Versions

Mention any other details that might be useful


Thanks! We'll be in touch soon.

Cannot build ARM sample from VS Code

[Yesterday 11:17 PM] Mahesh Yadav

have you tried building using the IoT Edge build template here: https://github.com/Azure-Samples/azure-intelligent-edge-patterns/blob/yadavm_factoryai/factory-ai-vision/EdgeSolution/deployment.gpu.arm64v8.template.json

[Yesterday 11:18 PM] Mahesh Yadav

instructions on how to build a docker container for IoT Edge are here

[Yesterday 11:18 PM] Mahesh Yadav

https://github.com/Azure-Samples/azure-intelligent-edge-patterns/tree/yadavm_factoryai/factory-ai-vision#option-2-manual-installation-building-a-docker-container-and-deploy-by-visual-studio-code

[11:25 AM] Sean Kelly

Yes, those do not work.
Following steps 1 & 2, I created this .env file.
I think there is a typo in step 3: it says to use the factory-ai-vision/EdgeSolution/deployment.gpu.template.json template to build both the GPU and CPU versions.
I assumed that building factory-ai-vision/EdgeSolution/deployment.gpu.template.json would be wrong, but just in case I built it and got the following errors.
I also tried building factory-ai-vision/EdgeSolution/deployment.gpu.arm64v8.template.json (though it is not mentioned in the build steps) and it failed with a different set of errors.
It appears to have successfully built a visionwebmodule and webdbmodule, though neither is mentioned in the architecture documentation.

ARM support

Please provide us with the following information:

This issue is for a: (mark with an x)

- [ ] bug report -> please search issues before submitting
- [x] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

When customers start out on a vision solution, most begin on a smaller platform like the Jetson product line. These devices all run on an ARM 64-bit architecture; however, once the solution has been proven to work and they have executive buy-in, many of these customers will look to move to ASE for production. To meet these customer use cases for starting POCs, we need ARM support.

Any log messages given by the failure

Expected/desired behavior

Support ARM 64

OS and Version?

Ubuntu 18.04

Versions

Mention any other details that might be useful


Thanks! We'll be in touch soon.

Creating AML Table Dataset on Datastore hosted on ASH using AML Workspace UI

Please provide us with the following information:

This issue is for a: (mark with an x)

- [x] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

  1. Follow https://github.com/Azure/AML-Kubernetes/blob/master/docs/ASH/Train-AzureArc.md to create datastore on AML
  2. Go to AML workspace UI and click on dataset from the left side menu
  3. Create a Tabular Dataset from local file
  4. While creating, select datastore you created in step 1 and upload some example csv file from your local computer
  5. When you click next, the verification process starts on the uploaded file, but after a little while you will get the following error:

Any log messages given by the failure

Error: ScriptExecutionException was caused by StreamAccessException.
StreamAccessException was caused by NotFoundException.
Found no resources for the input provided: '[REDACTED]'

at new t (https://ml.azure.com/static/js/index.46397a41.chunk.js:2:914478)
at https://ml.azure.com/static/js/6.44223bf7.chunk.js:2:584684

Expected/desired behavior

Table data should be verified without any errors. When I use Azure storage instead of ASH storage to create another datastore in step 1 above, there is no issue in the verification process.

OS and Version?

Windows 10

Versions

Mention any other details that might be useful


Thanks! We'll be in touch soon.

intelligentedge/void-detection-cpu:1.0.0 does not exist

Please provide us with the following information:

This issue is for a: (mark with an x)

- [x] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

Deploy the out of stock solution following: https://github.com/Azure-Samples/azure-intelligent-edge-patterns/tree/master/edge-ai-void-detection

Any log messages given by the failure

Microsoft.Azure.Devices.Edge.Agent.Edgelet.EdgeletCommunicationException- Message:Error calling Create module voiddetectionbrainwave: Could not create module voiddetectionbrainwave
caused by: Could not pull image intelligentedge/void-detection-cpu:1.0.0

Expected/desired behavior

expected the module to start

OS and Version?

Ubuntu 18.04 LTS

Versions

Mention any other details that might be useful


Thanks! We'll be in touch soon.

processimages module error: ImportError: cannot import name 'BlockBlobService'

Please provide us with the following information:

This issue is for a: (mark with an x)

- [ ] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [X] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

Per 'Azure/azure-storage-python#389', the Python library has changed and BlockBlobService no longer exists. The processimages module, in blob.py, references 'from azure.storage.blob import BlockBlobService, PublicAccess'.

Using docker logs processimages you will see:

2020-01-23 16:26:27.922704: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer.so.6'; dlerror: libnvinfer.so.6: cannot open shared object file: No such file or directory
2020-01-23 16:26:27.922921: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer_plugin.so.6'; dlerror: libnvinfer_plugin.so.6: cannot open shared object file: No such file or directory
2020-01-23 16:26:27.922944: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:30] Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
Traceback (most recent call last):
File "./main.py", line 13, in <module>
from blob import BlobUploader
File "/app/blob.py", line 5, in <module>
from azure.storage.blob import BlockBlobService, PublicAccess
ImportError: cannot import name 'BlockBlobService'

To resolve this, add a version pin to requirements.txt, such as:

azure-storage-blob==2.1.0

Any log messages given by the failure

2020-01-23 16:26:27.922704: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer.so.6'; dlerror: libnvinfer.so.6: cannot open shared object file: No such file or directory
2020-01-23 16:26:27.922921: W tensorflow/stream_executor/platform/default/dso_loader.cc:55] Could not load dynamic library 'libnvinfer_plugin.so.6'; dlerror: libnvinfer_plugin.so.6: cannot open shared object file: No such file or directory
2020-01-23 16:26:27.922944: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:30] Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
Traceback (most recent call last):
File "./main.py", line 13, in <module>
from blob import BlobUploader
File "/app/blob.py", line 5, in <module>
from azure.storage.blob import BlockBlobService, PublicAccess
ImportError: cannot import name 'BlockBlobService'

Expected/desired behavior

OS and Version?

Ubuntu 18.04

Versions

Mention any other details that might be useful


Add OpenVINO model problem in VOE (factory-ai-vision): when I replace a model with another and reuse the same name, the new model does not work in Deployment (no bounding boxes)

Please provide us with the following information:

This issue is for a: (mark with an x)

- [x] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

Please follow the Linux commands below.

  1. Go into the model manager docker container.
    --docker exec -u 0 -it modelmanager bash
  2. Go to the downloader
    --cd /app/downloader/tools/downloader
  3. Set up the environment
    --./setup.py install
  4. Download the model
    --./downloader.py --name face-detection-retail-0005
  5. Copy the model to the /workspace location.
    --cp /app/downloader/tools/downloader/intel/face-detection-retail-0005/FP32/face-detection-retail-0005.bin /workspace/face-detection-retail-0004/1/
    --cp /app/downloader/tools/downloader/intel/face-detection-retail-0005/FP32/face-detection-retail-0005.xml /workspace/face-detection-retail-0004/1/
  6. Remove the face-detection-retail-0004 .bin and .xml files
    --rm /workspace/face-detection-retail-0004/1/face-detection-retail-0004.xml
    --rm /workspace/face-detection-retail-0004/1/face-detection-retail-0004.bin
  7. Rename the face-detection-retail-0005 .bin and .xml files to face-detection-retail-0004
    --cd /workspace/face-detection-retail-0004/1
    --mv face-detection-retail-0005.bin face-detection-retail-0004.bin
    --mv face-detection-retail-0005.xml face-detection-retail-0004.xml

Any log messages given by the failure

The ovmsserver docker container log shows “Validation failed Node: Face Detection has no input with name: data”.
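The log's complaint that the "Face Detection" node has no input named data suggests the serving configuration still references the 0004 model's input tensor name, while the swapped-in 0005 IR uses a different one. A runtime-free way to compare input names is to parse the IR .xml for Parameter layers; a sketch (valid for IR v10, where model inputs are layers of type Parameter — older IR versions mark inputs differently):

```python
import xml.etree.ElementTree as ET

def ir_input_names(xml_text):
    """Return the names of input (Parameter) layers in an OpenVINO IR v10 xml."""
    root = ET.fromstring(xml_text)
    return [layer.get("name")
            for layer in root.iter("layer")
            if layer.get("type") == "Parameter"]

# Tiny illustrative IR fragment (not a real model file):
SAMPLE_IR = """<net name="demo" version="10">
  <layers>
    <layer id="0" name="input" type="Parameter"/>
    <layer id="1" name="conv1" type="Convolution"/>
  </layers>
</net>"""

print(ir_input_names(SAMPLE_IR))  # -> ['input']
```

Running this against both face-detection-retail-0004.xml and the renamed 0005 file would show whether the input tensor names differ, which would explain the validation failure.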

Expected/desired behavior

Deployment shows object detection working, with no error logs.

OS and Version?

Linux ubuntu 20.04

Versions

Commit version :
OpenVINO model problem in VOE.pptx
e47397d

Mention any other details that might be useful

The source is in factory-ai-vision


Thanks! We'll be in touch soon.

Updating does not keep previous settings

Please provide us with the following information:

This issue is for a: (mark with an x)

- [ x] bug report -> please search issues before submitting
- [ x] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

Updating the modules from 1.1.0 to 1.1.5 does not keep camera settings

Any log messages given by the failure

Expected/desired behavior

All settings are maintained from one version to the next.

OS and Version?

ubuntu 18.04

Versions

1.1.0 -- 1.1.5

Mention any other details that might be useful


Thanks! We'll be in touch soon.

module fails to deploy if we pass an empty CV key and endpoint

How to repro: try to deploy a module using the batch file, and press Enter when the prompt asks for the following:

You can use your existing Custom Vision service, or create a new one
Would you like to use an existing Custom Vision Service? Y
Endpoint and key information can be found at www.customvision.ai - settings(top right corner)
Please enter your Custom Vision endpoint:
Please enter your Custom Vision Key:

Logs are here:

at Microsoft.Azure.Devices.Edge.Agent.Edgelet.Models.EnvVar..ctor(String key, String value) in /home/vsts/work/1/s/edge-agent/src/Microsoft.Azure.Devices.Edge.Agent.Edgelet/models/EnvVar.cs:line 8
at Microsoft.Azure.Devices.Edge.Agent.Edgelet.Commands.CreateOrUpdateCommand.<>c.b__14_0(KeyValuePair`2 m) in /home/vsts/work/1/s/edge-agent/src/Microsoft.Azure.Devices.Edge.Agent.Edgelet/commands/CreateOrUpdateCommand.cs:line 95
at System.Linq.Enumerable.SelectEnumerableIterator`2.ToList()
at System.Linq.Enumerable.ToList[TSource](IEnumerable`1 source)
at Microsoft.Azure.Devices.Edge.Agent.Edgelet.Commands.CreateOrUpdateCommand.GetEnvVars(IDictionary`2 moduleEnvVars, IModuleIdentity identity, IConfigSource configSource) in /home/vsts/work/1/s/edge-agent/src/Microsoft.Azure.Devices.Edge.Agent.Edgelet/commands/CreateOrUpdateCommand.cs:line 95
at Microsoft.Azure.Devices.Edge.Agent.Edgelet.Commands.CreateOrUpdateCommand.Build(IModuleManager moduleManager, IModule module, IModuleIdentity identity, IConfigSource configSource, Object settings, Operation operation) in /home/vsts/work/1/s/edge-agent/src/Microsoft.Azure.Devices.Edge.Agent.Edgelet/commands/CreateOrUpdateCommand.cs:line 198
at Microsoft.Azure.Devices.Edge.Agent.Edgelet.EdgeletCommandFactory`1.UpdateAsync(Option`1 current, IModuleWithIdentity next, IRuntimeInfo runtimeInfo, Boolean start)
at Microsoft.Azure.Devices.Edge.Agent.Core.MetricsCommandFactory.UpdateAsync(IModule current, IModuleWithIdentity next, IRuntimeInfo runtimeInfo) in /home/vsts/work/1/s/edge-agent/src/Microsoft.Azure.Devices.Edge.Agent.Core/metrics/MetricsCommandFactory.cs:line 38
at Microsoft.Azure.Devices.Edge.Agent.Core.LoggingCommandFactory.UpdateAsync(IModule current, IModuleWithIdentity next, IRuntimeInfo runtimeInfo) in /home/vsts/work/1/s/edge-agent/src/Microsoft.Azure.Devices.Edge.Agent.Core/LoggingCommandFactory.cs:line 23
at Microsoft.Azure.Devices.Edge.Agent.Core.Planners.HealthRestartPlanner.ProcessAddedUpdatedModules(IList`1 modules, IImmutableDictionary`2 moduleIdentities, Func`2 createUpdateCommandMaker) in /home/vsts/work/1/s/edge-agent/src/Microsoft.Azure.Devices.Edge.Agent.Core/planners/HealthRestartPlanner.cs:line 250
at Microsoft.Azure.Devices.Edge.Agent.Core.Planners.HealthRestartPlanner.PlanAsync(ModuleSet desired, ModuleSet current, IRuntimeInfo runtimeInfo, IImmutableDictionary`2 moduleIdentities) in /home/vsts/work/1/s/edge-agent/src/Microsoft.Azure.Devices.Edge.Agent.Core/planners/HealthRestartPlanner.cs:line 93
at Microsoft.Azure.Devices.Edge.Agent.Core.Agent.ReconcileAsync(CancellationToken token) in /home/vsts/work/1/s/edge-agent/src/Microsoft.Azure.Devices.Edge.Agent.Core/Agent.cs:line 137
<4> 2020-07-22 19:23:10.030 +00:00 [WRN] - Reconcile failed because of the an exception
System.ArgumentException: value is null or whitespace.
at Microsoft.Azure.Devices.Edge.Util.Preconditions.CheckNonWhiteSpace(String value, String paramName) in /home/vsts/work/1/s/edge-util/src/Microsoft.Azure.Devices.Edge.Util/Preconditions.cs:line 191
at Microsoft.Azure.Devices.Edge.Agent.Edgelet.Models.EnvVar..ctor(String key, String value)
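The trace shows EnvVar's constructor rejecting a null-or-whitespace value: the empty endpoint/key typed at the prompt ends up as an empty environment variable in the deployment manifest. The deployment script could guard against this before generating the manifest; a minimal sketch (the function name mirrors the CheckNonWhiteSpace precondition seen in the log, but this helper is hypothetical, not part of the repo):

```python
def check_non_whitespace(value, name):
    """Raise if value is None, empty, or whitespace-only (mirrors the edge-agent precondition)."""
    if value is None or not value.strip():
        raise ValueError(f"{name} is null or whitespace.")
    return value

# e.g. validate prompt input before writing it into the deployment manifest:
# endpoint = check_non_whitespace(input("Custom Vision endpoint: "), "Custom Vision endpoint")
```

Failing fast at the prompt, with a clear message, would avoid the opaque edge-agent reconcile loop failure above.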

Please provide us with the following information:

This issue is for a: (mark with an x)

- [ ] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

Any log messages given by the failure

Expected/desired behavior

OS and Version?

Windows 7, 8 or 10. Linux (which distribution). macOS (Yosemite? El Capitan? Sierra?)

Versions

Mention any other details that might be useful


Thanks! We'll be in touch soon.

Solution not displaying camera real-time video RTSP feeds.

Please provide us with the following information:

This issue is for a: (mark with an x)

- [x ] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

Configure the location, camera, and "part" for the solution. On the job configuration tab, click Configure. Once the configuration is done, the camera tab opens again. On that page I get an error stating that inferencing is not working. Port 5000 is open on the IoT Edge unit.

Any log messages given by the failure

Expected/desired behavior

OS and Version?

Ubuntu 18.04 fresh install

Versions

Mention any other details that might be useful


Thanks! We'll be in touch soon.

Connecting a camera via RTSP issues

Please provide us with the following information:

This issue is for a: (mark with an x)

- [ x] bug report -> please search issues before submitting
- [ ] feature request
- [ x] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

Configure a new camera and provide it with an upper-case scheme URL: RTSP://user:password@ipaddress:port

If you use lower case rtsp://user:password@ipaddress:port this works without issues.
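URL schemes are case-insensitive per RFC 3986, so an upper-case RTSP:// should be accepted (or normalized) rather than rejected. A minimal validation sketch (the helper name is ours, not the repo's actual validator):

```python
from urllib.parse import urlsplit

def is_rtsp_url(url):
    """Accept rtsp URLs regardless of scheme case (schemes are case-insensitive, RFC 3986)."""
    return urlsplit(url).scheme.lower() == "rtsp"

print(is_rtsp_url("RTSP://user:password@192.0.2.1:554/stream"))  # -> True
print(is_rtsp_url("rtsp://user:password@192.0.2.1:554/stream"))  # -> True
```

Lower-casing the scheme before comparison would make both forms of the URL work.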

Any log messages given by the failure

Error: rtsp is not valid

Expected/desired behavior

Camera added to the solution

OS and Version?

ubuntu 18.04

Versions

1.1.5

Mention any other details that might be useful


Thanks! We'll be in touch soon.

getting 503 error on UX when setting up training; stuck with same cam

Logs from the inference module:

INFO:azure.iot.device.iothub.auth.iotedge_authentication_provider:Using IoTEdge authentication for {yadavmAiMLGpu.azure-devices.net, garvitaedge, InferenceModule}
INFO:azure.iot.device.iothub.auth.iotedge_authentication_provider:Using IoTEdge authentication for {yadavmAiMLGpu.azure-devices.net, garvitaedge, InferenceModule}
INFO:azure.iot.device.iothub.auth.iotedge_authentication_provider:Using IoTEdge authentication for {yadavmAiMLGpu.azure-devices.net, garvitaedge, InferenceModule}
INFO:azure.iot.device.iothub.auth.iotedge_authentication_provider:Using IoTEdge authentication for {yadavmAiMLGpu.azure-devices.net, garvitaedge, InferenceModule}
INFO:azure.iot.device.iothub.sync_clients:Sending message to output:metrics...
INFO:azure.iot.device.common.pipeline.pipeline_stages_mqtt:MQTTTransportStage(ConnectOperation): connecting
INFO:azure.iot.device.common.mqtt_transport:connecting to mqtt broker
INFO:azure.iot.device.common.mqtt_transport:Connect using port 8883 (TCP)
INFO:azure.iot.device.common.mqtt_transport:connected with result code: 5
INFO:azure.iot.device.common.mqtt_transport:disconnected with result code: 5
INFO:azure.iot.device.common.pipeline.pipeline_stages_mqtt:MQTTTransportStage: _on_mqtt_connection_failure called: Connection Refused: not authorised. caused by None
ERROR:azure.iot.device.common.pipeline.pipeline_ops_base:ConnectOperation: completing with error Connection Refused: not authorised. caused by None
INFO:azure.iot.device.common.mqtt_transport:Forcing paho disconnect to prevent it from automatically reconnecting
ERROR:azure.iot.device.common.pipeline.pipeline_stages_base:ConnectionLockStage(ConnectOperation): op failed. Unblocking queue with error: Connection Refused: not authorised. caused by None
INFO:azure.iot.device.common.pipeline.pipeline_stages_base:ConnectionLockStage(ConnectOperation): processing 0 items in queue
ERROR:azure.iot.device.common.pipeline.pipeline_stages_base:AutoConnectStage(MQTTPublishOperation): Connection failed. Completing with failure because of connection failure: Connection Refused: not authorised. caused by None
ERROR:azure.iot.device.common.pipeline.pipeline_ops_base:MQTTPublishOperation: completing with error Connection Refused: not authorised. caused by None
ERROR:azure.iot.device.common.pipeline.pipeline_ops_base:SendOutputEventOperation: completing with error Connection Refused: not authorised. caused by None
INFO:azure.iot.device.common.pipeline.pipeline_stages_mqtt:MQTTTransportStage: _on_mqtt_disconnect called: The connection was refused. caused by None
WARNING:azure.iot.device.common.pipeline.pipeline_stages_mqtt:MQTTTransportStage: disconnection was unexpected
ERROR:azure.iot.device.common.evented_callback:Callback completed with error Connection Refused: not authorised. caused by None
azure.iot.device.common.transport_exceptions.UnauthorizedError: Connection Refused: not authorised. caused by None
INFO:azure.iot.device.common.handle_exceptions:Unexpected disconnection. Safe to ignore since other stages will reconnect.
[INFO] sending metrics to iothub
[INFO] Sending Image to relabeling person 0 [{"x1": 412, "x2": 811, "y1": 0, "y2": 492}]
[ERROR] Failed to update image for relabeling
[INFO] Sending Image to relabeling person 0 [{"x1": 403, "x2": 818, "y1": 0, "y2": 495}]
[ERROR] Failed to update image for relabeling
[INFO] Sending Image to relabeling person 0 [{"x1": 407, "x2": 831, "y1": 0, "y2": 436}]
[ERROR] Failed to update image for relabeling
[INFO] Sending Image to relabeling person 0 [{"x1": 418, "x2": 827, "y1": 0, "y2": 423}]
[ERROR] Failed to update image for relabeling
[INFO] Sending Image to relabeling person 0 [{"x1": 399, "x2": 832, "y1": 0, "y2": 487}]
[ERROR] Failed to update image for relabeling
[INFO] Sending Image to relabeling person 0 [{"x1": 354, "x2": 751, "y1": 0, "y2": 470}]
[ERROR] Failed to update image for relabeling
[INFO] Sending Image to relabeling person 0 [{"x1": 256, "x2": 564, "y1": 0, "y2": 519}]
[ERROR] Failed to update image for relabeling
[INFO] Sending Image to relabeling person 0 [{"x1": 361, "x2": 756, "y1": 34, "y2": 500}]
[ERROR] Failed to update image for relabeling
[INFO] Sending Image to relabeling person 0 [{"x1": 358, "x2": 756, "y1": 15, "y2": 515}]
[ERROR] Failed to update image for relabeling
[INFO] Sending Image to relabeling person 0 [{"x1": 288, "x2": 678, "y1": 0, "y2": 481}]
[ERROR] Failed to update image for relabeling
[INFO] Sending Image to relabeling person 0 [{"x1": 367, "x2": 751, "y1": 19, "y2": 515}]
[ERROR] Failed to update image for relabeling
[INFO] Sending Image to relabeling person 0 [{"x1": 354, "x2": 760, "y1": 10, "y2": 524}]
[ERROR] Failed to update image for relabeling
[INFO] Sending Image to relabeling person 0 [{"x1": 353, "x2": 760, "y1": 18, "y2": 518}]
[ERROR] Failed to update image for relabeling
[INFO] Sending Image to relabeling person 0 [{"x1": 360, "x2": 754, "y1": 13, "y2": 522}]
[ERROR] Failed to update image for relabeling
[INFO] Sending Image to relabeling person 0 [{"x1": 353, "x2": 758, "y1": 15, "y2": 519}]
[ERROR] Failed to update image for relabeling
[INFO] Sending Image to relabeling person 0 [{"x1": 359, "x2": 753, "y1": 14, "y2": 522}]
[ERROR] Failed to update image for relabeling
[INFO] Sending Image to relabeling person 0 [{"x1": 361, "x2": 754, "y1": 17, "y2": 519}]
[ERROR] Failed to update image for relabeling
[INFO] Sending Image to relabeling person 0 [{"x1": 295, "x2": 668, "y1": 0, "y2": 487}]
[ERROR] Failed to update image for relabeling
[INFO] Sending Image to relabeling person 0 [{"x1": 284, "x2": 675, "y1": 0, "y2": 491}]
[ERROR] Failed to update image for relabeling
[INFO] Sending Image to relabeling person 0 [{"x1": 293, "x2": 668, "y1": 0, "y2": 490}]
[ERROR] Failed to update image for relabeling
[INFO] Sending Image to relabeling person 0 [{"x1": 294, "x2": 668, "y1": 0, "y2": 497}]
[ERROR] Failed to update image for relabeling
[ERROR] Failed to send message to iothub
INFO:azure.iot.device.iothub.sync_clients:Connection State - Disconnected
INFO:azure.iot.device.common.handle_exceptions:azure.iot.device.common.transport_exceptions.ConnectionFailedError: The connection was refused. caused by None

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
File "/opt/miniconda/lib/python3.6/site-packages/azure/iot/device/common/handle_exceptions.py", line 43, in swallow_unraised_exception
raise e
azure.iot.device.common.transport_exceptions.ConnectionDroppedError: None caused by The connection was refused. caused by None

INFO:azure.iot.device.iothub.sync_clients:Cleared all pending method requests due to disconnect
INFO:azure.iot.device.iothub.auth.iotedge_authentication_provider:Using IoTEdge authentication for {yadavmAiMLGpu.azure-devices.net, garvitaedge, InferenceModule}
INFO:azure.iot.device.iothub.auth.iotedge_authentication_provider:Using IoTEdge authentication for {yadavmAiMLGpu.azure-devices.net, garvitaedge, InferenceModule}
INFO:azure.iot.device.iothub.auth.iotedge_authentication_provider:Using IoTEdge authentication for {yadavmAiMLGpu.azure-devices.net, garvitaedge, InferenceModule}

Error configuring the Cognitive Services Endpoint/Key (incl. solution)

This issue is for a: (mark with an x)

- [ ] bug report -> please search issues before submitting
- [ ] feature request
- [x] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

  • Deploy the solution to an Ubuntu 18.04 machine.
  • Go to the settings page and configure the Endpoint and Key
  • After hitting the "Update" button, the error Request failed with status code 500 appears after a couple of seconds.
  • docker exec -it xyz bash
  • ping westeurope.api.cognitive.microsoft.com does not resolve

Expected/desired behavior

The endpoint and key will be saved and the available models loaded.

OS and Version?

Ubuntu 18.04

Versions

1.0.9.3

Solution

The docker/moby daemon needs to be configured to use a specific DNS server. See https://l-lin.github.io/post/2018/2018-09-03-docker_ubuntu_18_dns/

Create/edit the file /etc/docker/daemon.json and add your DNS server.
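A minimal example of that file (the resolver addresses below are placeholders for illustration — substitute the DNS server that works on your network):

```json
{
  "dns": ["8.8.8.8", "1.1.1.1"]
}
```

Then restart the daemon (sudo systemctl restart docker) and re-test name resolution from inside the container.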

cannot access customvision from factoryai

cannot access my endpoint; got the error below

File "/usr/local/lib/python3.8/site-packages/requests/adapters.py", line 516, in send
raise ConnectionError(e, request=request)
msrest.exceptions.ClientRequestError: Error occurred in request., ConnectionError: HTTPSConnectionPool(host='westus2.api.cognitive.microsoft.com', port=443): Max retries exceeded with url: /customvision/v3.0/training/domains (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f6bad3b6370>: Failed to establish a new connection: [Errno -2] Name or service not known'))
[09/Jul/2020 13:26:23] "PUT /api/settings/1/ HTTP/1.1" 500 23167
[09/Jul/2020 13:26:23] "GET /setting HTTP/1.1" 200 2273

No Image frame display on the camera page

As shown in the attached screenshot, no image frame is displayed on the page although inference is still running. There are no errors in the edge module or the web site. Any idea about that? I have tried both Chrome and Edge, with the same result.

image

Please provide us with the following information:

This issue is for a: (mark with an x)

- [ ] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

Any log messages given by the failure

Expected/desired behavior

OS and Version?

Windows 7, 8 or 10. Linux (which distribution). macOS (Yosemite? El Capitan? Sierra?)

Versions

Mention any other details that might be useful


Thanks! We'll be in touch soon.

cannot install with old az cli version; az version needs to be greater than 2.8

Please provide us with the following information:

This issue is for a: (mark with an x)

- [ ] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

Any log messages given by the failure

Expected/desired behavior

OS and Version?

Windows 7, 8 or 10. Linux (which distribution). macOS (Yosemite? El Capitan? Sierra?)

Versions

Mention any other details that might be useful


Thanks! We'll be in touch soon.

[FactoryAI] Using a local password protected IP camera as source for LVA not working

Please provide us with the following information:

This issue is for a: (mark with an x)

- [x] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

  1. Deploy locally on device using LVA path (not opencv)
  2. Navigate to local web app at port 8181
  3. Create new project
  4. Add a local IP camera that has a username and password set up (here I used an Axis camera) - so the RTSP URL would look like this example format: rtsp://myuser:mypassword@<camera-ip>/axis-media/media.amp (this is not my real one btw :-) )
  5. Deploy with this project/model and live IP camera
  6. Wait for model to train
  7. Check iotedge logs lvaEdge on device
  8. Observe that you get a Permission denied 401 error and that the username/password have been stripped from the RTSP URL (which is probably good, but it indicated to me that this seems to be a permission error and somehow my username and password are not being passed appropriately)

Any log messages given by the failure

Check of IoT Edge logs for LVA module shows:

Context: activityId=<some id>
<6> 2021-03-01 21:05:32.396 +00:00 Events: {
  "id": "<some id>",
  "topic": "/subscriptions/<my azure sub>/resourcegroups/<my rg>/providers/microsoft.media/mediaservices/<my ams account>",
  "subject": "/graphInstances/15/sources/rtspSource",
  "eventType": "Microsoft.Media.Graph.Diagnostics.ProtocolError",
  "eventTime": "2021-03-01T21:05:32.387Z",
  "data": {
    "code": "401",
    "target": "rtsp://<my network camera ip>:554/axis-media/media.amp",
    "protocol": "rtsp"
  },
  "dataVersion": "1.0"
};

Expected/desired behavior

In UI, see my resulting identifications in Deployment pane.

OS and Version?

  • UI viewed on macOS Catalina with Safari and Chrome.
  • Solution deployed to NVIDIA Xavier AGX Jetson.

Versions

  • IoT Edge 1.1

Mention any other details that might be useful


Thanks! We'll be in touch soon.

cc @initmahesh @waitingkuo

UX improvements

Please provide us with the following information:

This issue is for a: (mark with an x)

- [ ] bug report -> please search issues before submitting
- [x ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

The UX is a bit confusing; we need more clarity around what each section does. When looking at the term "location", what is it describing? "Camera" is clear enough, but "part" is again ambiguous. More clarification would be better.

Any log messages given by the failure

Expected/desired behavior

clear understanding of the different ux sections.

OS and Version?

All

Versions

Mention any other details that might be useful


Thanks! We'll be in touch soon.

[FactoryAI] ASE with FPGA DNS issue

I deployed FactoryAI to ASE-FPGA

image

The Docker engine on the device was built around 2 years ago. In that version, the built-in DNS is case-sensitive.
This was fixed about 1.5 years ago: moby/moby#21169

I wonder if it's possible to update the docker version.
thanks!

Typo in the advanced tab of the deployment UI

Please provide us with the following information:

This issue is for a: (mark with an x)

- [ x ] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

  1. Deploy the model in the UI
  2. In the advanced tab: Minimum Images to Store should instead be Maximum Images to Store.

Any log messages given by the failure

Expected/desired behavior

The UI should say Maximum Images to Store, since it refers to the maximum number of images that should be saved for future training.

OS and Version?

OS: Windows 10, Browser: Microsoft Edge Version 87.0.664.66 (Official build) (64-bit)

Versions

Mention any other details that might be useful


Thanks! We'll be in touch soon.

failed during retrain with error on factoryai

Getting this issue

  • When I am trying to retrain,

vision_on_edge.azure_training.models.Project.DoesNotExist: Project matching query does not exist.
[09-Jul-2020 15:55:18] ERROR django.request : Internal Server Error: /api/projects/1/export
Traceback (most recent call last):
File "/usr/local/lib/python3.8/site-packages/django/core/handlers/exception.py", line 34, in inner
response = get_response(request)
File "/usr/local/lib/python3.8/site-packages/django/core/handlers/base.py", line 115, in _get_response
response = self.process_exception_by_middleware(e, request)
File "/usr/local/lib/python3.8/site-packages/django/core/handlers/base.py", line 113, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/usr/local/lib/python3.8/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view
return view_func(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/django/views/generic/base.py", line 71, in view
return self.dispatch(request, *args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/rest_framework/views.py", line 505, in dispatch
response = self.handle_exception(exc)
File "/usr/local/lib/python3.8/site-packages/rest_framework/views.py", line 465, in handle_exception
self.raise_uncaught_exception(exc)
File "/usr/local/lib/python3.8/site-packages/rest_framework/views.py", line 476, in raise_uncaught_exception
raise exc
File "/usr/local/lib/python3.8/site-packages/rest_framework/views.py", line 502, in dispatch
response = handler(request, *args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/rest_framework/decorators.py", line 50, in handler
return func(*args, **kwargs)
File "/app/vision_on_edge/azure_training/api/views.py", line 250, in export
project_obj = Project.objects.get(pk=project_id)
File "/usr/local/lib/python3.8/site-packages/django/db/models/manager.py", line 82, in manager_method
return getattr(self.get_queryset(), name)(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/django/db/models/query.py", line 415, in get
raise self.model.DoesNotExist(
vision_on_edge.azure_training.models.Project.DoesNotExist: Project matching query does not exist.
[09/Jul/2020 15:55:18] "GET /api/projects/1/export HTTP/1.1" 500 15824

Access Physical cameras as well as RTSP

This issue is for a: (mark with an x)

- [ ] bug report -> please search issues before submitting
- [x] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

We are trying to use the Factory-AI-Vision sample; however, RTSP streams have noticeable lag (1-2 seconds in our tests). For our scenario we need as close to real-time as possible. Would it be possible to specify a camera that is directly connected to the host (nVidia Jetson Nano), say with host:0 or something, instead of an RTSP stream URL when adding a camera?

Expected/desired behavior

I've connected my camera to the Jetson in graphical mode and run the following code with the real-time performance we are looking for so I think it is technologically feasible.

import cv2

cap = cv2.VideoCapture(0)  # device index 0 = first local camera

if not cap.isOpened():
    print("Error opening video stream or file")

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    cv2.imshow('Frame', frame)
    if cv2.waitKey(25) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

OS and Version?

nVidia Jetson L4T Linux (Jetpack 4.4) running IoTEdge

Mention any other details that might be useful

A hint at the "HostConfig" settings to mimic docker run --device=/dev/video0 in the manifest, so I can expose the camera to InferenceModule (I assume it's the InferenceModule) on IoTEdge, would be helpful as well.
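For reference, the Docker HostConfig equivalent of --device=/dev/video0 inside a module's createOptions in an IoT Edge deployment manifest would look roughly like this (a sketch of Docker's container-create HostConfig schema; the device path and permissions shown are illustrative):

```json
{
  "HostConfig": {
    "Devices": [
      {
        "PathOnHost": "/dev/video0",
        "PathInContainer": "/dev/video0",
        "CgroupPermissions": "rwm"
      }
    ]
  }
}
```

With the device mapped into the container this way, cv2.VideoCapture(0) inside the module should be able to open the local camera.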

FaceServiceHelper with missing Face Match methods

- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

While using the latest FaceServiceHelper, we are missing "VerifyFaceToFace" and "VerifyFaceToPerson" functionality. In earlier versions of FaceServiceHelper which used "Microsoft.ProjectOxford", we used to have "VerifyAsync" method which uses "IsIdentical" to find if two faces are of same person.

Is there any plan to include these methods? Or has this functionality moved somewhere else? If a roadmap is provided, it'll be helpful.

Any log messages given by the failure

Unable to perform face match in latest version.

Expected/desired behavior

Manually adding faceClient.Face.VerifyFaceToFaceWithHttpMessagesAsync(firstPerson, secondPerson) in the FaceServiceHelper causes "BadArgument" error to the HTTP request. We expect a way to implement Face Match.

OS and Version?

Windows 10.

Versions

Latest

Mention any other details that might be useful

Azure stack hub storage blob not working with KFServing storage uri

Please provide us with the following information:

This issue is for a: (mark with an x)

- [ ] bug report -> please search issues before submitting
- [x ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

Any log messages given by the failure

Expected/desired behavior

OS and Version?

Windows 7, 8 or 10. Linux (which distribution). macOS (Yosemite? El Capitan? Sierra?)

Versions

Mention any other details that might be useful

KFServing uses storageUri to access external data in its inferenceService custom resource definition. It supports azure storage blob uri patterned as _BLOB_RE = "https://(.+?).blob.core.windows.net/(.+)". But ASH storage blob uri doesn't follow this pattern.
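To illustrate the mismatch: the quoted pattern anchors on the public blob.core.windows.net suffix, while ASH blob endpoints carry a stack-specific region/FQDN suffix. A relaxed pattern that accepts both (a sketch — only _BLOB_RE is quoted from KFServing; the relaxed regex is our suggestion, not KFServing code):

```python
import re

# Pattern quoted from the issue (public Azure blob endpoints only):
PUBLIC_BLOB_RE = re.compile(r"https://(.+?)\.blob\.core\.windows\.net/(.+)")

# Relaxed sketch that also matches Azure Stack Hub endpoints such as
# https://<account>.blob.<region>.<stack-fqdn>/<container>/<path>:
ANY_BLOB_RE = re.compile(r"https://(.+?)\.blob\.([^/]+)/(.+)")

ash_url = "https://stackstorage.blob.orlando.azurestack.corp.microsoft.com/datasets/x.csv"
print(bool(PUBLIC_BLOB_RE.match(ash_url)))  # -> False
print(bool(ANY_BLOB_RE.match(ash_url)))     # -> True
```

A pattern along these lines (or a configurable blob-endpoint suffix) would let the storageUri initializer recognize ASH-hosted blobs.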


Thanks! We'll be in touch soon.

AML Datastore (hosted on ASH) download error for connected training on ASH

Please provide us with the following information:

This issue is for a: (mark with an x)

- [ x ] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

  1. Follow https://github.com/Azure/AML-Kubernetes/blob/master/docs/ASH/Train-AzureArc.md to create a datastore on AML
  2. Run all cells up to the "Create or attach existing ArcKubernetesCompute" section of this sample notebook: https://github.com/Azure/AML-Kubernetes/blob/master/docs/ASH/notebooks/distributed-tf2-cifar10/distributed-tf2-cifar10.ipynb
  3. Run the following command in the same notebook:
dataset = Dataset.get_by_name(ws, name=dataset_name)
dataset.download(target_path='.', overwrite=False)
  4. You will get the error below:

Any log messages given by the failure

UserErrorException: UserErrorException:
Message: Execution failed in operation 'download' for Dataset(id='18838f8e-8dd0-4de4-b922-cc3071d2a35f', name='CIFAR-10', version=1, error_code=ScriptExecution.StreamAccess.Validation,error_message=ScriptExecutionException was caused by StreamAccessException.
StreamAccessException was caused by ValidationException.
'GetHttpResourceStream' for '[REDACTED]' on storage failed with status code 'BadRequest' (The value for one of the HTTP headers is not in the correct format.), client request ID 'c2a3a196-72a9-42f1-9166-6e946f3e32c3', request ID '7c7d8a46-f423-0972-4ded-92df603efee9'. Error message: [REDACTED]Make sure server has no special header requirements or try using different datasource type.
| session_id=436df111-0f04-4071-bd7d-6feaf688878a) ErrorCode: ScriptExecution.StreamAccess.Validation
InnerException
Error Code: ScriptExecution.StreamAccess.Validation
Validation Error Code: BadRequest
Validation Target: HttpRequest
Failed Step: d05d7c55-ae86-4734-858e-e10368436345
Error Message: ScriptExecutionException was caused by StreamAccessException.
StreamAccessException was caused by ValidationException.
'GetHttpResourceStream' for 'https://stackstorage.blob.orlando.azurestack.corp.microsoft.com/datasets/UI/01-15-2021_014825_UTC/dataset.csv' on storage failed with status code 'BadRequest' (The value for one of the HTTP headers is not in the correct format.), client request ID 'c2a3a196-72a9-42f1-9166-6e946f3e32c3', request ID '7c7d8a46-f423-0972-4ded-92df603efee9'. Error message: InvalidHeaderValue: The value for one of the HTTP headers is not in the correct format.
RequestId:7c7d8a46-f423-0972-4ded-92df603efee9
Time:2021-01-22T00:22:52.5839090Z (header: x-ms-version, value: 2019-12-12). Make sure server has no special header requirements or try using different datasource type.
| session_id=436df111-0f04-4071-bd7d-6feaf688878a
ErrorResponse
{
"error": {
"code": "UserError",
"message": "Execution failed in operation 'download' for Dataset(id='18838f8e-8dd0-4de4-b922-cc3071d2a35f', name='CIFAR-10', version=1, error_code=ScriptExecution.StreamAccess.Validation,error_message=ScriptExecutionException was caused by StreamAccessException.\r\n StreamAccessException was caused by ValidationException.\r\n 'GetHttpResourceStream' for '[REDACTED]' on storage failed with status code 'BadRequest' (The value for one of the HTTP headers is not in the correct format.), client request ID 'c2a3a196-72a9-42f1-9166-6e946f3e32c3', request ID '7c7d8a46-f423-0972-4ded-92df603efee9'. Error message: [REDACTED]Make sure server has no special header requirements or try using different datasource type.\r\n| session_id=436df111-0f04-4071-bd7d-6feaf688878a) ErrorCode: ScriptExecution.StreamAccess.Validation"
}
}

Expected/desired behavior

Dataset files should get downloaded without any errors. Instead of ASH storage, I used Azure storage for creating the datastore in step 1 above and there was no issue.

OS and Version?

Windows 10

Versions

Mention any other details that might be useful


Thanks! We'll be in touch soon.

Bug 37212463: [EFLOW][Azure Percept HCI] The screen seems stuck that causes the confusion of if function break when deployed VOE solution and choose “counting objects” template

Please provide us with the following information:

This issue is for a: (mark with an x)

- [x] bug report -> please search issues before submitting
- [ ] feature request
- [ ] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

Testing Steps:

1. Complete the Azure Percept workload on Azure Stack HCI.
2. Deploy the Vision on Edge (VoE) solution.
3. Open your browser and navigate to http://<IP address>:8181.
4. On the VoE portal, click the "Scenario library" button, choose the "Counting objects" template, and click "Deploy scenario >".
5. Use the default settings, then click the "Deploy" button.

Any log messages given by the failure

Test Result:

In step 5, the stream is short and repeats. Before it reloads, however, the screen appears stuck, which causes confusion about whether the function is broken.

(for example, see the attached counting_objects.avi, 00:32–01:00)
More details here: [https://microsoft.visualstudio.com/OS/_workitems/edit/37212463]

Expected/desired behavior

Please shorten the gap between two stream loops, or display a message such as "Waiting to reload" to avoid the confusion.
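The suggested fix amounts to mapping the player's state to an overlay message while the stream reloads. A minimal sketch, with hypothetical state names and function name (not VoE's actual player API, and shown in Python only to model the logic):

```python
def stream_overlay_message(state: str):
    """Pick an overlay message for the stream player, or None while playing.

    State names ("playing", "ended", "reloading") are illustrative assumptions.
    """
    if state == "playing":
        return None  # no overlay while the stream runs
    if state in ("ended", "reloading"):
        # Telling the user a reload is in progress avoids the
        # "is it stuck or broken?" confusion described above.
        return "Waiting for the stream to reload..."
    return "Connecting..."
```

The player would call this whenever its state changes and render the returned text over the last frame instead of leaving it frozen.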

OS and Version?

Windows 7, 8 or 10. Linux (which distribution). macOS (Yosemite? El Capitan? Sierra?)

Versions

Mention any other details that might be useful


Thanks! We'll be in touch soon.

Need prerequisite detail in readme

Please provide us with the following information:

This issue is for a: (mark with an x)

- [ ] bug report -> please search issues before submitting
- [ ] feature request
- [x] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Under the prerequisites for edge-ai-void-detection, it says "Ensure that the Data Box Edge can run Project Brainwave workloads," but I couldn't find instructions for how to do this.

Definition of "Part"

Please provide us with the following information:

This issue is for a: (mark with an x)

- [ ] bug report -> please search issues before submitting
- [x] feature request
- [x] documentation issue or request
- [ ] regression (a behavior that used to work and stopped in a new release)

Minimal steps to reproduce

In the side menu, the "Part" label is unclear. It should be relabeled to something like "Model Reference" or "Model Training".

Any log messages given by the failure

No

Expected/desired behavior

It is unclear what "Part" is referring to.

OS and Version?

Windows 7, 8 or 10. Linux (which distribution). macOS (Yosemite? El Capitan? Sierra?)

Versions

Mention any other details that might be useful


Thanks! We'll be in touch soon.
