
tech-immersion-data-ai's Introduction

Tech Immersion Mega Data & AI Workshop

Setup: Connecting to your VM with Remote Desktop

When setting up a cloud infrastructure, it's a good idea to create a VM where you can install all of the tools and applications you need to manage your environment. This is called a jumpbox. A jumpbox has been created for you. In order to use it, you need to open Remote Desktop Connection.

You will need three pieces of information:

  • Jump VM DNS Name
  • VM Admin User Name
  • VM Admin Password

Start Remote Desktop and enter the Jump DNS Name as the computer name. When you are asked to enter your credentials, click "More Choices". Click "Use a different account".

In the user name box, enter your VM Admin User Name. Enter your VM Admin Password as the password. Click "Connect". When the warning comes up, click "Yes." You are now connected to your jumpbox and can continue with each individual experience.

Data: Data-focused

  • Data, Experience 1 - Business critical performance and security with SQL Server 2019

    This experience will highlight the new features of SQL Server 2019 with a focus on performance and security. You will begin by performing an assessment of Contoso Auto's on-premises database to determine feasibility for migrating to SQL Server on a VM in Azure, and then complete the database migration. Next, you will gain hands-on experience by running queries using some of the new query performance enhancements and evaluating the results. You will evaluate the data security and compliance features provided by SQL Server 2019 by using the Data Discovery & Classification tool in SSMS to identify tables and columns with PII and GDPR-related compliance issues. You will then address some of the security issues by layering on dynamic data masking, row-level security, and Always Encrypted with secure enclaves.
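    For reference, a minimal sketch of the kind of dynamic data masking applied in this experience, run from Python via pyodbc (the table, column, and connection values are placeholders, not taken from the lab files):

    import pyodbc

    # Connection values are placeholders for the lab's SQL Server 2019 instance.
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        "SERVER=localhost;DATABASE=ContosoAutoDb;"
        "UID=<user>;PWD=<password>"
    )
    cursor = conn.cursor()

    # Add a dynamic data mask so non-privileged users see only masked emails.
    cursor.execute(
        "ALTER TABLE dbo.Customers "
        "ALTER COLUMN Email ADD MASKED WITH (FUNCTION = 'email()')"
    )
    conn.commit()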

  • Data, Experience 2 - Handling Big Data with SQL Server 2019 Big Data Clusters

    Highlight the new features of SQL Server 2019 with a focus on Big Data Clusters and data virtualization. Attendees will gain hands-on experience with querying both structured and unstructured data in a unified way using T-SQL. This capability will be illustrated by joining different data sets, such as product stock data in flat CSV files in Azure Storage, product reviews stored in Azure SQL Database, and transactional data in SQL Server 2019, for exploratory data analysis within Azure Data Studio. The joined data will be prepared into a table used for reporting, highlighting query performance against this table due to intelligent query processing. With Apache Spark packaged with Big Data Clusters, it is now possible to use Spark to train machine learning models over data lakes and use those models in SQL Server in one system. Attendees will learn how to use Jupyter notebooks in Azure Data Studio to train a simple model that can predict vehicle battery lifetime, score new data with it, and save the result as an external table (a sketch of the training step follows below). Finally, attendees will experience the data security and compliance features provided by SQL Server 2019 by using the Data Discovery & Classification tool in SSMS to identify tables and columns with PII and GDPR-related compliance issues, then address the issues by applying dynamic data masking to the identified columns.
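    A rough PySpark illustration of that battery-lifetime regression (the file path and column names are assumptions, not the lab's actual schema):

    from pyspark.ml.feature import VectorAssembler
    from pyspark.ml.regression import LinearRegression

    # `spark` is the session provided by the notebook kernel.
    df = spark.read.csv("/data/vehicle_telemetry.csv", header=True, inferSchema=True)

    # Assemble assumed feature columns into a single vector column.
    features = VectorAssembler(
        inputCols=["Trip_Length_Mean", "Battery_Rated_Cycles", "Battery_Age_Days"],
        outputCol="features",
    ).transform(df)

    model = LinearRegression(
        featuresCol="features", labelCol="Battery_Cycles_Remaining"
    ).fit(features)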

  • Data, Experience 3 - Unlocking new capabilities with friction-free migrations to Azure SQL Managed Instance

    Show how databases previously prevented from using PaaS services can be migrated to SQL MI and take advantage of features only available in Azure. Migrate an on-premises parts catalog database, currently running on SQL Server 2012 and using Service Broker, to SQL MI. Create an online secondary database for reporting on operations and finance using SQL MI, using transactional replication.

  • Data, Experience 4 - Leveraging Cosmos DB for near real-time analytics

    In this experience, attendees will use Azure Cosmos DB to ingest streaming vehicle telemetry data as the entry point to a near real-time analytics pipeline built on Cosmos DB, Azure Functions, Event Hubs, Azure Stream Analytics, and Power BI. To start, attendees will complete performance-tuning on Cosmos DB to prepare it for data ingest, and use the change feed capability of Cosmos DB to trigger Azure Functions for data processing. The function will enrich the telemetry data with location information, then send it to Event Hubs. Azure Stream Analytics extracts the enriched sensor data from Event Hubs, performs aggregations over windows of time, then sends the aggregated data to Power BI for data visualization and analysis. A vehicle telemetry data generator will be used to send vehicle telemetry data to Cosmos DB.
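    A hedged sketch of what such a change-feed function might look like in Python (the lab's actual implementation may differ; the binding names documents and outputEvents would be declared in the function's function.json, and the region value is a stand-in for the real location enrichment):

    import json
    import azure.functions as func

    def main(documents: func.DocumentList, outputEvents: func.Out[str]):
        # Each invocation receives a batch of changed Cosmos DB documents.
        enriched = []
        for doc in documents:
            item = dict(doc)
            item["region"] = "Region1"  # stand-in for the real location lookup
            enriched.append(item)
        # The Event Hubs output binding forwards the enriched telemetry batch.
        outputEvents.set(json.dumps(enriched))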

  • Data, Experience 5 - Simplifying data movement with Azure Data Factory

    In this experience, attendees will learn how to use Azure Data Factory to help Contoso Auto easily create pipelines that orchestrate data movement.

  • Data, Experience 6 - Delivering the Modern Data Warehouse with Azure SQL Data Warehouse, Azure Databricks, and Power BI

    A modern data warehouse lets you bring together all your data at any scale with ease, and get insights through analytical dashboards, operational reports, or advanced analytics for all your users. In this experience we will demonstrate how to take data gathered from various sources, including Cosmos DB, land it in Azure Data Lake Storage Gen2, transform it with Azure Databricks, and load it into Azure SQL DW to build a modern data warehouse (see the sketch below).
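    A hedged PySpark sketch of that load path (storage account, container, and table names are placeholders), reading raw telemetry from ADLS Gen2 and writing it to Azure SQL DW with the Databricks sqldw connector:

    # Read raw JSON telemetry from an ADLS Gen2 filesystem.
    df = (spark.read
          .format("json")
          .load("abfss://contosoauto@<account>.dfs.core.windows.net/raw/telemetry/"))

    # Write the result to Azure SQL DW, staging through ADLS Gen2.
    (df.write
       .format("com.databricks.spark.sqldw")
       .option("url", "jdbc:sqlserver://<server>.database.windows.net:1433;database=ContosoDW")
       .option("tempDir", "abfss://staging@<account>.dfs.core.windows.net/tmp")
       .option("forwardSparkAzureStorageCredentials", "true")
       .option("dbTable", "dbo.VehicleTelemetry")
       .save())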

  • Data, Experience 7 - Open Source Databases at Scale

    In this experience, attendees will use advanced features of the managed PostgreSQL PaaS service on Azure to make Contoso Auto's database more scalable and able to handle the rapid ingest of streaming data while simultaneously generating and serving pre-aggregated data for reports (the rollup pattern is sketched below).
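    One way to picture that pattern (all table and column names here are hypothetical): periodically upsert pre-aggregated rows into a rollup table that reports query instead of the raw events:

    import psycopg2

    # Connection string values are placeholders.
    conn = psycopg2.connect(
        "host=<server>.postgres.database.azure.com dbname=contosoauto "
        "user=<user> password=<password> sslmode=require"
    )
    # Upsert the last minute of raw events into a per-vehicle rollup table
    # (assumes a unique constraint on (minute, vehicle_id)).
    with conn, conn.cursor() as cur:
        cur.execute("""
            INSERT INTO event_rollups (minute, vehicle_id, event_count)
            SELECT date_trunc('minute', event_time), vehicle_id, count(*)
            FROM raw_events
            WHERE event_time >= now() - interval '1 minute'
            GROUP BY 1, 2
            ON CONFLICT (minute, vehicle_id)
            DO UPDATE SET event_count = EXCLUDED.event_count
        """)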

AI: AI & Machine Learning-focused

  • AI, Experience 1 - Quickly build comprehensive Bot solutions with the Virtual Assistant Solution Accelerator

    Show how the Virtual Assistant Solution Accelerator can accelerate the development of conversational bots. This exercise uses the automotive Virtual Assistant starter solution, which converts the user's speech to actions, such as controlling the vehicle's climate settings and radio. Attendees will register a new skill that monitors car sensor data and alerts the driver when there is a potential problem with the vehicle. Part of the process is to create an Adaptive Card that shows vehicle data, a recommendation for service (calling out to a function that returns a battery-replacement prediction), and an option to contact the nearest service center. To entice the driver to service the car right away, the bot has them select a gift card of their choice, which gives them a promo code for a coupon at that service center.
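    For illustration, a minimal Adaptive Card payload of the kind the skill could return (all field values invented), expressed as a Python dict:

    # Hypothetical card surfacing vehicle data and a service recommendation.
    vehicle_card = {
        "type": "AdaptiveCard",
        "version": "1.2",
        "body": [
            {"type": "TextBlock", "text": "Battery status: replacement recommended", "wrap": True},
            {"type": "FactSet", "facts": [
                {"title": "Predicted days to failure", "value": "12"},
            ]},
        ],
        "actions": [
            {"type": "Action.OpenUrl", "title": "Contact nearest service center",
             "url": "https://example.com/service-center"},
        ],
    }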

  • AI, Experience 2 - Yield quick insights from unstructured data with Knowledge Mining and Cognitive Services

    Highlight how building a cognitive search pipeline using Cognitive Services and Knowledge Mining can yield quick insights into unstructured data. Cognitive search is an AI feature in Azure Search, used to extract text from images, blobs, and other unstructured data sources - enriching the content to make it more searchable in an Azure Search index. Attendees will create a cognitive search pipeline in Azure Search, using Cosmos DB and an Azure Storage account as data sources, and apply pre-configured and custom cognitive skills to enrich the data in the indexing pipeline.
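    A minimal sketch of defining a skillset through the Azure Search REST API (the service name, key, and the specific skill chosen here are assumptions; the lab also wires up data sources, an index, and an indexer):

    import requests

    service, api_key = "<search-service>", "<admin-key>"
    skillset = {
        "name": "contoso-skillset",
        "description": "Enrichment skills applied during indexing",
        "skills": [{
            "@odata.type": "#Microsoft.Skills.Text.EntityRecognitionSkill",
            "inputs": [{"name": "text", "source": "/document/content"}],
            "outputs": [{"name": "organizations", "targetName": "organizations"}],
        }],
    }
    # Create or update the skillset on the search service.
    resp = requests.put(
        f"https://{service}.search.windows.net/skillsets/{skillset['name']}",
        params={"api-version": "2019-05-06"},
        headers={"api-key": api_key, "Content-Type": "application/json"},
        json=skillset,
    )
    resp.raise_for_status()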

  • AI, Experience 3 - Better models made easy with Automated Machine Learning

    Show how the automated ML capability in Azure Machine Learning can be used for lifecycle management of the manufactured vehicles and how AML helps create better vehicle maintenance plans. Attendees will train a linear regression model to predict the number of days until battery failure, using the Automated Machine Learning visual interface in the Azure Portal and optionally from within Jupyter notebooks (a sketch of the SDK path follows below). They will also use the model interpretability features of the Azure Machine Learning Python SDK to understand which features have the greatest impact on the model's predictions.
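    A sketch of that optional SDK path (the dataset name and label column are assumptions) using AutoMLConfig for the regression task:

    from azureml.core import Workspace, Experiment, Dataset
    from azureml.train.automl import AutoMLConfig

    ws = Workspace.from_config()
    train_dataset = Dataset.get_by_name(ws, "battery-training-data")  # assumed name

    automl_config = AutoMLConfig(
        task="regression",
        training_data=train_dataset,
        label_column_name="Survival_In_Days",  # assumed label column
        primary_metric="normalized_root_mean_squared_error",
        iterations=10,
    )
    run = Experiment(ws, "battery-lifetime").submit(automl_config, show_output=True)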

  • AI, Experience 4 - Creating repeatable processes with Azure Machine Learning pipelines

    Attendees will learn how Contoso Auto can benefit from creating re-usable machine learning pipelines with Azure Machine Learning.

  • AI, Experience 5 - Making deep learning portable with ONNX

    Attendees will experience how Contoso Auto can leverage deep learning technologies to scan through their vehicle specification documents to find compliance issues with new regulations. They will then deploy this model, standardizing operationalization with ONNX. They will see how this simplifies the inference runtime code, enabling pluggability of different models and targeting a broad range of runtime environments, from Linux-based web services to Windows/.NET-based apps.
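    The inference side stays small regardless of the training framework; a sketch with ONNX Runtime (the model path, input shape, and output handling are assumptions):

    import numpy as np
    import onnxruntime as ort

    # Load the exported model; the same code serves models from any framework.
    session = ort.InferenceSession("compliance_classifier.onnx")
    input_name = session.get_inputs()[0].name

    features = np.random.rand(1, 128).astype(np.float32)  # placeholder input
    prediction = session.run(None, {input_name: features})[0]
    print(prediction)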

  • AI, Experience 6 - MLOps with Azure Machine Learning and Azure DevOps

    Attendees will experience how Contoso Auto can use MLOps to formalize the process of training and deploying new models using a DevOps (CI/CD) approach.

tech-immersion-data-ai's People

Contributors

ciprianjichici, fasalzaman, ghumannavneet, givenscj, himanshuahlawat31, ikeellis, itsadityasethi, joelhulen, junnhssn, kylebunting, mdmaarschalk, praveenanil, roxanagoidaci, shirolkar, stevecoding, tejaswini972, zoinertejada


tech-immersion-data-ai's Issues

Error importing AutoMLExplainerSetupClass, automl_setup_model_explanations

The line below

from azureml.train.automl.runtime.automl_explain_utilities import AutoMLExplainerSetupClass, automl_setup_model_explanations

in this file throws the error below. As a result, the notebook and demo cannot run.

error                                     Traceback (most recent call last)
<ipython-input-5-78e6c8769ece> in <module>
      9 from azureml.train.automl.run import AutoMLRun
     10 
---> 11 from azureml.train.automl.runtime.automl_explain_utilities import AutoMLExplainerSetupClass, automl_setup_model_explanations
     12 from interpret_community.mimic.models import LGBMExplainableModel
     13 from azureml.interpret.mimic_wrapper import MimicWrapper

/anaconda/envs/azureml_py36/lib/python3.6/site-packages/azureml/train/automl/runtime/automl_explain_utilities.py in <module>
     18 
     19 from azureml.automl.core import dataprep_utilities, package_utilities
---> 20 from azureml.automl.runtime.featurization.streaming import StreamingFeaturizationTransformer
     21 from azureml.automl.runtime.training_utilities import LargeDatasetLimit
     22 from azureml.data import TabularDataset

/anaconda/envs/azureml_py36/lib/python3.6/site-packages/azureml/automl/runtime/featurization/__init__.py in <module>
      6 
      7 # Data transformer
----> 8 from .data_transformer import DataTransformer
      9 
     10 

/anaconda/envs/azureml_py36/lib/python3.6/site-packages/azureml/automl/runtime/featurization/data_transformer.py in <module>
     42 from ..stats_computation import PreprocessingStatistics as _PreprocessingStatistics
     43 
---> 44 from ..featurizer.transformer import AutoMLTransformer, CategoricalFeaturizers, DateTimeFeaturesTransformer,\
     45     GenericFeaturizers, get_ngram_len, TextFeaturizers
     46 from azureml.automl.core.featurization.featurizationconfig import FeaturizationConfig

/anaconda/envs/azureml_py36/lib/python3.6/site-packages/azureml/automl/runtime/featurizer/transformer/__init__.py in <module>
     21 
     22 # Text
---> 23 from .text import get_ngram_len, NaiveBayes, StringCastTransformer, max_ngram_len, \
     24     TextFeaturizers, WordEmbeddingTransformer, TFIDF_VECTORIZER_CONFIG, NimbusMLTextTargetEncoder, \
     25     BagOfWordsTransformer, StatsTransformer, StringConcatTransformer, BiLSTMAttentionTransformer

/anaconda/envs/azureml_py36/lib/python3.6/site-packages/azureml/automl/runtime/featurizer/transformer/text/__init__.py in <module>
      6 from .stringcast_transformer import StringCastTransformer
      7 from .utilities import get_ngram_len, max_ngram_len
----> 8 from .text_featurizers import TextFeaturizers
      9 from .wordembedding_transformer import WordEmbeddingTransformer
     10 from .pretrained_text_dnn_transformer import PretrainedTextDNNTransformer

/anaconda/envs/azureml_py36/lib/python3.6/site-packages/azureml/automl/runtime/featurizer/transformer/text/text_featurizers.py in <module>
     25 from ..generic.modelbased_target_encoder import ModelBasedTargetEncoder
     26 from ..featurization_utilities import if_package_exists
---> 27 from .bilstm_attention_transformer import BiLSTMAttentionTransformer
     28 
     29 from .constants import NIMBUS_ML_PARAMS

/anaconda/envs/azureml_py36/lib/python3.6/site-packages/azureml/automl/runtime/featurizer/transformer/text/bilstm_attention_transformer.py in <module>
     32 
     33 if pkg_dependencies_satisfied:
---> 34     en_tokenize = en_core_web_sm.load()
     35 
     36     class BaseModel(torch.nn.Module):

/anaconda/envs/azureml_py36/lib/python3.6/site-packages/en_core_web_sm/__init__.py in load(**overrides)
     10 
     11 def load(**overrides):
---> 12     return load_model_from_init_py(__file__, **overrides)

/anaconda/envs/azureml_py36/lib/python3.6/site-packages/spacy/util.py in load_model_from_init_py(init_file, **overrides)
    188     if not model_path.exists():
    189         raise IOError(Errors.E052.format(path=path2str(data_path)))
--> 190     return load_model_from_path(data_path, meta, **overrides)
    191 
    192 

/anaconda/envs/azureml_py36/lib/python3.6/site-packages/spacy/util.py in load_model_from_path(model_path, meta, **overrides)
    171             component = nlp.create_pipe(name, config=config)
    172             nlp.add_pipe(component, name=name)
--> 173     return nlp.from_disk(model_path)
    174 
    175 

/anaconda/envs/azureml_py36/lib/python3.6/site-packages/spacy/language.py in from_disk(self, path, exclude, disable)
    789             # Convert to list here in case exclude is (default) tuple
    790             exclude = list(exclude) + ["vocab"]
--> 791         util.from_disk(path, deserializers, exclude)
    792         self._path = path
    793         return self

/anaconda/envs/azureml_py36/lib/python3.6/site-packages/spacy/util.py in from_disk(path, readers, exclude)
    628         # Split to support file names like meta.json
    629         if key.split(".")[0] not in exclude:
--> 630             reader(path / key)
    631     return path
    632 

/anaconda/envs/azureml_py36/lib/python3.6/site-packages/spacy/language.py in <lambda>(p)
    779         deserializers["meta.json"] = lambda p: self.meta.update(srsly.read_json(p))
    780         deserializers["vocab"] = lambda p: self.vocab.from_disk(p) and _fix_pretrained_vectors_name(self)
--> 781         deserializers["tokenizer"] = lambda p: self.tokenizer.from_disk(p, exclude=["vocab"])
    782         for name, proc in self.pipeline:
    783             if name in exclude:

tokenizer.pyx in spacy.tokenizer.Tokenizer.from_disk()

tokenizer.pyx in spacy.tokenizer.Tokenizer.from_bytes()

/anaconda/envs/azureml_py36/lib/python3.6/re.py in compile(pattern, flags)
    231 def compile(pattern, flags=0):
    232     "Compile a regular expression pattern, returning a pattern object."
--> 233     return _compile(pattern, flags)
    234 
    235 def purge():

/anaconda/envs/azureml_py36/lib/python3.6/re.py in _compile(pattern, flags)
    299     if not sre_compile.isstring(pattern):
    300         raise TypeError("first argument must be string or compiled pattern")
--> 301     p = sre_compile.compile(pattern, flags)
    302     if not (flags & DEBUG):
    303         if len(_cache) >= _MAXCACHE:

/anaconda/envs/azureml_py36/lib/python3.6/sre_compile.py in compile(p, flags)
    560     if isstring(p):
    561         pattern = p
--> 562         p = sre_parse.parse(p, flags)
    563     else:
    564         pattern = None

/anaconda/envs/azureml_py36/lib/python3.6/sre_parse.py in parse(str, flags, pattern)
    853 
    854     try:
--> 855         p = _parse_sub(source, pattern, flags & SRE_FLAG_VERBOSE, 0)
    856     except Verbose:
    857         # the VERBOSE flag was switched on inside the pattern.  to be

/anaconda/envs/azureml_py36/lib/python3.6/sre_parse.py in _parse_sub(source, state, verbose, nested)
    414     while True:
    415         itemsappend(_parse(source, state, verbose, nested + 1,
--> 416                            not nested and not items))
    417         if not sourcematch("|"):
    418             break

/anaconda/envs/azureml_py36/lib/python3.6/sre_parse.py in _parse(source, state, verbose, nested, first)
    525                     break
    526                 elif this[0] == "\\":
--> 527                     code1 = _class_escape(source, this)
    528                 else:
    529                     code1 = LITERAL, _ord(this)

/anaconda/envs/azureml_py36/lib/python3.6/sre_parse.py in _class_escape(source, escape)
    334         if len(escape) == 2:
    335             if c in ASCIILETTERS:
--> 336                 raise source.error('bad escape %s' % escape, len(escape))
    337             return LITERAL, ord(escape[1])
    338     except ValueError:

error: bad escape \p at position 257

Directory already mounted

Hi, to avoid the "Directory already mounted: /mnt/contosoauto" error, can we make the following change in the notebook: in Cmd 4, add an "# Unmount" step before "# Mount the ADLS Gen2 filesystem" and Run All?

Unmount

dbutils.fs.unmount("/mnt/" + fileSystemName)
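An alternative sketch (the variable fileSystemName comes from the notebook; everything else is an assumption): unmount only when the directory is already mounted, so re-running the notebook never hits the error:

mount_point = "/mnt/" + fileSystemName
# dbutils.fs.mounts() lists current mounts; unmount only if ours is present.
if any(m.mountPoint == mount_point for m in dbutils.fs.mounts()):
    dbutils.fs.unmount(mount_point)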

Task 3 Step 7: name parameter is brittle, error unhelpful

In step 7 of task 3, the deploy_bot.ps1 script takes parameters. The name parameter is not a free-text string and you have to get this exactly right.

While the instructions say the name should be of the form tech-immersion-<YOUR_UNIQUE_IDENTIFIER>, the example in the image is of the form tech-immersion-bot-<IDENTIFIER>, or at least it appears to be.

If you name it something like tech-immersion-bot-9000, the script fails at the az group create -g … phase with an authorization error, which doesn't make it clear that the problem is the value of the name parameter.

Instructions/screenshots need to be updated

The screenshots need to be updated to match the latest Azure UI in the following places:

  • Task 3, Step 6
  • Task 3, Step 10
  • Task 3, Step 13
  • Task 5, Step 4
  • Task 6, Step 4
  • Task 7, Step 1
  • Task 8, Step 4

Screenshots not showing up inside the notebooks in data-exp6

Hello Team,

  1. Unable to view the "Attach notebook to your cluster" screenshots inside any of the notebooks.

  2. The "Display the above visualizations on a Databricks dashboard" screenshots are missing in the notebook "5-Databricks-Dashboards".

Please update these. Thanks

AI, Experience 1 - Quickly build comprehensive Bot solutions with the Virtual Assistant solution accelerator

I followed the described steps exactly, but running the deployment script

pwsh -ExecutionPolicy Bypass -File Deployment\Scripts\deploy.ps1

produced this error:

Found an existing application instance of "xxxx". We will patch it
Host name in property identifierUris is not on any verified domain of the company or its subdomain.
! Could not provision Microsoft App Registration automatically. Review the log for more information.
! Log: C:\Users\masplitt\OneDrive - Microsoft\Fy21Data-CodeAndOther\sourcefy21\tech-immersion-data-ai\lab-files\ai\1\automotiveskill\Deployment\Scripts\..\deploy_log.txt

The log file is empty. Did I miss a part of the setup guide? And I don't have access to any preconfigured lab environment yet.
Thank you for your help!

Pipeline failing

In Exercise 2, Task 2, Step 2, the pipeline fails; the error prompt we get says the CLI version is outdated.

ai-exp6 - knack.util.CLIError: Could not retrieve credential from local cache for service principal ***. Run `az login` for this service principal.

Task 2: Run the Build Pipeline fails in 'Get or create AML Compute Target' step with:

knack.util.CLIError: Could not retrieve credential from local cache for service principal ***. Run `az login` for this service principal.

Error from AML driver log:
n create_aml_cluster.py
Azure ML SDK version: 1.37.0
Argument 1: gpucluster
Argument 2: /home/vsts/work/1/s
creating AzureCliAuthentication...
done creating AzureCliAuthentication!
get workspace...

[2022-01-06T05:53:56.425879] The experiment failed. Finalizing run...
[2022-01-06T05:53:56.425904] Start FinalizingInRunHistory
[2022-01-06T05:53:56.435590] Logging experiment finalizing status in history service.
Starting the daemon thread to refresh tokens in background for process with pid = 3694
Cleaning up all outstanding Run operations, waiting 300.0 seconds
1 items cleaning up...
Cleanup took 0.1008906364440918 seconds
Traceback (most recent call last):
  File "/home/vsts/.azureml/envs/azureml_b54a7a69e83d04a100a32a3a44b87b62/lib/python3.6/site-packages/azureml/core/authentication.py", line 2283, in _get_arm_token_with_refresh
    access_token = profile_object.get_raw_token(resource=resource)[0][1]
  File "/home/vsts/.azureml/envs/azureml_b54a7a69e83d04a100a32a3a44b87b62/lib/python3.6/site-packages/azure/cli/core/_profile.py", line 383, in get_raw_token
    credential = self._create_credential(account, tenant)
  File "/home/vsts/.azureml/envs/azureml_b54a7a69e83d04a100a32a3a44b87b62/lib/python3.6/site-packages/azure/cli/core/_profile.py", line 597, in _create_credential
    return identity.get_service_principal_credential(username_or_sp_id)
  File "/home/vsts/.azureml/envs/azureml_b54a7a69e83d04a100a32a3a44b87b62/lib/python3.6/site-packages/azure/cli/core/auth/identity.py", line 236, in get_service_principal_credential
    entry = self._service_principal_store.load_entry(client_id, self.tenant_id)
  File "/home/vsts/.azureml/envs/azureml_b54a7a69e83d04a100a32a3a44b87b62/lib/python3.6/site-packages/azure/cli/core/auth/identity.py", line 320, in load_entry
    .format(sp_id))
knack.util.CLIError: Could not retrieve credential from local cache for service principal XXXXXXXXXXXXX. Run az login for this service principal.

Looks like the error is caused by a breaking change in the Azure CLI's token handling: Azure/cli#56

Pipeline failure in AI, Experience 3

Exercise 1, Task 2, Step 1:
The pipeline failed with the error below.

Error: AzureMLCompute job failed.
AcrRefreshTokenRetrievalError: Unable to retrieve ACR refresh token using Compute Identity a91a298d-b0ce-49e8-beeb-aa5d398b4d95, please check if compute identity has access to ACR.
Reason: Retrieving ACR refresh token using identity failed
err: Unknown server error.
serviceURL: viennaglobal.azurecr.io
imageName: viennaglobal.azurecr.ioviennaglobal.azurecr.io/azureml/azureml_4f3cee89203e005745d1830c04fe722a
err: Run docker command to pull private image failed with error: error response from daemon: get https://viennaglobal.azurecr.ioviennaglobal.azurecr.io/v2/: dial tcp: lookup viennaglobal.azurecr.ioviennaglobal.azurecr.io on 168.63.129.16:53: no such host
.
Reason: error response from daemon: get https://viennaglobal.azurecr.ioviennaglobal.azurecr.io/v2/: dial tcp: lookup viennaglobal.azurecr.ioviennaglobal.azurecr.io on 168.63.129.16:53: no such host

Info: Failed to prepare an environment for the job execution: Job environment preparation failed on 10.0.0.4 with err exit status 1.

Please find the attached screenshot for reference.

Can you please check this?

Screenshot missing

  • A screenshot is missing in Exercise 1, Task 1, Step 13.
  • There are two options for creating a data flow, but the guide does not mention which one to select; please update it accordingly.

More commentary and instruction at the start

https://github.com/solliancenet/tech-immersion-data-ai/blob/master/ai-exp3/README.md#exercise-1-creating-a-model-using-automated-machine-learning

Starts with "Navigate to your Azure Machine Learning workspace in the Azure Portal." Many attendees open their local browser at that point or are blocked due to a lack of Azure experience. More commentary and instruction would be great before that step, including:

  • How to start the VM
  • How to overcome gateway warnings
  • How to open the Azure portal
  • How to authenticate to the Azure portal
  • Where to locate the credentials
  • How to navigate to the Azure Machine Learning workspace

Screenshots and instructions update :Data experience 4

  • Task 3, Step 6: an instruction should be added on selecting a value for the Event Hub consumer group (either create new or use an existing one).
  • Task 3, Steps 10, 12, 14, and 16: screenshot updates.
  • Task 3, Step 22: screenshot update.
  • Task 4, Step 8: screenshot update.
  • Task 5, Step 4: instruction and screenshot update.
  • Task 6, Step 4: instruction and screenshot update.
  • Task 6, Step 5: an instruction can be added to select Code + Test from the left-hand menu of CarEventProcessorRegion1 to view function.json.
  • Task 7, Step 1: screenshot update.

Form indexer is showing as failed in Azure portal

In Task 9 (Create Forms Recognizer Pipeline), Step 4 works fine, i.e. a message that the form recognizer model was successfully trained appears on the console, but the indexer shows as failed in the portal (Step 5).
Please look into this issue as soon as possible.
A screenshot is attached.
Thank you.

Deployment error (task 2, step 6-8)

Hi

We're following the instructions for the deployment and get the following

C:\lab-files\ai\1\automotiveskill\automotiveskill>pwsh -ExecutionPolicy Bypass -File Deployment\Scripts\deploy.ps1
? Bot Name (used as default name for resource group and deployed resources): ti-116901
? Azure resource group region: westus
? Password for MSA app registration (must be at least 16 characters long, contain at least 1 special character, and contain at least 1 numeric character): Abc!1234567890
? LUIS Authoring Region (westus, westeurope, or australiaeast): westus
? LUIS Authoring Key (found at https://luis.ai/user/settings): 926*9f1
Found an existing application instance of "962513b4 94453872". We will patch it

Creating resource group ...
Validating Azure deployment ...
Deploying Azure services (this could take a while)...
Deployment failed. Correlation ID: ac8a45**************90ee9. {
"Code": "Unauthorized",
"Message": "The scale operation is not allowed for this subscription in this region. Try selecting different region or scale option.",
"Target": null,
"Details": [
{
"Message": "The scale operation is not allowed for this subscription in this region. Try selecting different region or scale option."
},
{
"Code": "Unauthorized"
},
{
"ErrorEntity": {
"ExtendedCode": "52020",
"MessageTemplate": "The scale operation is not allowed for this subscription in this region. Try selecting different region or scale option.",
"Parameters": [
"default"
],
"Code": "Unauthorized",
"Message": "The scale operation is not allowed for this subscription in this region. Try selecting different region or scale option."
}
}
],
"Innererror": null
}
! Deployment failed. Please refer to the log file for more information.
! Log: C:\lab-files\ai\1\automotiveskill\automotiveskill\Deployment\Scripts\..\deploy_log.txt

  • To delete this resource group, run 'az group delete -g ti-116901 --no-wait'

C:\lab-files\ai\1\automotiveskill\automotiveskill>

Issue with importing a repository in Azure DevOps in AI Exp 6

Currently, there is a bug in Azure DevOps while importing a repository to a project.

Here is the workaround I used for MLOps with Azure Machine Learning and Azure DevOps:

Solution:
Before starting Task 2,

  1. Go to Account settings and select "Preview Features".

  2. Turn off the "New Repos landing pages" preview feature.

Screenshot Mismatch Issue in AI Exp 6

In the screenshot provided in Exp 1 -> Task 3 -> Step 2, the default names of the workspace and resource group differ from the ones provided in the text. This creates confusion for anyone who goes by the screenshot alone.
