
microsoftlearning / mslearn-dp100

616 stars · 29 watchers · 665 forks · 777 KB

Lab files for Azure Machine Learning exercises

Home Page: https://microsoftlearning.github.io/mslearn-dp100/

License: MIT License

Jupyter Notebook 99.82% Shell 0.18%

mslearn-dp100's Introduction

Note: This repository is archived as there now is a new version of these labs. The new labs that cover Azure Machine Learning v2 (instead of v1) can be found at: https://github.com/MicrosoftLearning/mslearn-azure-ml.

Azure Machine Learning Lab Exercises

This repository contains the hands-on lab exercises for Microsoft course DP-100 Designing and Implementing a Data Science Solution on Azure and the equivalent self-paced modules on Microsoft Learn. The labs are designed to accompany the learning materials and enable you to practice using the technologies described in them.

When using these labs in an instructor-led course, students should use the Azure subscription provided for the course.

What are we doing?

  • To support this course, we will need to make frequent updates to the course content to keep it current with the Azure services used in the course. We are publishing the lab instructions and lab files on GitHub to keep the content current with changes in the Azure platform.

  • We hope that this brings a sense of collaboration to the labs like we've never had before - when Azure changes and you find it first during a live delivery, go ahead and submit a pull-request to update the lab content. Help your fellow MCTs.

How should I use these files relative to the released MOC files?

  • The instructor guide and PowerPoints are still going to be your primary source for teaching the course content.

  • These files on GitHub are designed to be used in the course labs.

  • We recommend that, for every delivery, trainers check GitHub for any changes that may have been made to support the latest Azure services.

What about changes to the student handbook?

  • We will review the student handbook on a quarterly basis and update through the normal MOC release channels as needed.

How do I contribute?

  • Any MCT can submit a pull request to the code or content in the GitHub repo. Microsoft and the course author will triage and incorporate content and lab code changes as needed.

  • If you have suggestions or spot any errors, please report them as issues.

Notes

Classroom Materials

The labs are provided in this GitHub repo rather than in the student materials in order to (a.) share them with other learning modalities, and (b.) ensure that the latest version of the lab files is always used in classroom deliveries. This approach reflects the nature of an always-changing cloud-based interface and platform.

Anyone can access the files in this repo, but Microsoft Learning support is limited to MCTs teaching this course only.

mslearn-dp100's People

Contributors

10e42 avatar ammarasmro avatar anthonyoakley avatar azadehkhojandi avatar chaosex avatar cjpluta avatar clifford-smith avatar garjen55 avatar giraygokirmak avatar graememalcolm avatar grantcarthew avatar hales1991 avatar jungealexander avatar madiepev avatar mihai-ac avatar mikkeyboi avatar mtowse avatar resseguie avatar zainhaseeb avatar


mslearn-dp100's Issues

issue with 13 - Explore Differential Privacy.ipynb

When running the command !pip install opendp-smartnoise==0.1.3.1, it shows the following error

ERROR: azure-cli 2.23.0 has requirement antlr4-python3-runtime~=4.7.2, but you'll have antlr4-python3-runtime 4.8 which is incompatible.
ERROR: azure-cli 2.23.0 has requirement azure-graphrbac~=0.60.0, but you'll have azure-graphrbac 0.61.1 which is incompatible.
ERROR: azure-cli 2.23.0 has requirement azure-mgmt-containerregistry==3.0.0rc17, but you'll have azure-mgmt-containerregistry 2.8.0 which is incompatible.
ERROR: azure-cli 2.23.0 has requirement azure-mgmt-keyvault==9.0.0, but you'll have azure-mgmt-keyvault 2.2.0 which is incompatible.
ERROR: azure-cli 2.23.0 has requirement azure-mgmt-network~=18.0.0, but you'll have azure-mgmt-network 19.0.0 which is incompatible.

ERROR: azure-cli 2.23.0 has requirement azure-mgmt-storage~=17.1.0, but you'll have azure-mgmt-storage 11.2.0 which is incompatible.
ERROR: azure-cli 2.23.0 has requirement pytz==2019.1, but you'll have pytz 2021.1 which is incompatible.
ERROR: azure-cli 2.23.0 has requirement websocket-client~=0.56.0, but you'll have websocket-client 0.59.0 which is incompatible.
as shown in the attached screenshot (UnableToInstallSmartNoise).

Can anyone please suggest a fix?

Creating conda environment failed with exit code: -15

05 - Train Models.ipynb produces an error at the cell following "Run the training script as an experiment".

Kernel: Python 3.6 - AzureML
Compute instance: STANDARD_DS1_V2

When I run:

from azureml.core import Experiment, ScriptRunConfig, Environment
from azureml.core.conda_dependencies import CondaDependencies
from azureml.widgets import RunDetails

# Create a Python environment for the experiment
sklearn_env = Environment("sklearn-env")

# Ensure the required packages are installed (we need pip, scikit-learn and Azure ML defaults)
packages = CondaDependencies.create(conda_packages=['pip', 'scikit-learn'],
                                    pip_packages=['azureml-defaults'])
sklearn_env.python.conda_dependencies = packages

# Create a script config
script_config = ScriptRunConfig(source_directory=training_folder,
                                script='diabetes_training.py',
                                environment=sklearn_env) 

# submit the experiment run
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)

# Show the running experiment run in the notebook widget
RunDetails(run).show()

# Block until the experiment run has completed
run.wait_for_completion()

I get:

---------------------------------------------------------------------------
ActivityFailedException                   Traceback (most recent call last)
<ipython-input-6-080c54dc6996> in <module>
     25 
     26 # Block until the experiment run has completed
---> 27 run.wait_for_completion()

/anaconda/envs/azureml_py36/lib/python3.6/site-packages/azureml/core/run.py in wait_for_completion(self, show_output, wait_post_processing, raise_on_error)
    822 
    823                 if raise_on_error:
--> 824                     raise ActivityFailedException(error_details=json.dumps(error, indent=4))
    825 
    826             return final_details

ActivityFailedException: ActivityFailedException:
	Message: Activity Failed:
{
    "error": {
        "code": "UserError",
        "message": "Creating conda environment failed with exit code: -15",
        "messageParameters": {},
        "details": []
    },
    "time": "2021-06-19T12:21:20.816582Z"
}
	InnerException None
	ErrorResponse 
{
    "error": {
        "message": "Activity Failed:\n{\n    \"error\": {\n        \"code\": \"UserError\",\n        \"message\": \"Creating conda environment failed with exit code: -15\",\n        \"messageParameters\": {},\n        \"details\": []\n    },\n    \"time\": \"2021-06-19T12:21:20.816582Z\"\n}"
    }
}

Missing "pip install --upgrade azureml-sdk[notebooks,automl,explain] --verbose"->Model-Deploymenterror.

Hello.
While trying to deploy the best model as the endpoint "auto-predict-diabetes" from "https://microsoftlearning.github.io/mslearn-dp100/instructions/02-automated-ml.html", I received an error:


Failed to load entrypoint automl = azureml.train.automl.run:AutoMLRun._from_run_dto with exception (cryptography


issuing "pip install --upgrade azureml-sdk[notebooks,automl,explain] --verbose" dis remove it.
_Tschüß,
__Michael.

Fairlearn dashboard does not name disparity charts

The Train a model section contains the following explanation

View the dashboard visualization, which shows:

  • Disparity in performance - how the selected performance metric compares for the subpopulations, including underprediction (false negatives) and overprediction (false positives).
  • Disparity in predictions - A comparison of the number of positive cases per subpopulation.

The "Age | Recall | Demographic parity difference" dashboard looks with fairlearn 0.7.0 like the one below

Screenshot from 2022-04-03 16-58-15

The fairlearn documentation up to and including version 0.6.2 contained a section called Fairlearn dashboard, which used the concept of disparities; however, fairlearn 0.7.0 no longer has that section.

fairlearn widget

When running through

https://github.com/MicrosoftLearning/mslearn-dp100/blob/main/15%20-%20Detect%20Unfairness.ipynb

I ran

from fairlearn.widget import FairlearnDashboard

# View this model in Fairlearn's fairness dashboard, and see the disparities which appear:
FairlearnDashboard(sensitive_features=S_test, 
                   sensitive_feature_names=['Age'],
                   y_true=y_test,
                   y_pred={"diabetes_model": diabetes_model.predict(X_test)})

The info above suggests that a widget will appear:

"Run the cell below (note that a warning about future changes may be displayed - ignore this for now).
When the widget is displayed, use the Get started link to start configuring your visualization."

However, when I run it I just see


/anaconda/envs/azureml_py36/lib/python3.6/site-packages/fairlearn/widget/_fairlearn_dashboard.py:47: UserWarning: The FairlearnDashboard will move from Fairlearn to the raiwidgets package after the v0.5.0 release. Instead, Fairlearn will provide some of the existing functionality through matplotlib-based visualizations.
  warn("The FairlearnDashboard will move from Fairlearn to the "
<fairlearn.widget._fairlearn_dashboard.FairlearnDashboard at 0x7f44d83d8e48>

(screenshot attached)

I'm unsure how to see the widget.
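For reference, the warning points at the raiwidgets package as the new home of the dashboard. A minimal sketch of the equivalent call is below, assuming raiwidgets is installed (pip install raiwidgets) and reusing the same variables as the lab cell above; the parameter names follow the raiwidgets documentation rather than the lab itself.

from raiwidgets import FairnessDashboard

# Same comparison as above, but rendered by the raiwidgets fairness dashboard
FairnessDashboard(sensitive_features=S_test,
                  y_true=y_test,
                  y_pred={"diabetes_model": diabetes_model.predict(X_test)})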

09: Real time inferencing service -> WebServiceException during service deployment

Here is a snippet from the error message:
2021-08-06 11:20:02+00:00 Checking the status of deployment diabetes-service.
Failed
Service deployment polling reached non-successful terminal state, current service state: Failed
Operation ID: 48957716-297f-493a-b83a-f006a20c7471
More information can be found using '.get_logs()'
Error:
{
"code": "AciDeploymentFailed",
"statusCode": 400,
"message": "Aci Deployment failed with exception: Error in entry script, ModuleNotFoundError: No module named 'azureml.api'

Package conflict warning message resulting in error

I've been doing several of the workbooks, and often, when I submit an experiment, I have been getting the message:

Expected a StepRun object but received <class 'azureml.core.run.Run'> instead.
This usually indicates a package conflict with one of the dependencies of azureml-core or azureml-pipeline-core.
Please check for package conflicts in your python environment

And today, when I went to complete workbook "10 - Create a Batch Inferencing Service.ipynb", I was running the cell right after the text "When the pipeline has finished running, the resulting predictions will" (there is only one instance of this text in the workbook) when I got this error:

AttributeError                            Traceback (most recent call last)
in <module>
      7 # Get the run for the first step and download its output
      8 prediction_run = next(pipeline_run.get_children())
----> 9 prediction_output = prediction_run.get_output_data('inferences')
     10 prediction_output.download(local_path='diabetes-results')
     11

Looking at the docs, it would seem that line 9 is expecting a StepRun object, not a Run object.

diabetes_env.docker.enabled = True is deprecated: Solution

Notebook 08-Create a Pipeline.ipynb

We need to change the code under: "The compute will require a Python environment with the necessary package dependencies installed, so you'll need to create a run configuration."

The line "diabetes_env.docker.enabled = True # Use a docker container" is deprecated.

We need to add the following import:

from azureml.core.runconfig import RunConfiguration, DockerConfiguration

Next, in the "Register the environment" step, we use the new DockerConfiguration class:

# Register the environment
diabetes_env.register(workspace=ws)
registered_env = Environment.get(ws, 'diabetes-pipeline-env')

docker_configuration = DockerConfiguration(use_docker=True) # New DockerConfiguration class with Docker enabled

# Create a new runconfig object for the pipeline
pipeline_run_config = RunConfiguration()

pipeline_run_config.docker = docker_configuration # New runconfig attribute

Thank you to Leidy M. Mercedes de la Rosa for finding the solution.

(Notebook 10) StepRun object not recognised

When I run the cell

import pandas as pd
import shutil

# Remove the local results folder if left over from a previous run
shutil.rmtree('diabetes-results', ignore_errors=True)

# Get the run for the first step and download its output
prediction_run = next(pipeline_run.get_children())
prediction_output = prediction_run.get_output_data('inferences')
prediction_output.download(local_path='diabetes-results')
...

I ran into the following issue:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
C:\Users\MARTIN~1\AppData\Local\Temp/ipykernel_2100/635567818.py in <module>
      7 # Get the run for the first step and download its output
      8 prediction_run = next(pipeline_run.get_children())
----> 9 prediction_output = prediction_run.get_output_data('inferences')
     10 prediction_output.download(local_path='diabetes-results')
     11 

AttributeError: 'Run' object has no attribute 'get_output_data'

I understand that StepRun inherits the Run class, but shouldn't the get_output_data() method be present anyway?

Any suggestion on how to overcome this issue?
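One possible workaround, sketched here under the assumption that the pipeline has already finished: fetch the step via PipelineRun.get_steps(), which yields StepRun objects for completed steps, instead of get_children().

# Make sure the pipeline has finished, then take the first completed step as a StepRun
pipeline_run.wait_for_completion(show_output=False)
prediction_run = next(pipeline_run.get_steps())

# get_output_data() is defined on StepRun, so this should now be available
prediction_output = prediction_run.get_output_data('inferences')
prediction_output.download(local_path='diabetes-results')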

conda error

The following code fails at the last line


from azureml.core import Experiment, ScriptRunConfig, Environment
from azureml.widgets import RunDetails

# Create a Python environment for the experiment (from a .yml file)
env = Environment.from_conda_specification("experiment_env", "environment.yml")

# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
                                script='diabetes_experiment.py',
                                environment=env)

# submit the experiment
experiment = Experiment(workspace=ws, name='mslearn-diabetes')
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()

With the following error
[2022-03-31T14:28:17.918370] Using urllib.request Python 2
Streaming log file azureml-logs/60_control_log.txt Running: ['/bin/bash', '/tmp/azureml_runs/mslearn-diabetes_1648736896_939dbeab/azureml-environment-setup/conda_env_checker.sh']

Starting the daemon thread to refresh tokens in background for process with pid = 20097
Materialized conda environment not found on target: /home/azureuser/.azureml/envs/azureml_809a074975457de1dd27bdfcf2d79d61
[2022-03-31T14:28:18.029888] Logging experiment preparation status in history service.
Running: ['/bin/bash', '/tmp/azureml_runs/mslearn-diabetes_1648736896_939dbeab/azureml-environment-setup/conda_env_builder.sh']
Running: [u'conda', '--version']
[Errno 2] No such file or directory
()
Unable to run conda package manager. AzureML uses conda to provision python
environments from a dependency specification. To manage the python environment
manually instead, set userManagedDependencies to True in the python environment
configuration. To use system managed python environments, install conda from:
https://conda.io/miniconda.html
()
[2022-03-31T14:28:18.635517] Logging error in history service: Failed to run ['/bin/bash', '/tmp/azureml_runs/mslearn-diabetes_1648736896_939dbeab/azureml-environment-setup/conda_env_builder.sh']
Exit code 1
Details can be found in azureml-logs/60_control_log.txt log file.

Uploading control log...

Error occurred: Unable to run conda package manager. AzureML uses conda to provision python
environments from a dependency specification. To manage the python environment
manually instead, set userManagedDependencies to True in the python environment
configuration. To use system managed python environments, install conda from:
https://conda.io/miniconda.html

I'm using a personal Azure account, so I don't think it is a permissions issue.
The compute instance is standard_D2_V3.
I'm using a Jupyter notebook.
Previous cells run fine.

The exercise is from this repo:
git clone https://github.com/MicrosoftLearning/mslearn-dp100 mslearn-dp100
Run Experiments section (I skipped the others since I had already done those steps in a previous learning module).
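As the log message itself suggests, one possible workaround (a sketch, not necessarily the intended fix for the lab) is to mark the environment as user-managed so Azure ML skips conda provisioning and uses the Python environment already present on the compute instance:

# Reuse the environment object created above and disable conda provisioning
env.python.user_managed_dependencies = True

# Then rebuild the ScriptRunConfig with this environment and resubmit the experiment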

Global Importance not in default view

In "Use Automated Machine Learning",
under "Review best model", point 5: Select the Explanations tab, and view the Global Importance chart. This shows the extent to which each feature in the dataset influences the label prediction.

Global Importance isn't in the default view. Students have to either toggle on "View previous dashboard experience", or use the new view, in which case "Global Importance" in the instructions should be replaced by "Aggregate feature importance".

ModuleNotFoundError: No module named 'azureml.api'

I am trying to deploy models with ACI.

env.python.conda_dependencies.add_pip_package("azureml-defaults")
env.python.conda_dependencies.add_pip_package("azureml-model-management-sdk")
env.python.conda_dependencies.add_pip_package("azure-ml-api-sdk")

I have installed the dependencies ...

Endpoint deployment instructions after the update

https://github.com/MicrosoftLearning/mslearn-dp100/blob/main/instructions/02-automated-ml.md#deploy-a-predictive-service

Deploy a predictive service-> step 2:
Name: auto-predict-diabetes - the endpoint name should be unique within the region; setting up the same name results in a BadName error, or simply shows no progress and confuses the user. It also sometimes sends users back to step 1 or gives a quota error (if they have tried multiple times).

Compute type: ACI - there is no such type in the GUI after the update; please change it to Managed.
Enable authentication: Selected - please change this to Key.

The further instructions also do not reflect the updated UI; please check them.

If you have any questions, please reach out.
Elena Moor

sn.Analysis throws True has type <class 'numpy.bool_'>, but expected one of: (<class 'bool'>, <class 'numbers.Integral'>)

Hello there,

Thanks for these tutorials. Running the below code in Explore Differential Privacy.ipynb

import matplotlib.pyplot as plt
with sn.Analysis() as analysis:
    data = sn.Dataset(path = data_path, column_names = cols)

    age_histogram = sn.dp_histogram(
            sn.to_int(data['Age'], lower=0, upper=120),
            edges = ages,
            upper = 10000,
            null_value = -1,
            privacy_usage = {'epsilon': 0.5}
        )
    
analysis.release()

throws the following error:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-8-8460cf7d618e> in <module>
     12         )
     13 
---> 14 analysis.release()

~\anaconda3\envs\AzureEnv\lib\site-packages\opendp\smartnoise\core\base.py in release(self)
    799         response_proto: api_pb2.ResponseRelease.Success = core_library.compute_release(
    800             serialize_analysis(self),
--> 801             serialize_release(self.release_values),
    802             self.stack_traces,
    803             serialize_filter_level(self.filter_level))

~\anaconda3\envs\AzureEnv\lib\site-packages\opendp\smartnoise\core\value.py in serialize_release(release_values)
    103 def serialize_release(release_values):
    104     return base_pb2.Release(
--> 105         values={
    106             component_id: serialize_release_node(release_node)
    107             for component_id, release_node in release_values.items()

~\anaconda3\envs\AzureEnv\lib\site-packages\opendp\smartnoise\core\value.py in <dictcomp>(.0)
    104     return base_pb2.Release(
    105         values={
--> 106             component_id: serialize_release_node(release_node)
    107             for component_id, release_node in release_values.items()
    108             if release_node['value'] is not None

~\anaconda3\envs\AzureEnv\lib\site-packages\opendp\smartnoise\core\value.py in serialize_release_node(release_node)
    112 def serialize_release_node(release_node):
    113     return base_pb2.ReleaseNode(
--> 114         value=serialize_value(
    115             release_node['value'],
    116             release_node.get("value_format")),

~\anaconda3\envs\AzureEnv\lib\site-packages\opendp\smartnoise\core\value.py in serialize_value(value, value_format)
    210         array=value_pb2.Array(
    211             shape=list(array.shape),
--> 212             flattened=serialize_array1d(array.flatten())
    213         ))
    214 

~\anaconda3\envs\AzureEnv\lib\site-packages\opendp\smartnoise\core\value.py in serialize_array1d(array)
    152 
    153     return value_pb2.Array1d(**{
--> 154         data_type: container_type(data=list(array))
    155     })
    156 

~\anaconda3\envs\AzureEnv\lib\site-packages\google\protobuf\internal\python_message.py in init(self, **kwargs)
    551             field_value = [_GetIntegerEnumValue(field.enum_type, val)
    552                            for val in field_value]
--> 553           copy.extend(field_value)
    554         self._fields[field] = copy
    555       elif field.cpp_type == _FieldDescriptor.CPPTYPE_MESSAGE:

~\anaconda3\envs\AzureEnv\lib\site-packages\google\protobuf\internal\containers.py in extend(self, elem_seq)
    283       raise
    284 
--> 285     new_values = [self._type_checker.CheckValue(elem) for elem in elem_seq_iter]
    286     if new_values:
    287       self._values.extend(new_values)

~\anaconda3\envs\AzureEnv\lib\site-packages\google\protobuf\internal\containers.py in <listcomp>(.0)
    283       raise
    284 
--> 285     new_values = [self._type_checker.CheckValue(elem) for elem in elem_seq_iter]
    286     if new_values:
    287       self._values.extend(new_values)

~\anaconda3\envs\AzureEnv\lib\site-packages\google\protobuf\internal\type_checkers.py in CheckValue(self, proposed_value)
    135       message = ('%.1024r has type %s, but expected one of: %s' %
    136                  (proposed_value, type(proposed_value), self._acceptable_types))
--> 137       raise TypeError(message)
    138     # Some field types(float, double and bool) accept other types, must
    139     # convert to the correct type in such cases.

TypeError: True has type <class 'numpy.bool_'>, but expected one of: (<class 'bool'>, <class 'numbers.Integral'>)

Could you please advise how to fix this?

Thanks

The output of pip freeze

adal==1.2.7
antlr4-python3-runtime==4.8
anyio @ file:///C:/ci/anyio_1620153418380/work/dist
applicationinsights==0.11.10
argon2-cffi @ file:///C:/ci/argon2-cffi_1613037959010/work
async-generator @ file:///home/ktietz/src/ci/async_generator_1611927993394/work
attrs @ file:///tmp/build/80754af9/attrs_1620827162558/work
azure-common==1.1.27
azure-core==1.14.0
azure-graphrbac==0.61.1
azure-identity==1.4.1
azure-mgmt-authorization==0.61.0
azure-mgmt-containerregistry==2.8.0
azure-mgmt-keyvault==2.2.0
azure-mgmt-resource==13.0.0
azure-mgmt-storage==11.2.0
azure-storage-blob==12.8.1
azureml-accel-models==1.29.0
azureml-core==1.29.0
azureml-datadrift==1.29.0
azureml-dataprep==2.15.1
azureml-dataprep-native==33.0.0
azureml-dataprep-rslex==1.13.0
azureml-dataset-runtime==1.29.0
azureml-interpret==1.29.0
azureml-pipeline-core==1.29.0
azureml-telemetry==1.29.0
azureml-widgets==1.29.0
Babel @ file:///tmp/build/80754af9/babel_1620871417480/work
backcall @ file:///home/ktietz/src/ci/backcall_1611930011877/work
backports.tempfile==1.0
backports.weakref==1.0.post1
bleach @ file:///tmp/build/80754af9/bleach_1612211392645/work
brotlipy==0.7.0
certifi==2020.12.5
cffi @ file:///C:/ci/cffi_1613247279197/work
chardet @ file:///C:/ci/chardet_1607690654534/work
cloudpickle==1.6.0
colorama @ file:///tmp/build/80754af9/colorama_1607707115595/work
contextlib2==0.6.0.post1
cryptography @ file:///C:/ci/cryptography_1616769344312/work
cycler==0.10.0
decorator @ file:///tmp/build/80754af9/decorator_1621259047763/work
defusedxml @ file:///tmp/build/80754af9/defusedxml_1615228127516/work
distro==1.5.0
docker==4.4.4
dotnetcore2==2.1.20
entrypoints==0.3
fusepy==3.0.1
greenlet==1.1.0
grpcio==1.38.0
idna @ file:///home/linux1/recipes/ci/idna_1610986105248/work
importlib-metadata @ file:///C:/ci/importlib-metadata_1617877484576/work
interpret-community==0.17.2
interpret-core==0.2.4
ipykernel @ file:///C:/ci/ipykernel_1596190155316/work/dist/ipykernel-5.3.4-py3-none-any.whl
ipython @ file:///C:/ci/ipython_1617121002983/work
ipython-genutils @ file:///tmp/build/80754af9/ipython_genutils_1606773439826/work
ipywidgets==7.6.3
isodate==0.6.0
jedi==0.17.0
jeepney==0.6.0
Jinja2 @ file:///tmp/build/80754af9/jinja2_1621238361758/work
jmespath==0.10.0
joblib==1.0.1
json5==0.9.5
jsonpickle==2.0.0
jsonschema @ file:///tmp/build/80754af9/jsonschema_1602607155483/work
jupyter-client @ file:///tmp/build/80754af9/jupyter_client_1616770841739/work
jupyter-core @ file:///C:/ci/jupyter_core_1612213356021/work
jupyter-packaging @ file:///tmp/build/80754af9/jupyter-packaging_1613502826984/work
jupyter-server @ file:///C:/ci/jupyter_server_1616084298403/work
jupyterlab @ file:///tmp/build/80754af9/jupyterlab_1619133235951/work
jupyterlab-pygments @ file:///tmp/build/80754af9/jupyterlab_pygments_1601490720602/work
jupyterlab-server @ file:///tmp/build/80754af9/jupyterlab_server_1617134334258/work
jupyterlab-widgets==1.0.0
kiwisolver==1.3.1
lightgbm==3.2.1
MarkupSafe @ file:///C:/ci/markupsafe_1621528314575/work
matplotlib==3.2.1
mistune==0.8.4
msal==1.12.0
msal-extensions==0.2.2
msrest==0.6.21
msrestazure==0.6.4
nbclassic @ file:///tmp/build/80754af9/nbclassic_1616085367084/work
nbclient @ file:///tmp/build/80754af9/nbclient_1614364831625/work
nbconvert @ file:///C:/ci/nbconvert_1601914925608/work
nbformat @ file:///tmp/build/80754af9/nbformat_1617383369282/work
ndg-httpsclient==0.5.1
nest-asyncio @ file:///tmp/build/80754af9/nest-asyncio_1613680548246/work
notebook @ file:///C:/ci/notebook_1621528634641/work
numpy==1.19.3
oauthlib==3.1.0
opendp-smartnoise==0.1.3.1
opendp-smartnoise-core==0.2.2
packaging @ file:///tmp/build/80754af9/packaging_1611952188834/work
pandas==1.2.4
pandasql==0.7.3
pandocfilters @ file:///C:/ci/pandocfilters_1605102497129/work
parso @ file:///tmp/build/80754af9/parso_1617223946239/work
pathspec==0.8.1
patsy==0.5.1
pickleshare @ file:///tmp/build/80754af9/pickleshare_1606932040724/work
portalocker==1.7.1
prometheus-client @ file:///tmp/build/80754af9/prometheus_client_1618088486455/work
prompt-toolkit @ file:///tmp/build/80754af9/prompt-toolkit_1616415428029/work
protobuf==3.17.1
py4j==0.10.9
pyarrow==3.0.0
pyasn1==0.4.8
pycparser @ file:///tmp/build/80754af9/pycparser_1594388511720/work
Pygments @ file:///tmp/build/80754af9/pygments_1621606182707/work
PyJWT==2.1.0
pyOpenSSL @ file:///tmp/build/80754af9/pyopenssl_1608057966937/work
pyparsing @ file:///home/linux1/recipes/ci/pyparsing_1610983426697/work
pyrsistent @ file:///C:/ci/pyrsistent_1600141795814/work
PySocks @ file:///C:/ci/pysocks_1605287845585/work
pyspark==3.1.1
python-dateutil @ file:///home/ktietz/src/ci/python-dateutil_1611928101742/work
pytz @ file:///tmp/build/80754af9/pytz_1612215392582/work
pywin32==227
pywinpty==0.5.7
PyYAML==5.4.1
pyzmq==20.0.0
requests @ file:///tmp/build/80754af9/requests_1608241421344/work
requests-oauthlib==1.3.0
ruamel.yaml==0.17.4
ruamel.yaml.clib==0.2.2
scikit-learn==0.24.2
scipy==1.6.3
SecretStorage==3.3.1
Send2Trash @ file:///tmp/build/80754af9/send2trash_1607525499227/work
shap==0.34.0
six @ file:///C:/ci/six_1605187374963/work
sniffio @ file:///C:/ci/sniffio_1614030707456/work
SQLAlchemy==1.4.15
statsmodels==0.12.2
terminado==0.9.4
testpath @ file:///home/ktietz/src/ci/testpath_1611930608132/work
threadpoolctl==2.1.0
tornado @ file:///C:/ci/tornado_1606942392901/work
tqdm==4.60.0
traitlets @ file:///home/ktietz/src/ci/traitlets_1611929699868/work
urllib3 @ file:///tmp/build/80754af9/urllib3_1615837158687/work
wcwidth @ file:///tmp/build/80754af9/wcwidth_1593447189090/work
webencodings==0.5.1
websocket-client==1.0.1
widgetsnbextension==3.5.1
win-inet-pton @ file:///C:/ci/win_inet_pton_1605306167264/work
wincertstore==0.2
zipp @ file:///tmp/build/80754af9/zipp_1615904174917/work
Note: you may need to restart the kernel to use updated packages.

04 Run Experiments Still Doesn't Complete.

WARNING:root:The conda environment is currently locked by another AzureML job. Further job submission will wait until the other process finishes. If there are no other jobs running, please delete /home/azureuser/.azureml/locks/azureml_conda_lock

LOG :
Waiting for other Conda operations to finish...
Delete /home/azureuser/.azureml/locks/azureml_conda_lock to override.

Status :
Preparing

4 Experiment
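As the warning states, if no other job is actually running, the stale lock file can be removed from a notebook cell before resubmitting (a sketch; the path is taken from the warning above):

# Remove the stale conda lock that is blocking environment creation
!rm -f /home/azureuser/.azureml/locks/azureml_conda_lock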

Attribute error when running DP-100 notebook labs

Hi Team,

This may be more of an issue on my side, but I am unable to get the notebooks working.
When I run mslearn labs 4 and 5, I get the following error:
AttributeError: 'MSIAuthentication' object has no attribute 'get_token'

Apologies if I've missed something.

Thanks

Failure when testing deployed web service

Lab 02 - Use Automated Machine Learning

When testing the deployed web service as per the written instructions under "Test the deployed service", a 502 error is returned. All steps were performed as written, and the model itself works properly. It fails both with an Azure trial pass and with an MSDN pay-as-you-go subscription, in US East, US Central and US West.

(screenshot attached)

Issue with instructions/03-azureml-designer.md

The following chunk in instructions/03-azureml-designer.md, when copied into the lab, is pasted with the wrong indentation and some extra closing brackets after the return line.

import pandas as pd

def azureml_main(dataframe1 = None, dataframe2 = None):

    scored_results = dataframe1[['PatientID', 'Scored Labels', 'Scored Probabilities']]
    scored_results.rename(columns={'Scored Labels':'DiabetesPrediction',
                                    'Scored Probabilities':'Probability'},
                            inplace=True)
    return scored_results

Lab 13 Explore Differential Privacy

Hello:

We are not able to run the pip install commands to add the required tools, for example:

pip install opendp-smartnoise==0.1.4.2

We get a "pip: command not found" message.
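If the plain pip command is not found in the notebook environment, one possible workaround is to invoke pip through the notebook's magic syntax so it runs against the active kernel (same package pin as above):

%pip install opendp-smartnoise==0.1.4.2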

The next cell, after setting up the data, then fails to execute since the package is missing:

import opendp.smartnoise.core as sn

cols = list(diabetes.columns)
age_range = [0.0, 120.0]
samples = len(diabetes)

with sn.Analysis() as analysis:
    # load data
    data = sn.Dataset(path=data_path, column_names=cols)

    # Convert Age to float
    age_dt = sn.to_float(data['Age'])

    # get mean of age
    age_mean = sn.dp_mean(data = age_dt,
                          privacy_usage = {'epsilon': .50},
                          data_lower = age_range[0],
                          data_upper = age_range[1],
                          data_rows = samples
                         )

analysis.release()

# print differentially private estimate of mean age
print("Private mean age:",age_mean.value)

# print actual mean age
print("Actual mean age:",diabetes.Age.mean())

"Create a real-time inferencing service" entry script doesn't recognise azureml.api

Hello! I am working my way through this azureML course - these notebooks are very helpful, thank you!

In 09 - Create a real-time inferencing service I am hitting an error that I cannot find a solution to. The model cannot seem to deploy. I have deleted it and tried again, tried changing Python versions, and tried adding more imports to the entry script.

To be precise, when I run the deployment cell I get:

Error in entry script, ModuleNotFoundError: No module named 'azureml.api', please run print(service.get_logs()) to get details.

and the model fails to deploy, giving me the error:

Error:
{
  "code": "AciDeploymentFailed",
  "statusCode": 400,
  "message": "Aci Deployment failed with exception: Error in entry script, ModuleNotFoundError: No module named 'azureml.api', please run print(service.get_logs()) to get details.",
  "details": [
    {
      "code": "CrashLoopBackOff",
      "message": "Error in entry script, ModuleNotFoundError: No module named 'azureml.api', please run print(service.get_logs()) to get details."
    }
  ]
}

---------------------------------------------------------------------------
WebserviceException                       Traceback (most recent call last)
<ipython-input-9-ca23bdb13d88> in <module>
     13 service = Model.deploy(ws, service_name, [model], inference_config, deployment_config)
     14 
---> 15 service.wait_for_deployment(True)
     16 print(service.state)

/anaconda/envs/azureml_py36/lib/python3.6/site-packages/azureml/core/webservice/webservice.py in wait_for_deployment(self, show_output, timeout_sec)
    923                                           'Error:\n'
    924                                           '{}'.format(self.state, self._operation_endpoint.split('/')[-1],
--> 925                                                       logs_response, format_error_response), logger=module_logger)
    926             print('{} service creation operation finished, operation "{}"'.format(self._webservice_type,
    927                                                                                   operation_state))

WebserviceException: WebserviceException:
	Message: Service deployment polling reached non-successful terminal state, current service state: Failed
Operation ID: 8243c85c-0232-4b65-8c23-710ae73d1a63
More information can be found using '.get_logs()'
Error:
{
  "code": "AciDeploymentFailed",
  "statusCode": 400,
  "message": "Aci Deployment failed with exception: Error in entry script, ModuleNotFoundError: No module named 'azureml.api', please run print(service.get_logs()) to get details.",
  "details": [
    {
      "code": "CrashLoopBackOff",
      "message": "Error in entry script, ModuleNotFoundError: No module named 'azureml.api', please run print(service.get_logs()) to get details."
    }
  ]
}
	InnerException None
	ErrorResponse 
{
    "error": {
        "message": "Service deployment polling reached non-successful terminal state, current service state: Failed\nOperation ID: 8243c85c-0232-4b65-8c23-710ae73d1a63\nMore information can be found using '.get_logs()'\nError:\n{\n  \"code\": \"AciDeploymentFailed\",\n  \"statusCode\": 400,\n  \"message\": \"Aci Deployment failed with exception: Error in entry script, ModuleNotFoundError: No module named 'azureml.api', please run print(service.get_logs()) to get details.\",\n  \"details\": [\n    {\n      \"code\": \"CrashLoopBackOff\",\n      \"message\": \"Error in entry script, ModuleNotFoundError: No module named 'azureml.api', please run print(service.get_logs()) to get details.\"\n    }\n  ]\n}"
    }
}

I have tried this in the browser instance of jupyter notebooks provided in AzureML, and also on my local version of Visual Studio Code using the AzureML plugin.

Can I please get some advice on how to proceed?

Thanks
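For reference, one thing worth double-checking (a hedged sketch, not a confirmed fix): the environment passed to the InferenceConfig should include azureml-defaults, which provides the scoring-server components the entry script relies on. The folder variable and the entry script name below are assumptions for illustration, not taken verbatim from the lab.

from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies
from azureml.core.model import InferenceConfig

# Environment for the scoring service, with azureml-defaults explicitly included
service_env = Environment('diabetes-service-env')
service_env.python.conda_dependencies = CondaDependencies.create(
    conda_packages=['scikit-learn'],
    pip_packages=['azureml-defaults'])

# Inference configuration pointing at the entry script (names here are illustrative)
inference_config = InferenceConfig(source_directory=deployment_folder,  # hypothetical folder variable
                                   entry_script='score_diabetes.py',
                                   environment=service_env)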

Application Insights requests are failing from notebooks

In the last couple of classes I have delivered, the App Insights requests go through to the service, but fail at the server level with a "missing swagger" error. I'm not sure what this means, but I've included a screenshot of what I am seeing. There are no outputs to review because the requests fail.

(screenshot attached)

* please remove *

Lab 02 - Use Automated Machine Learning
When testing the deployed web service as per the instructions, a 502 error is returned; testing the model itself is successful. All steps were performed as written. Tried with both an Azure trial pass and a regular pay-as-you-go subscription.

Change in PipelineData object to OutputFileDatasetConfig

I am an instructor for this course and noticed the change from the PipelineData object. Since this specific object is identified in the slide deck:

Will we need to update the slide deck to reference the new object, or will the exam still reference the PipelineData object?

If it still references the PipelineData object, why was this change made?

Thanks, this information will help me with future instruction.

Outdated answer on "Module 6 - Review Questions"

Hello,

I am taking the ESI course "DP-100 Designing and Implementing a Data Science Solution on Azure", and in the LabOnDemand module 6 questions there is an outdated answer (thanks to Oleksiy Nazarenko for spotting it):

(screenshot attached)

The OutputFileDatasetConfig object is a special kind of data reference that is used for interim storage locations that can be passed between pipeline steps, so you'll create one and use it as the output for the first step and the input for the second step.
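A minimal sketch of the pattern described above, assuming two pipeline steps and a run configuration like the ones used in the pipeline lab; the script names, folder, and compute variables here are illustrative rather than taken from the repo:

from azureml.data import OutputFileDatasetConfig
from azureml.pipeline.steps import PythonScriptStep

# Interim storage location that can be passed between pipeline steps
prepped_data = OutputFileDatasetConfig('prepped_data')

# Step 1: writes its output to the interim location (passed as a script argument)
prep_step = PythonScriptStep(name='Prepare data',
                             source_directory=experiment_folder,        # illustrative variable
                             script_name='prep_diabetes.py',            # illustrative script name
                             arguments=['--prepped-data', prepped_data],
                             compute_target=pipeline_cluster,           # illustrative variable
                             runconfig=pipeline_run_config)             # illustrative variable

# Step 2: consumes the same location as an input
train_step = PythonScriptStep(name='Train and register model',
                              source_directory=experiment_folder,
                              script_name='train_diabetes.py',
                              arguments=['--training-data', prepped_data.as_input()],
                              compute_target=pipeline_cluster,
                              runconfig=pipeline_run_config)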

Thank you

Suggestion - Add - Install the Azure Machine Learning SDK

Add:
Install the Azure Machine Learning SDK
The Azure Machine Learning SDK is updated frequently. Run the following cell to upgrade to the latest release, along with the additional package to support notebook widgets.

In [ ]: !pip install --upgrade azureml-sdk azureml-widgets

To: https://github.com/MicrosoftLearning/mslearn-dp100/blob/main/04%20-%20Run%20Experiments.ipynb

Reason:

  • 02-Experiments is the first step-by-step exercise in this module.
  • Most of the subsequent exercises have this step.

Error in 06-Work With Data

I am getting a deprecation error in the notebook "06-Work With Data":

default_ds.upload_files(files=['./data/diabetes.csv', './data/diabetes2.csv'], # Upload the diabetes csv files in /data
                       target_path='diabetes-data/', # Put it in a folder path in the datastore
                       overwrite=True, # Replace existing files of the same name
                       show_progress=True)

"datastore.upload_files" is deprecated after version 1.0.69. Please use "FileDatasetFactory.upload_directory" instead. See Dataset API change notice at https://aka.ms/dataset-deprecation.

Code produces errors

The workspace cannot be loaded; it fails with the following error:

TypeError: _get_ambient_new() takes 1 positional argument but 2 were given.

It is not possible to call the Workspace class and save it to the variable ws (MS Learning Path module 01).

Using the following code also fails to load the workspace.

from azureml.core import Workspace

subscription_id = ""
resource_group = ""
workspace_name = ""

ws = Workspace(subscription_id = subscription_id, resource_group = resource_group, workspace_name = workspace_name)
ws.write_config()

04 Run Experiments Doesn't Complete.

I have run 04 - Run Experiments.ipynb multiple times now in many new AMLS workspaces and have never managed to get past cell 9 ("The following cell configures and submits the script-based experiment."). The run hangs in "Preparing" with continuously repeated warnings: "Warning: you have pip-installed dependencies in your environment file, but you do not list pip itself as one of your conda dependencies. Conda may not use the correct pip to install your packages, and they may end up in the wrong place. Please add an explicit pip dependency. I'm adding one for you, but still nagging you." The https://.blob.core.windows.net/azureml/ExperimentRun/dcid.mslearn-diabetes_1620313789_c6321d70/azureml-logs/60_control_log.txt file grows to over 300 MB.
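One way to make the pip dependency explicit when defining the environment in code, which is what the warning asks for (the package list here is illustrative, based on the earlier lab notebooks rather than notebook 04 specifically):

from azureml.core import Environment
from azureml.core.conda_dependencies import CondaDependencies

# List pip itself as a conda package so conda uses the matching pip for the pip packages
packages = CondaDependencies.create(conda_packages=['pip', 'scikit-learn'],
                                    pip_packages=['azureml-defaults'])

experiment_env = Environment('experiment-env')
experiment_env.python.conda_dependencies = packages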

Create a batch inferencing module error due to undersized compute configuration

I am getting the following error when stepping through the scripts for the batch inferencing module:

"Job submission to AzureML Compute encountered an Exception with status code JobNodeCountExceedsCoreQuota, The specified subscription has a Standard DSv2 family vCPU quota of 2 and is less than the requested job node count of 2 which maps to 4 vCPUs. Talk to your Subscription Admin or refer to https://docs.microsoft.com/azure/machine-learning/how-to-manage-quotas#request-quota-increases to increase the Standard DSv2 family vCPU quota\"

Not however that the script provided includes the following:

try:
    # Check for existing compute target
    inference_cluster = ComputeTarget(workspace=ws, name=cluster_name)
    print('Found existing cluster, use it.')
except ComputeTargetException:
    # If it doesn't already exist, create it
    try:
        compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS11_V2', max_nodes=2)
        inference_cluster = ComputeTarget.create(ws, cluster_name, compute_config)
        inference_cluster.wait_for_completion(show_output=True)
    except Exception as ex:
        print(ex)

This suggests that the default quota is not enough for the above compute configuration.
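One possible workaround under the default 2-vCPU Standard DSv2 quota, sketched below: provision a single node (STANDARD_DS11_V2 uses 2 vCPUs per node), or request a quota increase as the error message suggests.

from azureml.core.compute import AmlCompute, ComputeTarget
from azureml.core.compute_target import ComputeTargetException

try:
    # Check for an existing compute target
    inference_cluster = ComputeTarget(workspace=ws, name=cluster_name)
    print('Found existing cluster, use it.')
except ComputeTargetException:
    # Create a single-node cluster so the job fits within a 2 vCPU quota
    compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_DS11_V2', max_nodes=1)
    inference_cluster = ComputeTarget.create(ws, cluster_name, compute_config)
    inference_cluster.wait_for_completion(show_output=True)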

RuntimeError on Working With Data (6)

Getting this error when running the notebook called "06 - Work with Data.ipynb".

The error happens when running the block of code below


from azureml.core import Experiment, ScriptRunConfig, Environment
from azureml.widgets import RunDetails


# Create a Python environment for the experiment (from a .yml file)
env = Environment.from_conda_specification("experiment_env", "environment.yml")

# Get the training dataset
diabetes_ds = ws.datasets.get("diabetes dataset")

# Create a script config
script_config = ScriptRunConfig(source_directory=experiment_folder,
                              script='diabetes_training.py',
                              arguments = ['--regularization', 0.1, # Regularizaton rate parameter
                                           '--input-data', diabetes_ds.as_named_input('training_data')], # Reference to dataset
                              environment=env) 

# submit the experiment
experiment_name = 'mslearn-train-diabetes'
experiment = Experiment(workspace=ws, name=experiment_name)
run = experiment.submit(config=script_config)
RunDetails(run).show()
run.wait_for_completion()

The Stacktrace from logs below:

Traceback (most recent call last):
  File "diabetes_training.py", line 27, in <module>
    diabetes = run.input_datasets['training_data'].to_pandas_dataframe()
  File "/home/azureuser/.azureml/envs/azureml_809a074975457de1dd27bdfcf2d79d61/lib/python3.6/site-packages/azureml/data/_loggerfactory.py", line 132, in wrapper
    return func(*args, **kwargs)
  File "/home/azureuser/.azureml/envs/azureml_809a074975457de1dd27bdfcf2d79d61/lib/python3.6/site-packages/azureml/data/tabular_dataset.py", line 168, in to_pandas_dataframe
    dataflow = get_dataflow_for_execution(self._dataflow, 'to_pandas_dataframe', 'TabularDataset')
  File "/home/azureuser/.azureml/envs/azureml_809a074975457de1dd27bdfcf2d79d61/lib/python3.6/site-packages/azureml/data/_loggerfactory.py", line 132, in wrapper
    return func(*args, **kwargs)
  File "/home/azureuser/.azureml/envs/azureml_809a074975457de1dd27bdfcf2d79d61/lib/python3.6/site-packages/azureml/data/abstract_dataset.py", line 217, in _dataflow
    dataprep().api._datastore_helper._set_auth_type(self._registration.workspace)
  File "/home/azureuser/.azureml/envs/azureml_809a074975457de1dd27bdfcf2d79d61/lib/python3.6/site-packages/azureml/dataprep/api/_datastore_helper.py", line 185, in _set_auth_type
    get_engine_api().set_aml_auth(SetAmlAuthMessageArgument(auth_type, json.dumps(auth_value)))
  File "/home/azureuser/.azureml/envs/azureml_809a074975457de1dd27bdfcf2d79d61/lib/python3.6/site-packages/azureml/dataprep/api/engineapi/api.py", line 19, in get_engine_api
    _engine_api = EngineAPI()
  File "/home/azureuser/.azureml/envs/azureml_809a074975457de1dd27bdfcf2d79d61/lib/python3.6/site-packages/azureml/dataprep/api/engineapi/api.py", line 110, in __init__
    self._message_channel = launch_engine()
  File "/home/azureuser/.azureml/envs/azureml_809a074975457de1dd27bdfcf2d79d61/lib/python3.6/site-packages/azureml/dataprep/api/engineapi/engine.py", line 333, in launch_engine
    dependencies_path = runtime.ensure_dependencies()
  File "/home/azureuser/.azureml/envs/azureml_809a074975457de1dd27bdfcf2d79d61/lib/python3.6/site-packages/dotnetcore2/runtime.py", line 289, in ensure_dependencies
    raise RuntimeError(err_msg)
RuntimeError: Unable to retrieve .NET dependencies. Please make sure you are connected to the Internet and have a stable network connection.

It seems like the dotnetcore2 package is trying to download dependencies but failing. This may be due to a corporate firewall. If this is the case, is it possible to install these manually?

Error Loading Notebooks

When I try to load a lab notebook in VS Code, the page is blank. When I try to load the same lab notebook in Jupyter Notebooks via Anaconda, I get an error message:

Error Loading Notebook
"Unreadable Notebook: C:\Users\pusongan\Azure-Training\mslearn-dp100\08 - Create a Pipeline.ipynb NotJSONError("Notebook does not appear to be JSON: '\n\n\n\n\n\n\n<html lang...")"

(screenshots attached)

AutoML Lab vague sentence

In the AutoML lab there is the following sentence:

Blocked algorithms: Leave all algorithms selected

This is confusing since the selected algorithms are actually blocked.

Please rephrase.

-Tycho

Proposal and question

  1. Proposal for labs 04a / 04b: mention in the markdown that paths should be adjusted for Windows to this format: .\data\diabetes2.csv (the code uses a forward slash, ./data/diabetes2.csv, which causes an error on Windows; I ran it under Visual Studio Code). A platform-neutral alternative is sketched after this list.
  2. Question for lab 03A: the run gets stuck in the Starting status ("Your job is submitted in Azure cloud and we are monitoring to get logs..."), and it is not clear why that happens or how to troubleshoot it. Any advice? I tried the same with another script in 04B and got the same problem.
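A minimal, platform-neutral sketch for building the path mentioned in item 1; os.path.join picks the correct separator for the operating system:

import os

# Builds 'data\diabetes2.csv' on Windows and 'data/diabetes2.csv' on Linux/macOS
data_path = os.path.join('data', 'diabetes2.csv')
print(data_path)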

14 - Interpret Models

I just tested this notebook with the Runtime: Python 3.8 - Azure ML
Cell No. 10

Error occurred: Unable to run conda package manager. AzureML uses conda to provision python environments from a dependency specification.

[2022-04-08T14:13:20.646586] Using urllib.request Python 2
Streaming log file azureml-logs/60_control_log.txt
Starting the daemon thread to refresh tokens in background for process with pid = 4641
Running: ['/bin/bash', '/tmp/azureml_runs/mslearn-diabetes-explain_1649427199_4886c13b/azureml-environment-setup/conda_env_checker.sh']
Materialized conda environment not found on target: /home/azureuser/.azureml/envs/azureml_4612c5564caffc186e5bd2016e43147c
[2022-04-08T14:13:20.755812] Logging experiment preparation status in history service.
Running: ['/bin/bash', '/tmp/azureml_runs/mslearn-diabetes-explain_1649427199_4886c13b/azureml-environment-setup/conda_env_builder.sh']
Running: [u'conda', '--version']
[Errno 2] No such file or directory
()
Unable to run conda package manager. AzureML uses conda to provision python
environments from a dependency specification. To manage the python environment
manually instead, set userManagedDependencies to True in the python environment
configuration. To use system managed python environments, install conda from:
https://conda.io/miniconda.html
()
[2022-04-08T14:13:22.165785] Logging error in history service: Failed to run ['/bin/bash', '/tmp/azureml_runs/mslearn-diabetes-explain_1649427199_4886c13b/azureml-environment-setup/conda_env_builder.sh']
Exit code 1
Details can be found in azureml-logs/60_control_log.txt log file.

Uploading control log...

Error occurred: Unable to run conda package manager. AzureML uses conda to provision python
environments from a dependency specification. To manage the python environment
manually instead, set userManagedDependencies to True in the python environment
configuration. To use system managed python environments, install conda from:
https://conda.io/miniconda.html

Thank you

Attribute error when running mslearn-dp100 labs

Hi Team

I'm still getting attribute errors when running the labs.
In all the labs this occurs whenever I want to connect to my workspace, preventing me from progressing through the rest of the notebook.

Error:
"AttributeError: 'MSIAuthentication' object has no attribute 'get_token'"

(screenshot attached)

I'm unsure why this is happening. Do I require an update, or is there something I have missed?

Thanks

09 - Create a Real-time Inferencing Service.ipynb does not mention auth_enabled/ssl_enabled

The notebook contains the following section

You've deployed your web service as an Azure Container Instance (ACI) service that
requires no authentication. This is fine for development and testing, but for
production you should consider deploying to an Azure Kubernetes Service (AKS) cluster
and enabling token-based authentication. 

Setting auth_enabled=True would allow ACI to use authentication. However, the issue is that ACI does not support a managed SSL certificate from Microsoft, which is the main limitation of ACI with respect to securing authentication.
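For reference, a minimal sketch of an ACI deployment configuration with key-based authentication enabled; as noted above, the endpoint still has no Microsoft-managed SSL certificate unless you supply your own.

from azureml.core.webservice import AciWebservice

# ACI deployment configuration with authentication enabled (keys are generated for the service)
deployment_config = AciWebservice.deploy_configuration(cpu_cores=1,
                                                       memory_gb=1,
                                                       auth_enabled=True)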

Move instructions into notebooks

Hi @GraemeMalcolm,

Could we move the instruction markdown information into the Jupyter notebooks?

I know the first lab creates the compute instance which is where Jupyter is launched from. This information could be on the Readme page as a first step or course setup. Once the students finish that part of the labs, they could then focus on the notebooks and not have to jump between two sources of information.

The instructor is always there to help in the mornings to get the students to start the instance.

Issue with Designer Inference Pipeline instructions

Hi,

I followed the instructions in 03-azureml-designer.md and was able to create and run the training pipeline fine. I then deployed the inference pipeline. However, the run of the inference pipeline failed with the error below:
"AzureMLCompute job failed.
JobFailed: Submitted script failed with a non-zero exit code; see the driver log file for details.
Reason: Job failed with non-zero exit Code"

Something does not seem right in the instructions. The normalization step did not include the Diabetic column, so it's not clear why the transformation step is looking for this column. The training dataset was removed and replaced with manual input as specified in the instructions.
Below is the error log:
File "/azureml-envs/azureml_27ff1befbcbf963c2543a3994cfbad97/lib/python3.6/site-packages/azureml/studio/modulehost/module_reflector.py", line 76, in wrapper
ret = func(*args, **validated_args)
File "/azureml-envs/azureml_27ff1befbcbf963c2543a3994cfbad97/lib/python3.6/site-packages/azureml/studio/modules/ml/score/apply_transformation/apply_transformation.py", line 51, in run
return transform.apply(data),
File "/azureml-envs/azureml_27ff1befbcbf963c2543a3994cfbad97/lib/python3.6/site-packages/azureml/studio/core/logger.py", line 209, in wrapper
ret = func(*args, **kwargs)
File "/azureml-envs/azureml_27ff1befbcbf963c2543a3994cfbad97/lib/python3.6/site-packages/azureml/studio/modules/datatransform/scale_and_reduce/normalize_data/nomalize_transformer.py", line 68, in apply
ErrorMapping.throw(ColumnNotFoundError(column_name))
File "/azureml-envs/azureml_27ff1befbcbf963c2543a3994cfbad97/lib/python3.6/site-packages/azureml/studio/common/error.py", line 821, in throw
raise err
azureml.studio.common.error.ColumnNotFoundError: Column with name or index "Diabetic" not found.

Issue on Module16

Module: 16
Cell: 05
Issue: After running the script in cell 4 that downloads score_diabetes.py, I'm facing an issue saying that score_diabetes.py doesn't exist in the diabetes_service folder.

(screenshot attached)

How to resolve:

  • Replace Script_file with Script_path (score_diabetes.py will then be found in diabetes_service); it will no longer show the message saying the file doesn't exist, and it will start deploying the model.

(screenshot attached)

Please make this change in the repo, @GraemeMalcolm @Resseguie.

Error on Workspace.get_from_config()

There's a known issue with SDK version 1.21.0 (released Jan 25th 2021) that causes an error when calling Workspace.get_from_config() on a compute instance. For the time being, don't upgrade (just use SDK 1.19.0, which is installed on Compute Instances by default)
