
Create a Classification Model with Azure Machine Learning Designer [English]

  • 0:02 - 0:05
    great so I think we can start since the
  • 0:05 - 0:08
    meeting is recorded So if everyone uh
  • 0:08 - 0:11
    jumps in later can watch the
  • 0:11 - 0:12
    recording
  • 0:12 - 0:16
    so hi everyone and welcome to this
  • 0:16 - 0:18
    um Cloud skill challenge study session
  • 0:18 - 0:21
    around creating classification models
  • 0:21 - 0:24
    with Azure machine learning designer
  • 0:24 - 0:27
    so today I'm thrilled to be here with
  • 0:27 - 0:30
    John uh John would you briefly introduce
  • 0:30 - 0:32
    yourself
  • 0:32 - 0:34
    uh thank you Carlotta hello everyone
  • 0:34 - 0:38
    Welcome to our Workshop today I hope
  • 0:38 - 0:41
    that you are all excited for it I am
  • 0:41 - 0:43
    John Aziz a gold Microsoft learn student
  • 0:43 - 0:47
    ambassador and I will be here with uh
  • 0:47 - 0:51
    Carlotta to do the practical part
  • 0:51 - 0:54
    about this module of the cloud skills
  • 0:54 - 0:57
    challenge thank you for having me
  • 0:57 - 0:59
    perfect thanks John so for those who
  • 0:59 - 1:03
    don't know me I'm Carlotta
  • 1:03 - 1:06
    based in Italy and focused on AI and
  • 1:06 - 1:09
    machine learning technologies and
  • 1:09 - 1:12
    their use in education
  • 1:12 - 1:13
    um so
  • 1:13 - 1:15
    um these Cloud skill challenge study
  • 1:15 - 1:18
    session is based on a learn module a
  • 1:18 - 1:22
    dedicated learn module I sent to you uh
  • 1:22 - 1:24
    the link to this module uh in the chat
  • 1:24 - 1:26
    in a way that you can follow along the
  • 1:26 - 1:29
    module if you want or just have a look at
  • 1:29 - 1:33
    the module later at your own pace
  • 1:33 - 1:34
    um
  • 1:34 - 1:37
    so before starting I would also like to
  • 1:37 - 1:41
    remind you about uh the code of
  • 1:41 - 1:43
    conduct and guidelines of our student
  • 1:43 - 1:48
    Ambassadors community so please during this
  • 1:48 - 1:51
    meeting be respectful and inclusive and
  • 1:51 - 1:54
    be friendly open and welcoming and
  • 1:54 - 1:56
    respectful of each other's
  • 1:56 - 1:58
    differences
  • 1:58 - 2:01
    if you want to learn more about the code
  • 2:01 - 2:04
    of conduct you can use this link in the
  • 2:04 - 2:09
    deck aka.ms slash s-a-c-o-c
  • 2:10 - 2:12
    and now we are
  • 2:12 - 2:15
    um we are ready to to start our session
  • 2:15 - 2:19
    so as we mentioned we are going to
  • 2:19 - 2:22
    focus on classification models and Azure
  • 2:22 - 2:25
    ml uh today so first of all we are going
  • 2:25 - 2:29
    to um identify uh the kind of
  • 2:29 - 2:31
    um of scenarios in which you should
  • 2:31 - 2:34
    choose to use a classification model
  • 2:34 - 2:37
    we're going to introduce Azure machine
  • 2:37 - 2:39
    learning and Azure machine learning designer
  • 2:39 - 2:42
    we're going to understand uh which are
  • 2:42 - 2:44
    the steps to follow to create a
  • 2:44 - 2:46
    classification model in Azure machine
  • 2:46 - 2:49
    learning and then John will
  • 2:49 - 2:50
    um
  • 2:50 - 2:52
    lead an amazing demo about training and
  • 2:52 - 2:54
    Publishing a classification model in
  • 2:54 - 2:57
    Azure ml designer
  • 2:57 - 3:00
    so let's start from the beginning let's
  • 3:00 - 3:03
    start from identifying classification
  • 3:03 - 3:05
    machine learning scenarios
  • 3:05 - 3:08
    so first of all what is classification
  • 3:08 - 3:10
    classification is a form of machine
  • 3:10 - 3:12
    learning that is used to predict which
  • 3:12 - 3:16
    category or class an item belongs to for
  • 3:16 - 3:17
    example we might want to develop a
  • 3:17 - 3:20
    classifier able to identify if an
  • 3:20 - 3:22
    Incoming Email should be filtered or not
  • 3:22 - 3:25
    according to the style the sender the
  • 3:25 - 3:28
    length of the email Etc in this case the
  • 3:28 - 3:30
    characteristics of the email are the
  • 3:30 - 3:31
    features
  • 3:31 - 3:34
    and the label is a classification of
  • 3:34 - 3:38
    either a zero or one representing a Spam
  • 3:38 - 3:41
    or non-spam for the incoming email so
  • 3:41 - 3:42
    this is an example of a binary
  • 3:42 - 3:44
    classifier if you want to assign
  • 3:44 - 3:46
    multiple categories to the incoming
  • 3:46 - 3:49
    email like work letters love letters
  • 3:49 - 3:52
    complaints or other categories in this
  • 3:52 - 3:54
    case a binary classifier is no longer
  • 3:54 - 3:56
    enough and we should develop a
  • 3:56 - 3:58
    multi-class classifier so classification
  • 3:58 - 4:01
    is an example of what is called
  • 4:01 - 4:03
    supervised machine learning
  • 4:03 - 4:05
    in which you train a model using data
  • 4:05 - 4:07
    that includes both the features and
  • 4:07 - 4:09
    known values for label
  • 4:09 - 4:11
    so that the model learns to fit the
  • 4:11 - 4:14
    feature combinations to the label then
  • 4:14 - 4:15
    after training has been completed you
  • 4:15 - 4:17
    can use the train model to predict
  • 4:17 - 4:20
    labels for new items for which the
  • 4:20 - 4:22
    label is unknown
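
For anyone following along in code rather than in the designer, here is a minimal sketch of that supervised-learning loop with scikit-learn; the email features, numbers and labels are invented purely to illustrate features, known labels, training and prediction:

```python
# Minimal sketch of supervised binary classification (illustrative data only).
from sklearn.linear_model import LogisticRegression

# Features per email: [length, number of links, sender previously seen (0/1)]
X_train = [[120, 0, 1], [30, 7, 0], [500, 1, 1], [15, 9, 0]]
y_train = [0, 1, 0, 1]            # known labels: 0 = not spam, 1 = spam

model = LogisticRegression().fit(X_train, y_train)

# After training, predict the label for a new email whose label is unknown.
print(model.predict([[45, 5, 0]]))  # e.g. [1] -> classified as spam
```
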
  • 4:22 - 4:25
    but let's see some examples of scenarios
  • 4:25 - 4:27
    for classification machine learning
  • 4:27 - 4:29
    models so we already mentioned an
  • 4:29 - 4:31
    example of a solution in which we would
  • 4:31 - 4:34
    need a classifier but let's explore
  • 4:34 - 4:36
    other scenarios for classification in
  • 4:36 - 4:38
    other Industries for example you can use
  • 4:38 - 4:40
    a classification model for a health
  • 4:40 - 4:44
    clinic scenario and use clinical data to
  • 4:44 - 4:46
    predict whether a patient will become sick
  • 4:46 - 4:47
    or not
  • 4:47 - 4:50
    uh you can use
  • 4:50 - 4:52
    um
  • 5:04 - 5:08
    oh sorry so when did I get muted has it been a
  • 5:08 - 5:12
    long time or you can use you can use uh
  • 5:12 - 5:14
    some models for classification for
  • 5:14 - 5:17
    example you can use — you were saying this
  • 5:17 - 5:21
    uh so I I was I was
  • 5:22 - 5:24
    saying — like you have been muted
  • 5:24 - 5:27
    for uh one second okay okay perfect
  • 5:27 - 5:30
    perfect uh yeah I was talking sorry for
  • 5:30 - 5:35
    that so I was talking about the possible
  • 5:35 - 5:37
    you can use a classification model like
  • 5:37 - 5:40
    a health clinic scenario a financial scenario
  • 5:40 - 5:42
    or a third one a business type of
  • 5:42 - 5:44
    scenario you can use characteristics of
  • 5:44 - 5:46
    a small business to predict if a new
  • 5:46 - 5:48
    venture will succeed or not for
  • 5:48 - 5:50
    example and these are all types of
  • 5:50 - 5:52
    binary classification
  • 5:52 - 5:55
    uh but today we are also going to talk
  • 5:55 - 5:57
    about Azure machine learning so let's
  • 5:57 - 5:58
    see
  • 5:58 - 6:01
    um what is Azure machine learning so
  • 6:01 - 6:02
    training and deploying an effective
  • 6:02 - 6:04
    machine learning model involves a lot of
  • 6:04 - 6:07
    work much of it time consuming and
  • 6:07 - 6:09
    resource intensive so Azure machine
  • 6:09 - 6:11
    learning is a cloud-based service that
  • 6:11 - 6:13
    helps simplify some of the tasks it
  • 6:13 - 6:16
    takes to prepare data train a model and
  • 6:16 - 6:18
    also deploy it as a predictive service
  • 6:18 - 6:20
    so it helps data scientists increase
  • 6:20 - 6:22
    their efficiency by automating many of
  • 6:22 - 6:25
    the time consuming tasks associated with
  • 6:25 - 6:28
    creating and training a model
  • 6:28 - 6:30
    and it enables them also to use
  • 6:30 - 6:32
    cloud-based compute resources that scale
  • 6:32 - 6:34
    effectively to handle large volumes of
  • 6:34 - 6:36
    data while incurring costs only when
  • 6:36 - 6:39
    actually used
  • 6:39 - 6:41
    to use Azure machine learning you
  • 6:41 - 6:43
    first need to create a
  • 6:43 - 6:45
    workspace resource in your Azure
  • 6:45 - 6:48
    subscription and you can then use this
  • 6:48 - 6:50
    workspace to manage data compute
  • 6:50 - 6:52
    resources code models and other
  • 6:52 - 6:55
    artifacts after you have created an
  • 6:55 - 6:57
    Azure machine learning workspace you can
  • 6:57 - 6:59
    develop Solutions with the Azure machine
  • 6:59 - 7:01
    learning service either with developer
  • 7:01 - 7:03
    tools or the Azure machine Learning
  • 7:03 - 7:04
    Studio web portal
  • 7:04 - 7:06
    in particular Azure Machine Learning
  • 7:06 - 7:08
    Studio is a web portal for machine
  • 7:08 - 7:10
    learning Solutions in Azure and it
  • 7:10 - 7:12
    includes a wide range of features and
  • 7:12 - 7:14
    capabilities that help data scientists
  • 7:14 - 7:16
    prepare data train models publish
  • 7:16 - 7:18
    Predictive Services and monitor also
  • 7:18 - 7:20
    their usage
  • 7:20 - 7:22
    so to begin using the web portal you
  • 7:22 - 7:24
    need to assign the workspace you created
  • 7:24 - 7:27
    in the Azure portal to the Azure machine
  • 7:27 - 7:29
    Learning Studio
  • 7:30 - 7:32
    at its core Azure machine learning is a
  • 7:32 - 7:34
    service for training and managing
  • 7:34 - 7:36
    machine learning models for which you
  • 7:36 - 7:38
    need compute resources on which to run
  • 7:38 - 7:40
    the training process
  • 7:40 - 7:44
    compute targets are um one of the main
  • 7:44 - 7:47
    basic concepts of Azure machine learning
  • 7:47 - 7:49
    they are cloud-based resources on which
  • 7:49 - 7:51
    you can run model training and data
  • 7:51 - 7:53
    exploration processes
  • 7:53 - 7:55
    so in Azure machine Learning Studio you
  • 7:55 - 7:57
    can manage the compute targets for your
  • 7:57 - 7:59
    data science activities and there are
  • 7:59 - 8:03
    four kinds of compute targets you can
  • 8:03 - 8:06
    create we have the compute instances
  • 8:06 - 8:10
    which are virtual machines set up for
  • 8:10 - 8:11
    running machine learning code during
  • 8:11 - 8:13
    development so they are not designed for
  • 8:13 - 8:14
    production
  • 8:14 - 8:17
    then we have compute clusters which are
  • 8:17 - 8:20
    a set of virtual machines that can scale
  • 8:20 - 8:22
    up automatically based on traffic
  • 8:22 - 8:25
    we have inference clusters which are
  • 8:25 - 8:27
    similar to compute clusters but they are
  • 8:27 - 8:29
    designed for deployment so they are a
  • 8:29 - 8:32
    deployment targets for Predictive
  • 8:32 - 8:36
    Services that use train models
  • 8:36 - 8:38
    and finally we have attached compute
  • 8:38 - 8:41
    which are any compute Target that you
  • 8:41 - 8:44
    manage yourself outside of Azure ML like
  • 8:44 - 8:47
    for example virtual machines or Azure
  • 8:47 - 8:50
    databricks clusters
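
As a rough sketch of how one of these compute targets could be provisioned outside the Studio UI, here is a hedged example with the Python SDK; the cluster name, VM size and scaling settings are illustrative choices, not requirements:

```python
# Create (or update) a small CPU compute cluster - a sketch, not the only way.
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient
from azure.ai.ml.entities import AmlCompute

ml_client = MLClient(DefaultAzureCredential(),
                     "<subscription-id>", "<resource-group>", "<workspace-name>")

cluster = AmlCompute(
    name="cpu-cluster-demo",
    size="Standard_DS11_v2",           # CPU size, similar to the one used in the demo
    min_instances=0,                   # scale to zero when idle to avoid charges
    max_instances=2,                   # scale up automatically when jobs queue up
    idle_time_before_scale_down=120,   # seconds of idleness before scaling down
)
ml_client.compute.begin_create_or_update(cluster).result()
```
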
  • 8:50 - 8:53
    so we talked about Azure machine
  • 8:53 - 8:54
    learning but we also
  • 8:54 - 8:56
    mentioned Azure machine learning
  • 8:56 - 8:58
    designer what is azure machine learning
  • 8:58 - 9:00
    designer so in Azure machine Learning
  • 9:00 - 9:03
    Studio there are several ways to author
  • 9:03 - 9:05
    classification machine learning models
  • 9:05 - 9:08
    one way is to use a visual interface and
  • 9:08 - 9:10
    this visual interface is called designer
  • 9:10 - 9:13
    and you can use it to train test and
  • 9:13 - 9:16
    also deploy machine learning models and
  • 9:16 - 9:18
    the drag and drop interface makes use of
  • 9:18 - 9:20
    clearly defined inputs and outputs that
  • 9:20 - 9:23
    can be shared reused and also version
  • 9:23 - 9:24
    controlled
  • 9:24 - 9:26
    and using the designer you can identify
  • 9:26 - 9:28
    the building blocks or components needed
  • 9:28 - 9:31
    for your model place and connect them on
  • 9:31 - 9:33
    your canvas and run a machine learning
  • 9:33 - 9:35
    job
  • 9:35 - 9:37
    so
  • 9:37 - 9:39
    um each designer project so each project
  • 9:39 - 9:42
    in the designer is known as a pipeline
  • 9:42 - 9:46
    and in the designer we have a left panel
  • 9:46 - 9:48
    for navigation and a canvas on your
  • 9:48 - 9:51
    right hand side in which you build your
  • 9:51 - 9:54
    pipeline visually so pipelines let you
  • 9:54 - 9:56
    organize manage and reuse complex
  • 9:56 - 9:58
    machine learning workflows across
  • 9:58 - 10:00
    projects and users
  • 10:00 - 10:03
    a pipeline starts with the data set from
  • 10:03 - 10:04
    which you want to train the model
  • 10:04 - 10:06
    because it all begins with data when
  • 10:06 - 10:07
    talking about data science and machine
  • 10:07 - 10:10
    learning and each time you run a
  • 10:10 - 10:11
    pipeline the configuration of the
  • 10:11 - 10:13
    pipeline and its results are stored in
  • 10:13 - 10:17
    your workspace as a pipeline job
  • 10:17 - 10:22
    so the second main concept of azure
  • 10:22 - 10:25
    machine learning is a component so going
  • 10:25 - 10:28
    hierarchically from the pipeline we can
  • 10:28 - 10:31
    say that each building block of a
  • 10:31 - 10:34
    pipeline is called a component
  • 10:34 - 10:37
    a machine learning component encapsulates one step
  • 10:37 - 10:39
    in a machine learning pipeline so it's a
  • 10:39 - 10:42
    reusable piece of code with inputs and
  • 10:42 - 10:44
    outputs something very similar to a
  • 10:44 - 10:46
    function in any programming language
  • 10:46 - 10:49
    and in a pipeline project you can access
  • 10:49 - 10:51
    data assets and components from the left
  • 10:51 - 10:53
    panel's
  • 10:53 - 10:56
    asset Library tab as you can see
  • 10:56 - 11:00
    um here in the screenshot in the deck
  • 11:00 - 11:03
    so you can create data assets using
  • 11:03 - 11:08
    a dedicated page called the Data page and a data
  • 11:08 - 11:11
    set is a reference to a data source
  • 11:11 - 11:12
    location
  • 11:12 - 11:16
    so this data source location could be a
  • 11:16 - 11:19
    local file a data store a web file or
  • 11:19 - 11:22
    even an Azure open dataset
  • 11:22 - 11:24
    and these data assets will appear along
  • 11:24 - 11:26
    with standard sample data sets in the
  • 11:26 - 11:30
    designer's asset library
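
As an illustration of the same idea in code, this is one way a web-hosted CSV could be registered as a data asset with the Python SDK; the names are placeholders and the URL is left blank because the Learn module provides the actual link:

```python
# Register a CSV reachable over the web as a data asset (sketch only).
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient
from azure.ai.ml.entities import Data
from azure.ai.ml.constants import AssetTypes

ml_client = MLClient(DefaultAzureCredential(),
                     "<subscription-id>", "<resource-group>", "<workspace-name>")

diabetes_data = Data(
    name="diabetes-data",
    description="Diabetes CSV referenced from a web URL",
    path="<csv-web-url>",        # placeholder: use the link given in the module
    type=AssetTypes.URI_FILE,
)
ml_client.data.create_or_update(diabetes_data)
```
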
  • 11:31 - 11:32
    um
  • 11:32 - 11:37
    and another basic concept of azure ml is
  • 11:37 - 11:39
    azure machine learning jobs
  • 11:39 - 11:44
    so basically when you submit a pipeline
  • 11:44 - 11:47
    you create a job which will run all the
  • 11:47 - 11:50
    steps in your pipeline so a job executes
  • 11:50 - 11:53
    a task against a specified compute
  • 11:53 - 11:54
    Target
  • 11:54 - 11:57
    jobs enable systematic tracking for your
  • 11:57 - 11:59
    machine learning experimentation in
  • 11:59 - 12:00
    Azure ml
  • 12:00 - 12:02
    and once a job is created azramel
  • 12:02 - 12:05
    maintains a run record uh for the
  • 12:05 - 12:08
    job
  • 12:08 - 12:12
    um but let's move to the classification
  • 12:12 - 12:14
    steps so
  • 12:14 - 12:17
    um let's introduce uh how to create a
  • 12:17 - 12:21
    classification model in Azure ml but you
  • 12:21 - 12:24
    will see it in more details in a
  • 12:24 - 12:26
    hands-on demo that John will guide us
  • 12:26 - 12:29
    through in a few minutes
  • 12:29 - 12:32
    so you can think of the steps to train
  • 12:32 - 12:34
    and evaluate a classification machine
  • 12:34 - 12:37
    learning model as four main steps so
  • 12:37 - 12:38
    first of all you need to prepare your
  • 12:38 - 12:41
    data so you need to identify the
  • 12:41 - 12:43
    features and the label in your data set
  • 12:43 - 12:46
    you need to pre-process so you need to
  • 12:46 - 12:49
    clean and transform the data as needed
  • 12:49 - 12:51
    then the second step of course is
  • 12:51 - 12:53
    training the model
  • 12:53 - 12:55
    and for training the model you need to
  • 12:55 - 12:57
    split the data into two groups a
  • 12:57 - 13:00
    training and a validation set
  • 13:00 - 13:01
    then you train a machine learning model
  • 13:01 - 13:04
    using the training data set and you test
  • 13:04 - 13:05
    the machine learning model for
  • 13:05 - 13:07
    performance using the validation data
  • 13:07 - 13:08
    set
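
A minimal sketch of that hold-out idea outside the designer, using scikit-learn's train_test_split on a made-up feature table:

```python
# Split illustrative data 70/30, train on one part, check accuracy on the other.
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

X = [[2, 30], [9, 15], [1, 42], [8, 11], [3, 29], [7, 18]]   # made-up features
y = [0, 1, 0, 1, 0, 1]                                        # known labels

X_train, X_val, y_train, y_val = train_test_split(
    X, y, train_size=0.7, random_state=123, stratify=y)

model = LogisticRegression().fit(X_train, y_train)
print(model.score(X_val, y_val))   # accuracy on the held-back validation rows
```
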
  • 13:08 - 13:12
    the third step is performance evaluation
  • 13:12 - 13:15
    um which means comparing how close the
  • 13:15 - 13:16
    model's predictions are to the known
  • 13:16 - 13:21
    labels and this leads us to compute some
  • 13:21 - 13:23
    evaluation performance metrics
  • 13:23 - 13:26
    and then finally
  • 13:26 - 13:30
    um so these three steps are not
  • 13:30 - 13:33
    um not performed uh every time in a
  • 13:33 - 13:35
    linear manner it's more an iterative
  • 13:35 - 13:39
    process but once you achieve
  • 13:39 - 13:43
    a performance with which you are
  • 13:43 - 13:46
    satisfied so you are ready to let's say
  • 13:46 - 13:49
    go into production and you can deploy
  • 13:49 - 13:52
    your trained model as a predictive service
  • 13:52 - 13:56
    uh to a real-time
  • 13:56 - 13:58
    endpoint and to do so you need to
  • 13:58 - 14:00
    convert the training pipeline into a
  • 14:00 - 14:03
    real-time inference Pipeline and then
  • 14:03 - 14:04
    you can deploy the model as an
  • 14:04 - 14:07
    application on a server or device so
  • 14:07 - 14:11
    that others can consume this model
  • 14:11 - 14:14
    so let's start with the first step which
  • 14:14 - 14:18
    is preparing data real-world data can contain
  • 14:18 - 14:20
    many different issues that can affect
  • 14:20 - 14:22
    the utility of the data and our
  • 14:22 - 14:25
    interpretation of the results so also
  • 14:25 - 14:27
    the machine learning model that you
  • 14:27 - 14:29
    train using this data for example real
  • 14:29 - 14:31
    world data can be affected by a bad
  • 14:31 - 14:34
    recording or a bad measurement and it
  • 14:34 - 14:36
    can also contain missing values for some
  • 14:36 - 14:39
    parameters and Azure machine learning
  • 14:39 - 14:41
    designer has several pre-built
  • 14:41 - 14:43
    components that can be used to prepare
  • 14:43 - 14:46
    data for training these components
  • 14:46 - 14:48
    enable you to clean data normalize
  • 14:48 - 14:53
    features join tables and more
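
To make the kind of preparation these components do a little more concrete, here is a tiny hand-rolled example with pandas; the columns and the fill-with-the-mean strategy are illustrative only:

```python
# Cleaning missing values by hand, roughly what a "clean data" step does.
import pandas as pd

df = pd.DataFrame({"Age": [21, None, 35], "BMI": [23.1, 30.4, None]})

cleaned = df.fillna(df.mean(numeric_only=True))  # replace gaps with the column mean
print(cleaned)
```
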
  • 14:53 - 14:57
    let's come to uh training so to train a
  • 14:57 - 14:59
    classification model you need a data set
  • 14:59 - 15:02
    that includes historical features so the
  • 15:02 - 15:04
    characteristics of the entity for which
  • 15:04 - 15:07
    we want to make a prediction and known label
  • 15:07 - 15:10
    values the label is the class indicator
  • 15:10 - 15:12
    we want to train a model to predict
  • 15:12 - 15:14
    and it's common practice to train a
  • 15:14 - 15:16
    model using a subset of the data while
  • 15:16 - 15:18
    holding back some data with which to
  • 15:18 - 15:21
    test the train model and this enables
  • 15:21 - 15:22
    you to compare the labels that the model
  • 15:22 - 15:25
    predicts with the actual known labels in
  • 15:25 - 15:27
    the original data set
  • 15:27 - 15:30
    this operation can be performed in the
  • 15:30 - 15:32
    designer using the split data component
  • 15:32 - 15:35
    as shown by the screenshot here in the
  • 15:35 - 15:37
    in the deck
  • 15:37 - 15:40
    there's also another component that you
  • 15:40 - 15:41
    should use which is the score model
  • 15:41 - 15:43
    component to generate the predicted
  • 15:43 - 15:45
    class label value using the validation
  • 15:45 - 15:48
    data as input so once you connect all
  • 15:48 - 15:50
    these components
  • 15:50 - 15:52
    um the component specifying the the
  • 15:52 - 15:55
    model we are going to use the split data
  • 15:55 - 15:57
    component the trained model component
  • 15:57 - 16:00
    and the score model component you want
  • 16:00 - 16:03
    to run a new experiment in
  • 16:03 - 16:06
    Azure ML which will use the data set
  • 16:06 - 16:10
    on the canvas to train and score a model
  • 16:10 - 16:12
    after training a model it is important
  • 16:12 - 16:15
    we said to evaluate its performance to
  • 16:15 - 16:17
    understand how good
  • 16:17 - 16:21
    our model is performing
  • 16:21 - 16:23
    and there are many performance metrics
  • 16:23 - 16:25
    and methodologies for evaluating how
  • 16:25 - 16:27
    well a model makes predictions the
  • 16:27 - 16:29
    component to use to perform evaluation
  • 16:29 - 16:32
    in Azure ml designer is called as
  • 16:32 - 16:35
    intuitive as it is evaluate model
  • 16:35 - 16:38
    once the job of training and evaluation
  • 16:38 - 16:41
    of the model is completed you can review
  • 16:41 - 16:43
    evaluation metrics on the completed job
  • 16:43 - 16:46
    Page by right clicking on the component
  • 16:46 - 16:48
    in the evaluation results you can also
  • 16:48 - 16:51
    find the so-called confusion Matrix that
  • 16:51 - 16:53
    you can see here in the right side of of
  • 16:53 - 16:55
    this deck
  • 16:55 - 16:57
    a confusion Matrix shows cases where
  • 16:57 - 16:59
    both the predicted and actual values
  • 16:59 - 17:02
    were one uh the so-called true positives
  • 17:02 - 17:04
    at the top left and also cases where
  • 17:04 - 17:07
    both the predicted and the actual values
  • 17:07 - 17:08
    were zero the so-called true negatives
  • 17:08 - 17:11
    at the bottom right while the other
  • 17:11 - 17:14
    cells show cases where the predicted
  • 17:14 - 17:15
    and actual values differ
  • 17:15 - 17:18
    called false positives and false
  • 17:18 - 17:20
    negatives and this is an example of a
  • 17:20 - 17:24
    confusion Matrix for a binary classifier
  • 17:24 - 17:26
    while for a multi-class classification
  • 17:26 - 17:28
    model the same approach is used to
  • 17:28 - 17:30
    tabulate each possible combination of
  • 17:30 - 17:33
    actual and predicted value counts so
  • 17:33 - 17:35
    for example a model with three possible
  • 17:35 - 17:38
    classes would result in a three by
  • 17:38 - 17:39
    three matrix
  • 17:39 - 17:42
    the confusion Matrix is also useful for
  • 17:42 - 17:44
    the metrics that can be derived from it
  • 17:44 - 17:48
    like accuracy recall or precision
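
For readers who want to see those derived metrics computed directly, a small illustrative example with scikit-learn on made-up predictions:

```python
# Confusion matrix and the metrics derived from it (illustrative labels only).
from sklearn.metrics import (confusion_matrix, accuracy_score,
                             precision_score, recall_score)

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # actual labels
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]   # labels the model predicted

print(confusion_matrix(y_true, y_pred))   # [[TN, FP], [FN, TP]]
print(accuracy_score(y_true, y_pred))     # (TP + TN) / all cases
print(precision_score(y_true, y_pred))    # TP / (TP + FP)
print(recall_score(y_true, y_pred))       # TP / (TP + FN)
```
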
  • 17:49 - 17:52
    um we said that the last step is
  • 17:52 - 17:56
    deploying the train model to a real-time
  • 17:56 - 17:59
    endpoint as a predictive service and in
  • 17:59 - 18:01
    order to automate your model into
  • 18:01 - 18:03
    service that makes continuous
  • 18:03 - 18:05
    predictions you need first of all to
  • 18:05 - 18:08
    create and then deploy an
  • 18:08 - 18:10
    inference pipeline the process of
  • 18:10 - 18:12
    converting the training pipeline into a
  • 18:12 - 18:14
    real-time inference pipeline removes
  • 18:14 - 18:16
    training components and adds web service
  • 18:16 - 18:19
    inputs and outputs to handle requests
  • 18:19 - 18:21
    and the inference pipeline performs the
  • 18:21 - 18:23
    same transformations as the
  • 18:23 - 18:26
    first pipeline but for new data then it
  • 18:26 - 18:29
    uses the train model to infer or predict
  • 18:29 - 18:33
    label values based on its features
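
Once such a predictive service is deployed, consuming it is essentially an HTTP call. A hedged sketch of what that request might look like; the scoring URL, the key and the exact JSON payload schema all depend on the actual deployment, so everything below is a placeholder:

```python
# Call a real-time endpoint with one row of new feature data (sketch only).
import json
import urllib.request

scoring_uri = "https://<your-endpoint>/score"          # placeholder URL
payload = json.dumps({"data": [[2, 180, 74, 24, 21, 23.9, 1.4, 22]]}).encode("utf-8")

request = urllib.request.Request(
    scoring_uri,
    data=payload,
    headers={"Content-Type": "application/json",
             "Authorization": "Bearer <endpoint-key>"},  # placeholder key
)
with urllib.request.urlopen(request) as response:
    print(json.loads(response.read()))   # predicted label / probability comes back
```
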
  • 18:33 - 18:36
    um so I think I've talked a lot for now
  • 18:36 - 18:40
    I would like to let John show us
  • 18:40 - 18:44
    something in practice uh with uh with
  • 18:44 - 18:47
    the Hands-On demo so please John go
  • 18:47 - 18:50
    ahead sharing your screen and guide us
  • 18:50 - 18:52
    through this demo of creating a
  • 18:52 - 18:54
    classification model with the Azure machine
  • 18:54 - 18:56
    learning designer
  • 18:56 - 18:59
    uh thank you so much Carlotta for this
  • 18:59 - 19:01
    interesting explanation of the Azure ml
  • 19:01 - 19:05
    designer and now
  • 19:05 - 19:08
    um I'm going to start with you in the
  • 19:08 - 19:10
    Practical demo part so uh if you want to
  • 19:10 - 19:13
    follow along go to the link that Carlota
  • 19:13 - 19:18
    sent in the chat so like you can do
  • 19:18 - 19:22
    the demo or the Practical part with me
  • 19:22 - 19:25
    I'm just going to share my screen
  • 19:25 - 19:27
    and
  • 19:27 - 19:32
    go here so uh
  • 19:32 - 19:34
    where am I right now I'm inside the
  • 19:34 - 19:37
    Microsoft learn documentation this is
  • 19:37 - 19:40
    the exercise part of this module and we
  • 19:40 - 19:43
    will start by setting two things which
  • 19:43 - 19:45
    are a prerequisite for us to work inside
  • 19:43 - 19:45
    this module which are the resource group
  • 19:50 - 19:52
    and the Azure machine learning workspace
  • 19:52 - 19:56
    and something extra which is the compute
  • 19:56 - 20:00
    cluster that Carlotta talked about so I
  • 20:00 - 20:02
    just want to make sure that you all have
  • 20:02 - 20:06
    a resource group created inside your
  • 20:06 - 20:08
    subscription inside your Microsoft Azure
  • 20:08 - 20:11
    platform so this is my resource group
  • 20:11 - 20:15
    inside this resource group I
  • 20:15 - 20:17
    have created an Azure machine learning
  • 20:17 - 20:22
    workspace so I'm just going to access
  • 20:22 - 20:24
    the workspace that I have created
  • 20:24 - 20:27
    already from this link I am going to
  • 20:27 - 20:30
    open it which is the studio web URL and
  • 20:30 - 20:33
    I will follow the steps so what is this
  • 20:33 - 20:36
    this is your machine learning workspace
  • 20:36 - 20:38
    or machine Learning Studio you can do a
  • 20:38 - 20:40
    lot of things here but we are going to
  • 20:40 - 20:42
    focus mainly on the designer and the
  • 20:42 - 20:46
    data and the compute so another
  • 20:46 - 20:49
    prerequisite here as Carlotta told you
  • 20:49 - 20:51
    we need some resources to power up the
  • 20:51 - 20:54
    the classification processes that
  • 20:54 - 20:55
    will happen
  • 20:55 - 20:58
    so we have created this Computing
  • 20:58 - 20:59
    cluster
  • 20:59 - 21:03
    and we have like Set uh some presets for
  • 21:03 - 21:04
    it so
  • 21:04 - 21:07
    where can you find this preset you go
  • 21:07 - 21:10
    here under the create compute you'll
  • 21:10 - 21:13
    find everything that you need to do so
  • 21:13 - 21:17
    the size is Standard DS11 v2
  • 21:17 - 21:20
    and it's a CPU not GPU because we don't
  • 21:20 - 21:22
    need the GPU and
  • 21:22 - 21:26
    like it is ready for us to use
  • 21:26 - 21:31
    the next thing which we will look into
  • 21:31 - 21:34
    is the designer how can you access the
  • 21:34 - 21:35
    designer
  • 21:35 - 21:38
    you can either click on this icon or
  • 21:38 - 21:40
    click on the navigation menu and click
  • 21:40 - 21:42
    on the designer for me
  • 21:42 - 21:43
    um
  • 21:43 - 21:46
    now I am inside my designer
  • 21:46 - 21:48
    what we are going to do now is the
  • 21:48 - 21:50
    pipeline that Carlotta told you about
  • 21:50 - 21:54
    and from where can I know these steps if
  • 21:54 - 21:57
    you follow along in the learn module you
  • 21:57 - 21:59
    will find everything that I'm doing
  • 21:59 - 22:02
    right now in details uh with screenshots
  • 22:02 - 22:06
    of course so I'm going to create a new
  • 22:06 - 22:09
    pipeline and I can do so by clicking on
  • 22:09 - 22:11
    this plus button
  • 22:11 - 22:14
    it's going to redirect me to the
  • 22:14 - 22:17
    designer authoring the pipeline uh where
  • 22:17 - 22:20
    I can drag and drop data and components
  • 22:20 - 22:22
    that Carlotta told you the difference
  • 22:22 - 22:23
    between
  • 22:23 - 22:26
    and here I am going to do some changes
  • 22:26 - 22:29
    to the settings I am going to connect
  • 22:29 - 22:32
    this with my compute cluster that I
  • 22:32 - 22:35
    created previously so I can utilize it
  • 22:35 - 22:38
    from here I'm going to choose this
  • 22:38 - 22:40
    compute cluster demo that I have showed
  • 22:40 - 22:43
    you before in the Clusters here
  • 22:43 - 22:46
    and I am going to change the name to
  • 22:46 - 22:48
    something more meaningful instead of
  • 22:48 - 22:51
    pipeline and today's date I'm going
  • 22:51 - 22:54
    to name it diabetes
  • 22:54 - 22:56
    and
  • 22:56 - 23:00
    let's just check this training
  • 23:00 - 23:05
    let's say training 0.1 or okay
  • 23:05 - 23:09
    and I am going to close this tab and in
  • 23:09 - 23:12
    order to have a bigger place to work
  • 23:12 - 23:15
    inside because this is where we will
  • 23:15 - 23:17
    work where everything will happen so I
  • 23:17 - 23:20
    will click on close from here
  • 23:20 - 23:23
    and I will go to the data and I will
  • 23:23 - 23:26
    create a new data set
  • 23:26 - 23:28
    how can I create a new data set there is
  • 23:28 - 23:30
    multiple options here you can find from
  • 23:30 - 23:32
    local files from data store from web
  • 23:32 - 23:34
    files from open data set but I'm going
  • 23:34 - 23:37
    to choose from web files as this is the
  • 23:37 - 23:40
    way we're going to create our data
  • 23:40 - 23:43
    from here the information of my data set
  • 23:43 - 23:47
    I'm going to get them from the Microsoft
  • 23:47 - 23:51
    learn module so if we go to the step
  • 23:51 - 23:53
    that says create a data set
  • 23:53 - 23:55
    under it it illustrates that you can
  • 23:55 - 23:58
    access the data from inside the asset
  • 23:58 - 24:00
    library and inside your asset library
  • 24:00 - 24:02
    you'll find the data and find the
  • 24:02 - 24:06
    components and I'm going to select
  • 24:06 - 24:09
    this link because this is where my data
  • 24:09 - 24:12
    is stored if you open this link you will
  • 24:12 - 24:15
    find this is this is a CSV file I think
  • 24:15 - 24:17
    yeah and you can like all the data are
  • 24:17 - 24:18
    here
  • 24:18 - 24:21
    all right now let's get back
  • 24:21 - 24:22
    um
  • 24:27 - 24:28
    and you are going to name it something
  • 24:28 - 24:30
    meaningful but because I have already
  • 24:30 - 24:32
    created it before twice so I'm gonna
  • 24:32 - 24:35
    like add a number to the name
  • 24:35 - 24:38
    uh the data set type is tabular there is
  • 24:38 - 24:39
    also file but this is a table so we're
  • 24:39 - 24:41
    going to choose tabular
  • 24:42 - 24:44
    for the data set type
  • 24:44 - 24:46
    now we will click on next that's gonna
  • 24:46 - 24:51
    review or uh display for you the content
  • 24:51 - 24:54
    of this file that you have
  • 24:54 - 24:57
    like imported to this workspace
  • 24:57 - 25:02
    and for these settings these are like
  • 25:02 - 25:04
    related to our file format
  • 25:04 - 25:08
    so this is a delimited file and it's not
  • 25:08 - 25:11
    plain text it's not a Json the delimiter
  • 25:11 - 25:14
    is comma as like we have seen that they
  • 25:14 - 25:17
    those
  • 25:27 - 25:29
    so I'm choosing
  • 25:29 - 25:33
    errors because the only the first five
  • 25:35 - 25:38
    for example okay uh if you have any
  • 25:38 - 25:40
    doubts if you have any problems please
  • 25:40 - 25:43
    don't hesitate to write to me
  • 25:43 - 25:45
    in the chat
  • 25:45 - 25:48
    and like tell us what is blocking you and
  • 25:48 - 25:51
    me and Carlotta will try to help you
  • 25:51 - 25:53
    whenever possible
  • 25:53 - 25:56
    and now this is the new preview for my
  • 25:56 - 25:58
    data set I can see that I have an ID I
  • 25:58 - 26:00
    have patient ID I have pregnancies I
  • 26:00 - 26:02
    have the age of the people
  • 26:02 - 26:06
    have the body mass index I think
  • 26:06 - 26:08
    and I have diabetic or not as a
  • 26:08 - 26:11
    zero and one zero indicates a negative
  • 26:11 - 26:14
    the person doesn't have diabetes and one
  • 26:14 - 26:16
    indicates a positive that this person
  • 26:16 - 26:18
    has diabetes okay
  • 26:18 - 26:21
    now I'm going to click on next here I am
  • 26:21 - 26:23
    defining my schema all the data types
  • 26:23 - 26:25
    inside my columns the column names which
  • 26:25 - 26:29
    columns to include which to exclude and
  • 26:29 - 26:32
    here we will include everything except
  • 26:32 - 26:36
    the Path column and we are
  • 26:36 - 26:38
    going to review the data types of each
  • 26:38 - 26:40
    column so let's review this first one
  • 26:40 - 26:43
    this is numbers numbers then it's the
  • 26:43 - 26:46
    integer and this is
  • 26:46 - 26:49
    um like decimal
  • 26:49 - 26:51
    dotted
  • 26:51 - 26:54
    decimal number so we are going to choose
  • 26:54 - 26:55
    this data type
  • 26:55 - 26:57
    and for this one
  • 26:57 - 27:01
    it says diabetic and it's a zero and
  • 27:01 - 27:02
    one and we are going to make it an
  • 27:02 - 27:04
    integer
  • 27:04 - 27:08
    now we are going to click on next and
  • 27:08 - 27:10
    move to reviewing everything this is
  • 27:10 - 27:11
    everything that we have defined together
  • 27:11 - 27:14
    I will click on create
  • 27:14 - 27:15
    and
  • 27:15 - 27:18
    now the first step has ended we have
  • 27:18 - 27:20
    gotten our data ready
  • 27:20 - 27:22
    now what now we're going to utilize the
  • 27:22 - 27:24
    designer
  • 27:24 - 27:27
    um Power we're going to drag and drop
  • 27:27 - 27:30
    our data set to create the pipeline
  • 27:30 - 27:33
    so I have like clicked on it and dragged it
  • 27:33 - 27:36
    to this space it's gonna appear to you
  • 27:36 - 27:40
    and we can inspect it by right click and
  • 27:40 - 27:42
    choose preview data
  • 27:42 - 27:46
    to see what we have created together
  • 27:46 - 27:49
    from here you can see everything that we
  • 27:49 - 27:51
    have like seen previously but in more
  • 27:51 - 27:53
    details and we are just going to close
  • 27:53 - 27:57
    this now what now we are gonna do the
  • 27:57 - 28:01
    processing that Carlotta like mentioned
  • 28:01 - 28:04
    these are some instructions about the
  • 28:04 - 28:05
    data about how you can load them how you
  • 28:05 - 28:07
    can open them but we are going to move
  • 28:07 - 28:10
    to the transformation or the processing
  • 28:10 - 28:14
    so as Carlotta told you like any data
  • 28:14 - 28:15
    for us to work on we have to do some
  • 28:15 - 28:17
    processing to it
  • 28:17 - 28:20
    to make it easier for the model to
  • 28:20 - 28:23
    be trained and easier to work with so uh
  • 28:23 - 28:26
    we're gonna do the normalization and
  • 28:26 - 28:29
    normalization meaning is uh
  • 28:29 - 28:34
    to scale our data either down or up but
  • 28:34 - 28:35
    we're going to scale them down
  • 28:35 - 28:39
    and like we are going to decrease and
  • 28:39 - 28:41
    relatively decrease
  • 28:41 - 28:45
    the values all the values to work
  • 28:45 - 28:48
    with lower numbers and if we are working
  • 28:48 - 28:50
    with larger numbers it's going to take
  • 28:50 - 28:52
    more time if we're working with smaller
  • 28:52 - 28:55
    numbers it's going to take less time to
  • 28:55 - 28:59
    calculate them and that's it
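
In other words, min-max normalization rescales every value of a column to (value - min) / (max - min), so each value lands between 0 and 1. A tiny illustration in plain Python with made-up ages:

```python
# Min-max scaling of one column by hand (illustrative values).
ages = [21, 26, 30, 47, 58]

lo, hi = min(ages), max(ages)
scaled = [(a - lo) / (hi - lo) for a in ages]
print(scaled)   # 21 -> 0.0, 58 -> 1.0, everything else in between
```
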
  • 28:59 - 29:02
    where can I find the normalized data I
  • 29:02 - 29:04
    can find it inside my component
  • 29:04 - 29:07
    so I will choose the component and
  • 29:07 - 29:10
    search for normalized data
  • 29:10 - 29:12
    I will drag and drop it as usual and I
  • 29:12 - 29:15
    will connect between these two things
  • 29:15 - 29:18
    by clicking on this spot this like
  • 29:18 - 29:20
    Circle and
  • 29:20 - 29:23
    drag and drop until the next circle
  • 29:23 - 29:25
    now we are going to Define our
  • 29:25 - 29:27
    normalization method
  • 29:27 - 29:31
    so I'm going to double click on the
  • 29:31 - 29:33
    normalize data
  • 29:33 - 29:35
    it's going to open the settings for the
  • 29:35 - 29:36
    normalization
  • 29:36 - 29:39
    and set the transformation method which is
  • 29:39 - 29:40
    a mathematical way
  • 29:40 - 29:42
    that is going to scale our data
  • 29:42 - 29:45
    according to
  • 29:45 - 29:48
    we're going to choose min max and for
  • 29:48 - 29:52
    this one we are going to choose use 0
  • 29:52 - 29:53
    for constant column we are going to
  • 29:53 - 29:54
    choose true
  • 29:54 - 29:57
    and we are going to Define which columns
  • 29:57 - 29:59
    to normalize so we are not going to
  • 29:59 - 30:01
    normalize the whole data set we are
  • 30:01 - 30:03
    going to choose a subset from the data
  • 30:03 - 30:05
    set to normalize so we're going to
  • 30:05 - 30:07
    choose everything except for the patient
  • 30:07 - 30:09
    ID and the diabetic because the patient
  • 30:09 - 30:11
    ID is a number but it's a categorical
  • 30:11 - 30:14
    data it describes a patient it's not a
  • 30:14 - 30:17
    number that I can sum I can say patient
  • 30:17 - 30:20
    ID number one plus patient ID number two
  • 30:20 - 30:22
    no this is a patient and another
  • 30:22 - 30:23
    patient it's not a number that I can do
  • 30:23 - 30:26
    mathematical operations on so I'm not
  • 30:26 - 30:28
    going to choose it so we will choose
  • 30:28 - 30:31
    everything as I said except for the
  • 30:31 - 30:33
    diabetic and the patient ID I will
  • 30:33 - 30:35
    click on Save
  • 30:35 - 30:38
    and it's not showing me a warning anymore
  • 30:38 - 30:39
    everything is good
  • 30:39 - 30:42
    now I can click on submit
  • 30:42 - 30:47
    and review my normalization output
  • 30:47 - 30:48
    um
  • 30:48 - 30:52
    so uh if you click on submit here
  • 30:52 - 30:55
    and you will like choose create new and
  • 30:55 - 30:56
    set the name that is mentioned here
  • 30:56 - 31:00
    inside the module so it tells you
  • 31:00 - 31:03
    to like create a job and name it name
  • 31:03 - 31:05
    the experiment Ms learn diabetes
  • 31:05 - 31:07
    training because you will continue
  • 31:07 - 31:10
    working on it and adding components later
  • 31:10 - 31:13
    I have it already like created and
  • 31:13 - 31:17
    like we can review it together so uh let
  • 31:17 - 31:20
    me just open this in another tab I think
  • 31:20 - 31:21
    I have it
  • 31:21 - 31:24
    here
  • 31:26 - 31:28
    okay
  • 31:31 - 31:35
    so these are all the jobs that I have
  • 31:35 - 31:37
    run
  • 31:38 - 31:40
    all the jobs there let's do this over
  • 31:40 - 31:42
    these are all the jobs that I have
  • 31:42 - 31:44
    submitted previously
  • 31:44 - 31:46
    and I think this one is the
  • 31:46 - 31:48
    normalization job so let's see the
  • 31:48 - 31:50
    output of it
  • 31:50 - 31:54
    as you can see it says uh check mark yes
  • 31:54 - 31:57
    which means that it worked and we can
  • 31:57 - 31:59
    preview it how can I do that right click
  • 31:59 - 32:03
    on it choose preview data
  • 32:03 - 32:07
    and as you can see all the data are
  • 32:07 - 32:08
    scaled down
  • 32:08 - 32:11
    so everything is between zero
  • 32:11 - 32:16
    and uh one I think
  • 32:16 - 32:19
    so like everything is good for us now we
  • 32:19 - 32:22
    can move forward to the next step
  • 32:22 - 32:28
    which is to create the whole pipeline so
  • 32:28 - 32:31
    uh Carlota told you that
  • 32:31 - 32:33
    we're going to use a classification
  • 32:33 - 32:37
    model to create this data set so uh let
  • 32:37 - 32:41
    me just drag and drop everything
  • 32:41 - 32:45
    to get runtime and we're doing
  • 32:46 - 32:50
    about about everything by
  • 32:51 - 32:53
    so
  • 32:53 - 32:57
    as a result we are going to explain
  • 33:00 - 33:04
    yeah so I'm going to get this split
  • 33:04 - 33:06
    data I'm going to take the
  • 33:06 - 33:09
    transformed data to split data and
  • 33:09 - 33:10
    connect it like that
  • 33:10 - 33:12
    I'm going to get a train model
  • 33:12 - 33:15
    component because I want to train my
  • 33:15 - 33:17
    model
  • 33:17 - 33:20
    and I'm going to put it right here
  • 33:20 - 33:22
    okay
  • 33:22 - 33:24
    like let's just move it down there okay
  • 33:24 - 33:27
    and we are going to use a classification
  • 33:27 - 33:29
    model
  • 33:29 - 33:32
    a two class
  • 33:32 - 33:35
    logistic regression model
  • 33:35 - 33:38
    so I'm going to give this algorithm to
  • 33:38 - 33:41
    enable my model to work
  • 33:42 - 33:46
    this is the untrained model this is
  • 33:46 - 33:48
    here
  • 33:48 - 33:51
    the left
  • 33:51 - 33:53
    the left like circle I'm going to
  • 33:53 - 33:55
    connect it to the data set and the right
  • 33:55 - 33:57
    one we are going to connect it to
  • 33:57 - 34:00
    evaluate model
  • 34:00 - 34:03
    evaluate model so let's search for
  • 34:03 - 34:05
    evaluate model here
  • 34:05 - 34:07
    so because we want to do what we want to
  • 34:07 - 34:11
    evaluate our model and see how it has
  • 34:11 - 34:15
    been doing is it good is it bad
  • 34:15 - 34:18
    um sorry like
  • 34:20 - 34:23
    this is
  • 34:23 - 34:26
    down there
  • 34:26 - 34:28
    after the score model
  • 34:28 - 34:31
    so we have to get the score model first
  • 34:31 - 34:34
    so let's get it
  • 34:34 - 34:36
    and this will take the trained model and
  • 34:36 - 34:37
    the data set
  • 34:37 - 34:39
    to score our model and see if it's
  • 34:39 - 34:42
    performing good or bad
  • 34:42 - 34:45
    and
  • 34:45 - 34:47
    um
  • 34:47 - 34:49
    after that like we have finished
  • 34:49 - 34:51
    everything now we are going to do the
  • 34:51 - 34:52
    what
  • 34:52 - 34:54
    the presets for everything
  • 34:54 - 34:57
    as a starter we will be splitting our
  • 34:57 - 34:59
    data so
  • 34:59 - 35:01
    how are we going to do this according to
  • 35:01 - 35:04
    what to the split rules so I'm going to
  • 35:04 - 35:06
    double click on and choose split rows
  • 35:06 - 35:09
    and the percentage is
  • 35:09 - 35:13
    70 percent for training and 30 percent of the
  • 35:13 - 35:15
    data for
  • 35:15 - 35:18
    the validation or for the scoring okay
  • 35:18 - 35:21
    I'm going to make it a randomization so
  • 35:21 - 35:23
    I'm going to split data randomly and the
  • 35:23 - 35:26
    seed is uh
  • 35:26 - 35:29
    132 23 I think yeah
  • 35:29 - 35:33
    and I think that's it
  • 35:33 - 35:35
    the stratified split is false and that's
  • 35:35 - 35:36
    good
  • 35:36 - 35:40
    now for the next one which is the train
  • 35:40 - 35:42
    model we are going to connect it as
  • 35:42 - 35:44
    mentioned here
  • 35:44 - 35:49
    and like we have done that and then why
  • 35:49 - 35:51
    am I having an error here let's double click
  • 35:51 - 35:55
    on it yeah it has like it needs the
  • 35:55 - 35:57
    label column that I am trying to predict
  • 35:57 - 35:59
    so from here I'm going to choose
  • 35:59 - 36:01
    diabetic I'm going to save
  • 36:01 - 36:05
    I'm going to close this one
  • 36:06 - 36:07
    so it says here
  • 36:07 - 36:11
    the diabetic label that the model will
  • 36:11 - 36:12
    predict is zero or one because this is
  • 36:12 - 36:15
    a binary classification algorithm so
  • 36:15 - 36:16
    it's going to predict either this or
  • 36:16 - 36:18
    that
  • 36:18 - 36:19
    and
  • 36:19 - 36:20
    um
  • 36:20 - 36:24
    I think that's everything to run the the
  • 36:24 - 36:26
    pipeline
  • 36:26 - 36:29
    so everything is done everything is good
  • 36:29 - 36:31
    for this one we're just gonna leave it
  • 36:31 - 36:34
    like for now because this is the next
  • 36:34 - 36:36
    step
  • 36:36 - 36:40
    um this will like be put after the
  • 36:40 - 36:44
    score model but let's
  • 36:44 - 36:47
    delete it for now
  • 36:47 - 36:50
    okay
  • 36:50 - 36:53
    now we have to submit the job in order
  • 36:53 - 36:56
    to see the output of it so I can click
  • 36:56 - 36:59
    on submit and choose the previous job
  • 36:59 - 37:01
    which is the one that I have showed you
  • 37:01 - 37:02
    before
  • 37:02 - 37:05
    and then let's review its output
  • 37:05 - 37:07
    together here
  • 37:07 - 37:10
    so if I go to the jobs
  • 37:10 - 37:15
    if I go to Ms learn maybe it is training
  • 37:15 - 37:18
    I think it's the one that lasted the
  • 37:18 - 37:21
    longest this one here
  • 37:21 - 37:24
    so here I can see
  • 37:24 - 37:27
    the job output what happened inside
  • 37:27 - 37:30
    the model as you can see
  • 37:30 - 37:34
    so the normalization we have like seen
  • 37:34 - 37:37
    before the split data I can preview it
  • 37:37 - 37:39
    the result one or the result two as it
  • 37:39 - 37:42
    splits the data to 70 percent here and
  • 37:42 - 37:44
    thirty percent here
  • 37:44 - 37:47
    um I can see the score model which is
  • 37:47 - 37:49
    like something that we need
  • 37:49 - 37:52
    to review
  • 37:52 - 37:57
    um inside the score model uh like from
  • 37:57 - 37:58
    here
  • 37:58 - 38:01
    we can see that
  • 38:01 - 38:04
    let's get back here
  • 38:06 - 38:08
    like this is the data that the model has
  • 38:08 - 38:12
    been scored and this is a scoring output
  • 38:12 - 38:15
    so it says code label true and if he is
  • 38:15 - 38:18
    not diabetic so this is
  • 38:18 - 38:19
    um
  • 38:19 - 38:22
    a wrong prediction let's say
  • 38:22 - 38:24
    for this one it's true and true and this
  • 38:24 - 38:27
    is like a good like what do you say
  • 38:27 - 38:29
    prediction and the probabilities of this
  • 38:29 - 38:30
    score
  • 38:30 - 38:33
    which means the certainty of our model
  • 38:33 - 38:37
    that this is really true it's 80 percent for
  • 38:37 - 38:39
    this one it's 75 percent
  • 38:39 - 38:43
    so these are some cool metrics that we
  • 38:43 - 38:45
    can review to understand how our model
  • 38:45 - 38:48
    is performing it's performing good for
  • 38:48 - 38:49
    now
  • 38:49 - 38:53
    let's check our evaluate model
  • 38:53 - 38:57
    so this is the extra one that I told you
  • 38:57 - 39:00
    about instead of the like
  • 39:00 - 39:02
    score model only we are going to add
  • 39:02 - 39:04
    the evaluate model
  • 39:04 - 39:07
    after it so here
  • 39:07 - 39:09
    we're going to go to our asset library
  • 39:09 - 39:12
    and we are going to choose the evaluate
  • 39:12 - 39:15
    model
  • 39:15 - 39:18
    and we are going to put it here and we
  • 39:18 - 39:20
    are going to connect it and we are going
  • 39:20 - 39:23
    to submit the job using the same name of
  • 39:23 - 39:25
    the job that we used previously
  • 39:25 - 39:30
    let's review it uh also so after it
  • 39:30 - 39:33
    finishes you will find it here so I have
  • 39:33 - 39:35
    already done it before this is how I'm
  • 39:35 - 39:37
    able to see the output
  • 39:37 - 39:40
    so let's see
  • 39:40 - 39:43
    what what is the output of this
  • 39:43 - 39:46
    evaluation process
  • 39:46 - 39:50
    here it mentioned to you that there are
  • 39:50 - 39:51
    some metrics
  • 39:51 - 39:55
    like the confusion Matrix which Carlotta
  • 39:55 - 39:57
    told you about there is the accuracy the
  • 39:57 - 40:00
    precision the recall and F1 score
  • 40:00 - 40:02
    every metric gives us some insight about
  • 40:02 - 40:05
    our model it helps us to understand it
  • 40:05 - 40:09
    more and
  • 40:09 - 40:11
    like understand if it's overfitting if
  • 40:11 - 40:12
    it's good if it's bad and really really
  • 40:12 - 40:16
    like understand how it's working
  • 40:17 - 40:20
    now I'm just waiting for the job to load
  • 40:20 - 40:23
    until it loads
  • 40:23 - 40:24
    um
  • 40:24 - 40:26
    we can continue to
  • 40:26 - 40:29
    to work on our
  • 40:29 - 40:32
    model so I will go to my designer I'm
  • 40:32 - 40:35
    just going to confirm this
  • 40:35 - 40:38
    and I'm going to continue working on it
  • 40:38 - 40:40
    from
  • 40:40 - 40:42
    where we have stopped where have we
  • 40:42 - 40:44
    stopped
  • 40:44 - 40:46
    we have stopped on the evaluate model so
  • 40:46 - 40:49
    I'm going to choose this one
  • 40:49 - 40:53
    and it says here
  • 40:54 - 40:57
    select experiment create inference
  • 40:57 - 40:58
    pipeline so
  • 40:58 - 41:01
    I am going to go to the jobs
  • 41:01 - 41:05
    I'm going to select my experiment
  • 41:05 - 41:07
    I hope this works
  • 41:07 - 41:10
    okay salute finally now we have our
  • 41:10 - 41:12
    evaluate model output
  • 41:12 - 41:15
    let's preview the evaluation results
  • 41:15 - 41:19
    and uh
  • 41:19 - 41:22
    cool come on
  • 41:26 - 41:28
    finally now we can create our inference
  • 41:28 - 41:31
    pipeline so
  • 41:31 - 41:34
    I think it says that
  • 41:34 - 41:35
    um
  • 41:35 - 41:38
    select the experiment then select Ms
  • 41:38 - 41:39
    learn so
  • 41:39 - 41:43
    I am just going to select it
  • 41:43 - 41:48
    and finally now we can see the ROC curve we
  • 41:48 - 41:51
    can see it that the true positive rate
  • 41:51 - 41:54
    and the force was integrate the false
  • 41:54 - 41:57
    positive rate is increasing with time
  • 41:57 - 42:01
    and also the true positive rate true
  • 42:01 - 42:04
    positive is something that it predicted
  • 42:04 - 42:07
    that it is uh positive it has diabetes
  • 42:07 - 42:09
    and it's really true
  • 42:09 - 42:13
    the person really has diabetes okay and
  • 42:13 - 42:15
    for the false positive it predicted that
  • 42:15 - 42:18
    someone has diabetes but they don't
  • 42:18 - 42:21
    have it this is what true positive and
  • 42:21 - 42:25
    false positive mean this is the recall
  • 42:25 - 42:28
    curve so we can like review the metrics
  • 42:28 - 42:32
    of our model this is the lift curve I
  • 42:32 - 42:36
    can change the threshold of my confusion
  • 42:36 - 42:38
    Matrix here
  • 42:38 - 42:39
    and Carlotta if you want to add
  • 42:39 - 42:44
    anything about the graphs
  • 42:44 - 42:47
    you can do so
  • 42:50 - 42:51
    um
  • 42:51 - 42:55
    yeah so just wanted to yeah I
  • 42:55 - 42:57
    just wanted to comment on the
  • 42:57 - 43:00
    ROC curve uh that actually from this
  • 43:00 - 43:04
    graph the metric which uh usually we're
  • 43:04 - 43:07
    going to compute is the area under
  • 43:07 - 43:10
    the curve and this coefficient or
  • 43:10 - 43:12
    metric
  • 43:12 - 43:15
    um this metric
  • 43:15 - 43:18
    um is a value that could span from
  • 43:18 - 43:23
    zero to one and the higher it is
  • 43:23 - 43:23
    um
  • 43:23 - 43:27
    the higher is the score so the
  • 43:27 - 43:29
    closer to one
  • 43:29 - 43:33
    um so the higher the amount of
  • 43:33 - 43:35
    area under this curve
  • 43:35 - 43:40
    um the higher the performance uh we
  • 43:40 - 43:43
    we've got from our model and
  • 43:43 - 43:46
    another thing is what John is
  • 43:46 - 43:50
    um playing with so this threshold for
  • 43:50 - 43:51
    the logistic
  • 43:51 - 43:56
    regression is the threshold used by the
  • 43:56 - 43:57
    model
  • 43:57 - 43:59
    um to
  • 43:59 - 44:00
    um
  • 44:00 - 44:03
    to predict uh if the category is zero or
  • 44:03 - 44:05
    one so if the probability the
  • 44:05 - 44:09
    probability score is above the threshold
  • 44:09 - 44:12
    then the category will be predicted as
  • 44:12 - 44:15
    one while if the probability is
  • 44:15 - 44:17
    below the threshold in this case for
  • 44:17 - 44:21
    example 0.5 the category is predicted as
  • 44:21 - 44:24
    as zero so that's why it's very
  • 44:24 - 44:26
    important to um to choose the the
  • 44:26 - 44:28
    threshold because the performance really
  • 44:28 - 44:30
    can vary
  • 44:30 - 44:31
    um
  • 44:31 - 44:34
    with this threshold value
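
The two ideas Carlotta just described, the area under the ROC curve and the probability threshold, can be sketched on a handful of made-up scores like this (illustrative only):

```python
# AUC plus thresholding of predicted probabilities (made-up numbers).
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1]
probabilities = [0.12, 0.48, 0.51, 0.93]      # model's probability of class 1

print(roc_auc_score(y_true, probabilities))   # 1.0 here; closer to 1 is better

threshold = 0.5
labels = [1 if p >= threshold else 0 for p in probabilities]
print(labels)   # [0, 0, 1, 1]; lowering the threshold flags more cases as positive
```
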
  • 44:34 - 44:41
    uh thank you uh so much uh Carlotta and
  • 44:41 - 44:44
    as I mentioned now we are going to like
  • 44:44 - 44:47
    create our inference pipeline so we are
  • 44:47 - 44:49
    going to select the latest one which I
  • 44:49 - 44:51
    already have it opened here this is the
  • 44:51 - 44:53
    one that we were reviewing together this
  • 44:53 - 44:56
    is where we have stopped and we're going
  • 44:56 - 44:58
    to create an inference pipeline we are
  • 44:58 - 45:00
    going to choose a real-time inference
  • 45:00 - 45:03
    pipeline okay
  • 45:03 - 45:05
    um from where I can find this here as it
  • 45:05 - 45:08
    says real-time inference pipeline
  • 45:08 - 45:11
    so it's gonna add some things to my
  • 45:11 - 45:12
    workspace it's going to add the web
  • 45:12 - 45:14
    service input it's going to have the
  • 45:14 - 45:16
    web service output because we will be
  • 45:16 - 45:18
    creating it as a web service to access
  • 45:18 - 45:20
    it from the internet
  • 45:20 - 45:22
    uh what are we going to do we're going
  • 45:22 - 45:25
    to remove this diabetes data okay
  • 45:25 - 45:28
    and we are going to get a component
  • 45:28 - 45:29
    called Web
  • 45:29 - 45:33
    input and what was it let me check
  • 45:33 - 45:36
    it's enter data manually
  • 45:36 - 45:38
    we have we already have the web input
  • 45:38 - 45:40
    present
  • 45:40 - 45:42
    so we are going to get the enter data
  • 45:42 - 45:43
    manually
  • 45:43 - 45:45
    and we're going to connect it to connect
  • 45:45 - 45:50
    it as it was connected before like that
  • 45:50 - 45:53
    and also I am not going to directly take
  • 45:53 - 45:55
    the web service sorry the score model to
  • 45:55 - 45:58
    the web service output like that
  • 45:58 - 46:00
    I'm going to delete this
  • 46:00 - 46:04
    and I'm going to execute a python script
  • 46:04 - 46:06
    before
  • 46:06 - 46:10
    I display my result
  • 46:11 - 46:12
    so
  • 46:12 - 46:17
    this will be connected like okay but
  • 46:19 - 46:20
    so
  • 46:20 - 46:24
    the other way around
  • 46:24 - 46:28
    and from here I am going to connect this
  • 46:28 - 46:31
    with that and there is some data uh that
  • 46:31 - 46:33
    we will be getting from the node or from
  • 46:33 - 46:38
    the documentation here and this is the
  • 46:38 - 46:41
    data that will be entered like to our
  • 46:41 - 46:44
    web service manually okay this is instead of
  • 46:44 - 46:47
    the data that we have been getting from
  • 46:47 - 46:50
    our data set that we created so I'm just
  • 46:50 - 46:52
    going to double click on it and choose
  • 46:52 - 46:56
    CSV and I will choose it has headers
  • 46:56 - 47:01
    and I will take or copy this content and
  • 47:01 - 47:03
    put it there okay
  • 47:03 - 47:06
    so let's do it
  • 47:06 - 47:08
    I think I have to click on edit code now
  • 47:08 - 47:11
    I can click on Save and I can close it
  • 47:11 - 47:13
    another thing which is the python script
  • 47:13 - 47:17
    that we will be executing
  • 47:17 - 47:19
    um yeah we are going to remove this also
  • 47:19 - 47:21
    we don't need the evaluate model anymore
  • 47:21 - 47:24
    so we are going to remove it and then the python
  • 47:24 - 47:29
    script that I will be executing okay
  • 47:29 - 47:33
    I can find it here
  • 47:34 - 47:35
    um
  • 47:35 - 47:36
    yeah
  • 47:36 - 47:39
    this is the python script that we will
  • 47:39 - 47:42
    execute and it says to you that this
  • 47:42 - 47:44
    code selects only the patient's ID
  • 47:44 - 47:45
    the predicted label and the scored
  • 47:45 - 47:48
    probability and returns them to
  • 47:48 - 47:50
    the web service output so we don't want
  • 47:50 - 47:52
    to return all the columns as we have
  • 47:52 - 47:53
    seen previously
  • 47:53 - 47:56
    uh that returns everything
  • 47:56 - 47:57
    so
  • 47:57 - 47:59
    we want to return certain stuff the
  • 47:59 - 48:03
    stuff that we will use inside our
  • 48:03 - 48:06
    endpoint so I'm just going to select
  • 48:06 - 48:08
    everything and delete it and
  • 48:08 - 48:11
    paste the code that I have gotten from
  • 48:11 - 48:14
    the uh
  • 48:14 - 48:16
    the Microsoft learn Docs
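The script pasted here comes from the Learn exercise; a sketch of what it looks like is below, assuming the designer's azureml_main entry point and the default 'Scored Labels' / 'Scored Probabilities' columns produced by Score Model.

```python
# Sketch of the Execute Python Script body used at this step: keep only the
# patient ID, the predicted label and the scored probability, and return them
# to the web service output. Column names assume Score Model's defaults.
import pandas as pd

def azureml_main(dataframe1=None, dataframe2=None):
    scored_results = dataframe1[['PatientID', 'Scored Labels', 'Scored Probabilities']]
    scored_results = scored_results.rename(columns={
        'Scored Labels': 'DiabetesPrediction',
        'Scored Probabilities': 'Probability'})
    return scored_results
```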
  • 48:16 - 48:19
    now I can click on Save and I can close
  • 48:19 - 48:20
    this
  • 48:20 - 48:22
    let me check something I don't think
  • 48:22 - 48:25
    it's saved it's saved but the display is
  • 48:25 - 48:26
    wrong okay
  • 48:26 - 48:30
    and now I think everything is good to go
  • 48:30 - 48:33
    I'm just gonna double check everything
  • 48:33 - 48:36
    so uh yeah we are gonna change the name
  • 48:36 - 48:39
    of this uh
  • 48:39 - 48:41
    Pipeline and we are gonna call it
  • 48:41 - 48:43
    predict
  • 48:43 - 48:46
    diabetes okay
  • 48:46 - 48:50
    now let's close it and
  • 48:50 - 48:57
    I think that we are good to go so
  • 48:57 - 48:59
    um
  • 49:00 - 49:04
    okay I think everything is good for us
  • 49:06 - 49:08
    I just want to make sure of something is
  • 49:08 - 49:12
    the data is correct the data is uh yeah
  • 49:12 - 49:14
    it's correct
  • 49:14 - 49:16
    okay now I can run the pipeline let's
  • 49:16 - 49:18
    submit
  • 49:18 - 49:21
    select an existing experiment and we're
  • 49:21 - 49:23
    going to choose the mslearn-
  • 49:23 - 49:25
    diabetes-training which is the experiment
  • 49:25 - 49:27
    that we have been working on
  • 49:27 - 49:32
    from the beginning of this module
  • 49:32 - 49:34
    I don't think that this is going to take
  • 49:34 - 49:36
    much time so we have submitted the job
  • 49:36 - 49:37
    and it's running
  • 49:37 - 49:40
    until the job ends we are going to set
  • 49:40 - 49:42
    everything
  • 49:42 - 49:46
    and for deploying a service
  • 49:46 - 49:50
    in order to deploy a service okay
  • 49:50 - 49:51
    um
  • 49:51 - 49:54
    I have to have the job ready so
  • 49:54 - 49:56
    once it's ready we can deploy it so
  • 49:56 - 49:58
    let's go to the job the job details from
  • 49:58 - 50:01
    here okay
  • 50:01 - 50:05
    and until it finishes
  • 50:05 - 50:07
    Carlotta do you think that we can have
  • 50:07 - 50:09
    the questions and then we can get back
  • 50:09 - 50:13
    to the job and deploy it
  • 50:14 - 50:18
    yeah yeah yeah so yeah yeah guys if you
  • 50:18 - 50:19
    have any questions
  • 50:19 - 50:24
    uh on what you just saw here
  • 50:24 - 50:27
    or on the introduction feel free this is
  • 50:27 - 50:30
    a good moment we can discuss
  • 50:30 - 50:34
    now while we wait for this job to
  • 50:34 - 50:36
    finish
  • 50:36 - 50:39
    uh and the
  • 50:39 - 50:40
    can
  • 50:40 - 50:45
    we have the knowledge check now or like
  • 50:45 - 50:48
    what do you think uh yeah we can also go
  • 50:48 - 50:50
    to the knowledge check
  • 50:50 - 50:51
    um
  • 50:51 - 50:56
    yeah okay so let me share my screen
  • 50:56 - 50:59
    yeah so if you don't have any questions
  • 50:59 - 51:02
    for us we can maybe propose some
  • 51:02 - 51:05
    questions to you that you can
  • 51:05 - 51:06
    um
  • 51:06 - 51:10
    use to check your knowledge so far and you
  • 51:10 - 51:13
    can maybe answer these questions
  • 51:13 - 51:15
    uh via chat
  • 51:15 - 51:18
    um so do you see my screen can
  • 51:18 - 51:20
    you see my screen
  • 51:20 - 51:22
    yes
  • 51:22 - 51:25
    um so John I think I will read this
  • 51:25 - 51:29
    question out loud and ask it to you okay so
  • 51:29 - 51:32
    are you ready to answer
  • 51:32 - 51:34
    yes I am
  • 51:34 - 51:35
    so
  • 51:35 - 51:37
    um you're using Azure machine learning
  • 51:37 - 51:40
    designer to create a training pipeline
  • 51:40 - 51:43
    for a binary classification model so
  • 51:43 - 51:45
    what we were doing in our demo
  • 51:45 - 51:48
    right and you have added a data set
  • 51:48 - 51:52
    containing features and labels a Two-
  • 51:52 - 51:54
    Class Decision Forest module so we used
  • 51:54 - 51:57
    a logistic regression model in
  • 51:57 - 51:59
    our example here we're using a Two-
  • 51:59 - 52:01
    Class Decision Forest model
  • 52:01 - 52:04
    and of course a Train Model module you
  • 52:04 - 52:07
    plan now to use score model and evaluate
  • 52:07 - 52:09
    model modules to test the trained model
  • 52:09 - 52:12
    with a subset of the data set that
  • 52:12 - 52:14
    wasn't used for training
  • 52:14 - 52:16
    but what are we missing so what's
  • 52:16 - 52:19
    another module you should add and we have
  • 52:19 - 52:22
    three options we have join data we have
  • 52:22 - 52:25
    split data or we have select columns in
  • 52:25 - 52:27
    dataset
  • 52:27 - 52:28
    so
  • 52:28 - 52:32
    um while John thinks about the answer uh
  • 52:32 - 52:34
    go ahead and
  • 52:34 - 52:35
    um
  • 52:35 - 52:38
    answer yourself so give us your
  • 52:38 - 52:40
    guess
  • 52:40 - 52:42
    put it in the chat or just come off mute
  • 52:42 - 52:45
    and answer
  • 52:47 - 52:49
    a b yes
  • 52:49 - 52:51
    yeah what do you think is the correct
  • 52:51 - 52:54
    answer for this one I need something to
  • 52:54 - 52:57
    uh like I have to score my model and I
  • 52:57 - 53:00
    have to evaluate it so I need
  • 53:00 - 53:03
    something to enable me to do these two
  • 53:03 - 53:05
    things
  • 53:07 - 53:09
    I think it's something you showed us in
  • 53:09 - 53:13
    your pipeline right John
  • 53:13 - 53:17
    of course I did
  • 53:23 - 53:28
    uh we have no guesses yeah
  • 53:28 - 53:32
    does someone want to guess
  • 53:32 - 53:36
    uh we have a b yeah
  • 53:36 - 53:39
    uh maybe
  • 53:39 - 53:43
    so uh in order to do this in order to do
  • 53:43 - 53:46
    this I mentioned the
  • 53:46 - 53:49
    the module that is going to help me to
  • 53:49 - 53:54
    divide my data into two parts 70 percent for
  • 53:54 - 53:56
    the training and thirty percent for the
  • 53:56 - 53:59
    evaluation so what did I use I used
  • 53:59 - 54:02
    split data because this is what is going
  • 54:02 - 54:05
    to split my data randomly into training
  • 54:05 - 54:09
    data and validation data so the correct
  • 54:09 - 54:12
    answer is B and good job eek thank you
  • 54:12 - 54:14
    for participating
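Outside the designer, the same idea John describes here, a random 70/30 split into training and validation data, could be sketched with scikit-learn; the file name below is a placeholder.

```python
# Conceptual parallel to the designer's Split Data module: a random 70/30 split.
import pandas as pd
from sklearn.model_selection import train_test_split

data = pd.read_csv("diabetes.csv")  # placeholder path for the diabetes dataset
train, validation = train_test_split(data, test_size=0.3, random_state=42)
print(len(train), len(validation))  # roughly 70% and 30% of the rows
```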
  • 54:14 - 54:17
    next question please
  • 54:17 - 54:19
    yes
  • 54:19 - 54:23
    answer so thanks John
  • 54:23 - 54:26
    uh for explaining to us the correct
  • 54:26 - 54:27
    one
  • 54:27 - 54:30
    and we want to go with question two
  • 54:30 - 54:33
    yeah so uh I'm going to ask you now
  • 54:33 - 54:36
    Carlotta you use Azure machine learning
  • 54:36 - 54:38
    designer to create a training pipeline
  • 54:38 - 54:40
    for your classification model
  • 54:40 - 54:44
    what must you do before you deploy this
  • 54:44 - 54:46
    model as a service you have to do
  • 54:46 - 54:48
    something before you deploy it what do
  • 54:48 - 54:50
    you think is the correct answer
  • 54:50 - 54:53
    is it a b or c
  • 54:53 - 54:55
    share your thoughts and get in touch
  • 54:55 - 54:58
    with us in the chat and
  • 54:58 - 55:00
    um and I'm also going to give you some
  • 55:00 - 55:03
    a few minutes to think about it before I
  • 55:03 - 55:06
    tell you about it
  • 55:07 - 55:09
    yeah so let me go through the possible
  • 55:09 - 55:12
    answers right so we have a uh create an
  • 55:12 - 55:15
    inference pipeline from the training
  • 55:15 - 55:16
    pipeline
  • 55:16 - 55:19
    uh B we have add an evaluate model
  • 55:19 - 55:22
    module to the training Pipeline and then
  • 55:22 - 55:25
    C we have uh clone the training
  • 55:25 - 55:29
    Pipeline with a different name
  • 55:30 - 55:32
    so what do you think is the correct
  • 55:32 - 55:34
    answer a b or c
  • 55:34 - 55:37
    uh also this time I think it's something
  • 55:37 - 55:39
    we mentioned both in the decks and in
  • 55:39 - 55:42
    the demo right
  • 55:43 - 55:45
    yes it is
  • 55:45 - 55:49
    it's something that I have done like two
  • 55:49 - 55:52
    like five minutes ago
  • 55:52 - 55:57
    it's real time real time what's
  • 55:58 - 55:59
    um
  • 55:59 - 56:02
    yeah so think about it you need to deploy
  • 56:02 - 56:05
    uh the model as a service so uh if I'm
  • 56:05 - 56:08
    going to deploy a model
  • 56:08 - 56:10
    um I cannot like evaluate the model
  • 56:10 - 56:13
    after deploying it right because I
  • 56:13 - 56:15
    cannot go into production if I'm not
  • 56:15 - 56:18
    sure if I'm not satisfied with my model and
  • 56:18 - 56:20
    I'm not sure that my model is performing
  • 56:20 - 56:20
    well
  • 56:20 - 56:23
    so that's why I would go with
  • 56:23 - 56:24
    um
  • 56:24 - 56:30
    I would exclude B from
  • 56:30 - 56:32
    my answer
  • 56:32 - 56:34
    uh while
  • 56:34 - 56:37
    um thinking about C uh I
  • 56:37 - 56:39
    didn't see you John cloning uh the
  • 56:39 - 56:41
    training Pipeline with a different name
  • 56:41 - 56:45
    uh so I don't think this is the
  • 56:45 - 56:47
    right answer
  • 56:47 - 56:50
    um while I've seen you creating an
  • 56:50 - 56:53
    inference pipeline uh yeah from the
  • 56:53 - 56:55
    training Pipeline and you just converted
  • 56:55 - 56:59
    it using uh a one-click button right
  • 56:59 - 57:03
    yeah that's correct so uh this is the
  • 57:03 - 57:04
    right answer
  • 57:04 - 57:07
    uh good job so I created an inference
  • 57:07 - 57:11
    real-time pipeline and it is done
  • 57:11 - 57:13
    it finished the job is
  • 57:13 - 57:18
    finished so uh we can now deploy
  • 57:18 - 57:19
    the model
  • 57:19 - 57:22
    yeah
  • 57:22 - 57:25
    exactly right on time
  • 57:25 - 57:28
    it finished like two seconds
  • 57:28 - 57:31
    three or four seconds ago
  • 57:31 - 57:33
    so uh
  • 57:33 - 57:36
    until like um
  • 57:36 - 57:40
    this is my job overview so
  • 57:40 - 57:43
    uh like this is the job details that I
  • 57:43 - 57:46
    have already submitted it's just opening
  • 57:46 - 57:48
    and once it opens
  • 57:48 - 57:50
    um
  • 57:50 - 57:53
    like I don't know why it's so slow
  • 57:53 - 57:57
    today it's not like that usually
  • 57:59 - 58:01
    yeah it's probably because you are also
  • 58:01 - 58:06
    sharing your screen on Teams
  • 58:06 - 58:08
    okay so that's the bandwidth of your
  • 58:08 - 58:11
    connection exactly let me do something here
  • 58:11 - 58:14
    because yeah finally
  • 58:14 - 58:16
    I can switch to my mobile internet if it
  • 58:16 - 58:19
    does it again so I will click on deploy
  • 58:19 - 58:21
    it's that simple I'll just click on
  • 58:21 - 58:23
    deploy and
  • 58:23 - 58:26
    I am going to deploy a new real-time
  • 58:26 - 58:28
    endpoint
  • 58:28 - 58:30
    so what I'm going to name it the
  • 58:30 - 58:32
    description and the compute type
  • 58:32 - 58:34
    everything is already mentioned for me
  • 58:34 - 58:36
    here so I'm just gonna copy and paste it
  • 58:36 - 58:39
    because we are running
  • 58:39 - 58:41
    out of time
  • 58:41 - 58:46
    so it's an Azure Container Instance
  • 58:46 - 58:49
    which is a containerization service also
  • 58:49 - 58:51
    both are for containerization but this
  • 58:51 - 58:52
    gives you something and this gives you
  • 58:52 - 58:55
    something else for the advanced options
  • 58:55 - 58:57
    it doesn't tell us to do anything so
  • 58:57 - 59:00
    we are just gonna click on deploy
  • 59:00 - 59:05
    and now we can test our endpoint from
  • 59:05 - 59:08
    the endpoints that we can find here so
  • 59:08 - 59:11
    it's in progress if I go here
  • 59:11 - 59:14
    under the assets I can find something
  • 59:14 - 59:17
    called endpoints and I can find the
  • 59:17 - 59:19
    real-time ones and the batch endpoints
  • 59:19 - 59:22
    and we have created a real-time endpoint
  • 59:22 - 59:25
    so we are going to find it under this uh
  • 59:25 - 59:30
    tab so if I click on it I should
  • 59:30 - 59:33
    be able to test it once it's ready
  • 59:33 - 59:37
    it's still like loading but this is the
  • 59:37 - 59:41
    input and this is the output that we
  • 59:41 - 59:45
    will get back so if I click on test and
  • 59:45 - 59:50
    from here I will input some data to the
  • 59:50 - 59:51
    endpoint
  • 59:51 - 59:55
    which is the patient information the
  • 59:55 - 59:57
    columns that we have already seen in our
  • 59:57 - 60:00
    data set the patient ID the pregnancies
  • 60:00 - 60:04
    and of course I'm not gonna
  • 60:04 - 60:06
    enter the label that I'm trying to
  • 60:06 - 60:08
    predict so I'm not going to tell it whether
  • 60:08 - 60:11
    the patient is diabetic or not this end
  • 60:11 - 60:13
    point is supposed to tell me that the end
  • 60:13 - 60:16
    point or the URL is going to give me
  • 60:16 - 60:18
    back this information whether someone
  • 60:18 - 60:23
    has diabetes or not so if I input
  • 60:23 - 60:25
    this data I'm just going to copy
  • 60:25 - 60:28
    it and go to my endpoint and click on
  • 60:28 - 60:30
    test I'm gonna get the result back
  • 60:30 - 60:32
    which are the three columns that we have
  • 60:32 - 60:36
    defined inside our python script the
  • 60:36 - 60:38
    patient ID the diabetic prediction and
  • 60:38 - 60:41
    the probability the certainty of whether
  • 60:41 - 60:46
    someone is diabetic or not based on the
  • 60:46 - 60:51
    prediction so that's it
  • 60:51 - 60:54
    and I think that this is a really
  • 60:54 - 60:57
    simple step to do you can do it on your
  • 60:57 - 60:58
    own you can test it
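Beyond the Test tab, the same endpoint can be called from code. A sketch is below: the URL and key are placeholders, the 'WebServiceInput0' input name and JSON shape follow the usual designer real-time endpoint format, and the feature values are illustrative.

```python
# Sketch of calling the deployed real-time endpoint from Python.
# URL, key and feature values are placeholders, not real credentials or data.
import json
import urllib.request

endpoint_url = "https://<your-endpoint>.azurecontainer.io/score"  # placeholder
api_key = "<your-endpoint-key>"                                    # placeholder

payload = {
    "Inputs": {
        "WebServiceInput0": [{
            "PatientID": 1882185, "Pregnancies": 9, "PlasmaGlucose": 104,
            "DiastolicBloodPressure": 51, "TricepsThickness": 7,
            "SerumInsulin": 24, "BMI": 27.4, "DiabetesPedigree": 1.35, "Age": 43
        }]
    },
    "GlobalParameters": {}
}

request = urllib.request.Request(
    endpoint_url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json",
             "Authorization": f"Bearer {api_key}"})

with urllib.request.urlopen(request) as response:
    # The response should contain the PatientID, DiabetesPrediction and
    # Probability columns selected in the Execute Python Script step.
    print(json.loads(response.read()))
```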
  • 60:58 - 61:01
    and I think that I have finished so
  • 61:01 - 61:03
    thank you
  • 61:03 - 61:07
    uh yes we are running out of time I
  • 61:07 - 61:10
    just wanted to thank you John for
  • 61:10 - 61:12
    this demo for going through all these
  • 61:12 - 61:14
    steps to
  • 61:14 - 61:17
    um create and train a classification model
  • 61:17 - 61:20
    and also deploy it as a predictive
  • 61:20 - 61:23
    service and I encourage you all to go
  • 61:23 - 61:25
    back to the learn module
  • 61:25 - 61:28
    um and deepen all these topics
  • 61:28 - 61:32
    at your own pace and also maybe
  • 61:32 - 61:35
    uh do this demo on your own on your
  • 61:35 - 61:37
    subscription on your Azure for Students
  • 61:37 - 61:39
    subscription
  • 61:39 - 61:43
    um and I would also like to remind you that
  • 61:43 - 61:46
    this is part of a series of study
  • 61:46 - 61:50
    sessions of cloud skill challenge study
  • 61:50 - 61:51
    sessions
  • 61:51 - 61:54
    um so you will have more in the
  • 61:54 - 61:58
    following days and this is for
  • 61:58 - 62:00
    you to prepare let's say to help you
  • 62:00 - 62:05
    in taking the cloud skills challenge
  • 62:05 - 62:07
    which collects
  • 62:07 - 62:11
    some very interesting learn modules that you
  • 62:11 - 62:15
    can use to skill up on various topics
  • 62:15 - 62:18
    and some of them are focused on AI and
  • 62:18 - 62:21
    ml so if you are interested in these
  • 62:21 - 62:23
    topics you can select these learn
  • 62:23 - 62:25
    modules
  • 62:25 - 62:28
    um so let me also copy
  • 62:28 - 62:30
    um the link the short link to the
  • 62:30 - 62:33
    challenge in the chat uh remember that
  • 62:33 - 62:35
    you have time until the 13th of
  • 62:35 - 62:38
    September to take the challenge and also
  • 62:38 - 62:40
    remember that in October on the 7th of
  • 62:40 - 62:43
    October you can join the
  • 62:43 - 62:47
    Student Developer Summit
  • 62:47 - 62:51
    which will be a virtual or
  • 62:51 - 62:53
    in some cases a hybrid
  • 62:53 - 62:56
    event so stay tuned because you will
  • 62:56 - 62:59
    have some surprises in the following
  • 62:59 - 63:01
    days and if you want to learn more about
  • 63:01 - 63:03
    this event you can check the Microsoft
  • 63:03 - 63:08
    Imagine Cup Twitter page and stay tuned
  • 63:08 - 63:11
    so thank you everyone for joining
  • 63:11 - 63:13
    this session today and thank you very
  • 63:13 - 63:16
    much John for co-hosting this
  • 63:16 - 63:20
    session with me it was a pleasure
  • 63:22 - 63:24
    thank you so much Carlotta for having me
  • 63:24 - 63:27
    with you today and thank you for
  • 63:27 - 63:28
    giving me this opportunity to be with
  • 63:28 - 63:30
    you here
  • 63:30 - 63:33
    great yeah I hope that we
  • 63:33 - 63:36
    work together again in the future sure I hope
  • 63:36 - 63:38
    so as well
  • 63:38 - 63:41
    um so
  • 63:44 - 63:46
    bye bye speak to you soon
  • 63:46 - 63:49
    bye