
Once a predictive model is built, tested, and validated, you can easily deploy it and use it through MLaaS as a REST Web Service, on any physical or virtual compute host that runs an available ProActive Node. This is particularly useful for engineering or business teams that want to take advantage of the model. The life cycle of any MLaaS instance (i.e., from starting the generic service instance and deploying a specific AI model to pausing or deleting the instance) can be managed in three different ways in MLOS, which are described in this tutorial.

In this tutorial, you will be introduced to managing MLaaS instances in MLOS via:

  1. The Studio Portal, and more specifically the model_as_a_service bucket, which provides generic tasks covering all the possible actions (i.e. Start_Model_Service, Deploy_AI_Model, Call_Prediction_Service, Model_Service_Actions).
  2. The Cloud Automation Portal, by executing the different actions associated with MLaaS (i.e. Deploy_Model_Service, Update_Model_Service, Pause_Model_Service, Resume_Model_Service, Finish_Model_Service).
  3. The Swagger UI, which is accessible once the Cloud Automation (PCA) service is up and running.

1 Management of an MLaaS Instance Using the Studio Portal

Using the ProActive Studio Portal, you can manage the life cycle of an MLaaS instance, i.e., from starting a generic service instance and deploying a specific AI model to pausing or deleting the instance.

  1. Open ProActive Workflow Studio home page.
  2. Click on the View menu.
  3. Click on Add Bucket Menu to the Palette.
  4. Choose model_as_a_service.
  5. Once the Model As A Service bucket appears, click on it, then drag and drop IRIS_Deploy_Predict_Flower_Classifier_Model into the Studio. This workflow contains a pre-built model trainer that loads the IRIS flower dataset, splits it into training and testing sets, and uses these sets to train the model with a classification technique (in this case, Support Vector Machines); a minimal sketch of such a training pipeline is shown after this list. The workflow also includes 4 MLaaS tasks representing its life cycle: Start_Model_Service, Deploy_AI_Model, Call_Prediction_Service, Model_Service_Actions. For more information about the characteristics and features of the MLaaS instance tasks and their variables, please check the MLaaS documentation web page.
  6. Click on the Workflow Variables to check the different variables characterizing the overall workflow.
  7. Click on one of the tasks and then click on Task Variables to check the different variables characterizing the chosen task.
  8. Click on the Execute button.
  9. Click on the Scheduling & Orchestration portal.
  10. Once the workflow finishes its execution, you can click on Call_Prediction_Service and preview the prediction results by opening the browser in the Task Preview tab.
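
For illustration, the following minimal sketch shows the kind of training pipeline encapsulated by this workflow, assuming scikit-learn. It is not the exact code embedded in the workflow tasks, whose details may differ.

    # Illustrative sketch of the training pipeline encapsulated by
    # IRIS_Deploy_Predict_Flower_Classifier_Model (the actual workflow
    # tasks may differ in detail).
    import pickle

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    # Load the IRIS flower dataset and split it into training and testing sets.
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=42
    )

    # Train a Support Vector Machine classifier on the training set.
    model = SVC(kernel="rbf", gamma="scale")
    model.fit(X_train, y_train)
    print("Test accuracy:", model.score(X_test, y_test))

    # Serialize the trained model so it can be deployed by Deploy_AI_Model
    # (the service expects a pickled model file such as model.pkl).
    with open("model.pkl", "wb") as f:
        pickle.dump(model, f)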

2 Management of an MLaaS Instance Using the Cloud Automation Portal

The MLaaS instance life cycle can also be managed using the Cloud Automation Portal by following the steps below:

  1. Open ProActive Cloud Automation home page.
  2. In the Service Activation tab, search for Model_Service and click on it.
  3. A window with several variables will appear. In order to run the service, you need to set some variables as follows:
    For example:
    • INSTANCE_NAME: In this variable, you provide a name for the instance to be launched.
    • DRIFT_ENABLED: If True, any drift in the data that exceeds a specific threshold (specified in DRIFT_THRESHOLD) will be detected and the user will be informed.
    For more information about the variables, please visit the MLaaS documentation web page.
    Click on the Execute Action button to start the service.
  4. The started model service instance will appear in the Activated Services list with Running as its Current State.
  5. Under Actions, you will find a drop-down list of actions that can be applied to the running model service instance.
  6. Click on Deploy_Model_Service_cloud_automation action and then click on the Execute button just beside it.
  7. A window with different variables will appear. In order to deploy your own trained model, set the following variables as follows:
    For example:
    • MODEL_URL: https://activeeon-public.s3.eu-west-2.amazonaws.com/models/model.pkl. This is the URL where the trained model can be found.
    • MODEL_METADATA: [[5.8216666667,3.0658333333,3.695,1.1766666667],[0.8128364419,0.4385797999,1.7614380107,0.7581194484]]. This is a JSON variable containing some statistical measures. In this tutorial, we use the mean and the standard deviation of each feature, extracted from the training data used to build the model that will be deployed. This variable helps in detecting the drifts that may occur in the data; a sketch of how such values can be computed is shown after this list.
    • USER_NAME: user. A valid username should be provided in order to obtain a token that enables the deployment of the model.
    For more information about these variables, please visit the MLaaS documentation web page. Click on the Execute Action button to deploy the model.
  8. Once the model service instance is successfully deployed, click on maas-gui to view the Audit & Traceability page. In this page, you can check the different variables of the instance and examine its traceability over time.
  9. Click on the Click here to visualize the predictions link above to visualize the trained model's predictions. This link only appears if you kept True for the LOGGING_PREDICTION variable before the execution (which is the default value).
  10. In the drop-down list of Actions, the Update_Model_Service_cloud_automation action updates the deployed instance according to the updated variables, the Pause_Model_Service_cloud_automation action pauses the service instance, and the Finish_Model_Service_cloud_automation action finishes and deletes the service instance.
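
As a complement to step 7, here is a hedged sketch of how the MODEL_METADATA values (per-feature mean and standard deviation of the training data) can be computed, together with the kind of z-score check a drift detector might apply against DRIFT_THRESHOLD. The service's exact drift-detection logic is not shown here and may differ; the values printed below are computed on the full IRIS dataset, so they will differ slightly from the tutorial values, which come from a training split.

    import json

    import numpy as np
    from sklearn.datasets import load_iris

    # Per-feature statistics of the data used to train the model. In the
    # tutorial these are computed on a training split, so the numbers
    # printed here (full dataset) will differ slightly.
    X, _ = load_iris(return_X_y=True)
    mean = X.mean(axis=0)
    std = X.std(axis=0)

    # The value to paste into the MODEL_METADATA variable: [means, stds].
    print(json.dumps([mean.round(10).tolist(), std.round(10).tolist()]))

    # Simplified illustration of a drift check: a sample is flagged when any
    # feature's z-score exceeds the threshold (cf. DRIFT_THRESHOLD).
    def is_drifting(sample, mean, std, threshold=3.0):
        z_scores = np.abs((np.asarray(sample) - mean) / std)
        return bool((z_scores > threshold).any())

    print(is_drifting([5.1, 3.5, 1.4, 0.2], mean, std))  # typical sample -> False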

3 Management of an MLaaS Instance Using the Swagger UI

Once the Cloud Automation service is launched and running using the Cloud Automation Portal, click on the maas-gui endpoint. In the Audit & Traceability page, click on the link provided at the top of the page to access the Swagger UI. Using the Swagger UI, a user is able to deploy a machine learning model as a service; once deployed, the model can be called to compute predictions for input datasets. In the Swagger UI, you can find several actions to manage a model service instance; a hedged code sketch of the token/deploy/predict flow is shown after the list below.

  1. Open the Swagger home page by clicking on the second link on the top of the opened endpoint page.
  2. Start by clicking on the get_token section to obtain a token for your service.
  3. Click on the Try it out! button. The token ID will appear in the Response Body subsection. Copy this token ID.
  4. Click on the deploy section, choose your machine learning model file to be uploaded in model_file, then paste the copied token ID in api_token and click on Try it out! to deploy the model.
  5. If your model is already deployed using the Cloud Automation Portal, go to the predict section.
  6. Click on the Model | Example Value section.
  7. The information will appear in the data section. Paste the token ID in api_token and click on Try it out!
  8. The predictions will appear in the Response Body section.
  9. There are several other actions that can be applied using the Swagger UI, such as listing all the deployed models, redeploying a specific model, undeploying a model, etc.
  10. For more information about the Swagger UI, please visit the MLaaS documentation web page.
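
The same get_token/deploy/predict flow can also be scripted, for example with Python's requests library. The base URL, endpoint paths, and parameter names below (get_token, deploy, predict, user, model_file, api_token, and the prediction payload) are assumptions modeled on the Swagger sections described above; check them against the Swagger UI of your own instance before use.

    import requests

    # Hypothetical MaaS base URL; use the one shown in your Swagger UI.
    BASE_URL = "https://try.activeeon.com:9000/api"

    # 1. Obtain an API token for a valid username (get_token section).
    token = requests.get(f"{BASE_URL}/get_token", params={"user": "user"}).text

    # 2. Deploy a trained model file (deploy section).
    with open("model.pkl", "rb") as f:
        resp = requests.post(
            f"{BASE_URL}/deploy",
            files={"model_file": f},
            data={"api_token": token},
        )
    print("Deploy response:", resp.text)

    # 3. Call the deployed model (predict section). The payload below is a
    # hypothetical example; copy the Model | Example Value shown in your
    # Swagger UI instead.
    payload = {
        "api_token": token,
        "dataframe_json": '{"columns": [0, 1, 2, 3], "data": [[5.1, 3.5, 1.4, 0.2]]}',
    }
    resp = requests.post(f"{BASE_URL}/predict", json=payload)
    print("Predictions:", resp.text)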