Once a predictive model has been built, tested, and validated, you can deploy it with MLaaS as a REST web service on any physical or virtual compute host that has an available ProActive Node. This is particularly useful for engineering or business teams that want to take advantage of the model. The life cycle of an MLaaS instance (from starting the generic service instance and deploying a specific AI model, to pausing or deleting the instance) can be managed in three different ways in MLOS, each described in this tutorial.
In this tutorial, you will learn how to manage MLaaS instances in MLOS via:
1 Management of a MLaaS Instance Using the Studio Portal
Using the ProActive Studio Portal, you can manage the life cycle of an MLaaS instance, i.e., from starting a generic service instance and deploying a specific AI model, to pausing or deleting the instance.
2 Management of a MLaaS Instance Using the Cloud Automation Portal
The life cycle of an MLaaS instance can also be managed using the Cloud Automation portal by following the steps below:
3 Management of a MLaaS Instance Using the Swagger UI
Once the Cloud Automation service is launched and running in the Cloud Automation Portal, click on the maas-gui endpoint. In the Audit & Traceability page, click on the link provided at the top of the page to access the Swagger UI. Using the Swagger UI, a user can deploy a machine learning model as a service. Once the model is deployed as a service, it can be called to compute predictions for input datasets. The Swagger UI exposes several actions to manage a model service instance.
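Beyond the Swagger UI, the same REST endpoints can be called programmatically. The sketch below shows how such a prediction request could be assembled in Python; the base URL, the `/predict` path, the `api_token` header, and the payload shape are all assumptions for illustration only, so check the Swagger UI of your own MaaS instance for the exact paths and parameters.

```python
# Hypothetical sketch of calling a deployed model service over REST.
# The endpoint path, header name, and payload format below are assumed,
# not taken from the official MaaS API -- verify them in your Swagger UI.
import json
import urllib.request

BASE_URL = "http://localhost:8080/model_service/api"  # assumed base URL

def build_predict_request(base_url, token, instances):
    """Build a POST request asking the deployed model for predictions."""
    payload = json.dumps({"predict": instances}).encode("utf-8")
    return urllib.request.Request(
        url=f"{base_url}/predict",                 # assumed endpoint path
        data=payload,
        headers={"Content-Type": "application/json",
                 "api_token": token},              # assumed auth header
        method="POST",
    )

# Example: one input row with four features (e.g., an Iris sample).
req = build_predict_request(BASE_URL, "my-token", [[5.1, 3.5, 1.4, 0.2]])
print(req.full_url)       # http://localhost:8080/model_service/api/predict
print(req.get_method())   # POST
# The request would then be sent with urllib.request.urlopen(req).
```

Sending the request (for instance with `urllib.request.urlopen`) requires a running MaaS instance and a valid session token obtained when the service was deployed.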