Production Engineer Dashboard & Workflow

_images/Workflow_Block_Diagram.png

The Production Engineer (PE) takes the models optimized and published by the ML Engineer, validates them, compares them against the existing inference models, and deploys them for live inference.

The PE operates on the Serving Cluster.

_images/Prod_Eng_Menu.png

Menu Item        Function
Model Catalog    Optimized models, ready for deployment
Inferences       Deployed inferences

The Model Catalog is the same catalog described in the section Publish Model.

Deployment Workflow

When the ML Engineer publishes a model to the Model Catalog, it also appears on the Serving Cluster. The PE deploys the model using the icon under “Actions” on the far right side of the screen.

_images/Prod_Eng_Model_Catalog.png

Deploying the model creates a serving endpoint on the cluster. This endpoint can be used to:

  • Test the model with inference data to confirm that it meets its goals (see the sketch after this list)

  • Compare its results against the existing live inference model

  • Provide the endpoint for live serving if the model meets the goals
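
For example, the first two checks can be scripted against the endpoints. The sketch below is a minimal illustration, assuming hypothetical endpoint URLs and a simple JSON request/response shape; the actual URLs, payload format, and authentication depend on the deployed model and the cluster configuration.

    import json
    import urllib.request

    # Hypothetical endpoint URLs; substitute the serving endpoints created
    # for the candidate model and the existing live inference model.
    CANDIDATE_URL = "http://serving-cluster.example.com/v1/models/candidate:predict"
    LIVE_URL = "http://serving-cluster.example.com/v1/models/live:predict"

    def predict(url, inputs):
        """POST a JSON inference request and return the model outputs."""
        payload = json.dumps({"inputs": inputs}).encode("utf-8")
        request = urllib.request.Request(
            url, data=payload, headers={"Content-Type": "application/json"}
        )
        with urllib.request.urlopen(request) as response:
            return json.load(response)["outputs"]

    # Send the same test batch to both endpoints and compare the results.
    test_batch = [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]]  # assumed input shape
    candidate_outputs = predict(CANDIDATE_URL, test_batch)
    live_outputs = predict(LIVE_URL, test_batch)

    matches = sum(c == l for c, l in zip(candidate_outputs, live_outputs))
    print(f"candidate/live agreement: {matches}/{len(test_batch)}")

In practice, exact output equality is usually too strict; a task-appropriate measure (accuracy on labeled data, numeric tolerance, or latency) would replace the comparison above.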

_images/Prod_Eng_Inferences.png