Foundations Orbit


Managing performance of machine learning models in production

Business operations, services, and products across all industries are becoming increasingly powered by machine learning systems. Once you have developed a model and deployed it into a production system, you need to monitor and manage its performance to ensure your model does not degrade over time. However, managing the performance of machine learning models after deployment poses new challenges that can't be solved using traditional IT performance management tools or ad-hoc processes.

Managing machine learning models is hard due to several common challenges:

  • Population and concept drift. Your models are trained to learn the relationship between inputs and outputs in historical datasets, which you then use to run inference on future, live data. However, many real-world factors, such as changes in processes or user/customer behaviour, can affect the data your models predict on. This includes shifts in the distribution or domain of future input data, as well as changes in the relationship between input and output. When this happens, your model's performance can deteriorate.

  • Broken data pipeline upstream. Machine learning model training and inference often rely on large amounts of data loaded from different sources, managed by different parties, and updated at different frequencies. Unexpected changes, such as the addition of a new product code to a column, can significantly degrade the performance of live machine learning models.

  • Lack of visibility into performance. Getting value out of machine learning initiatives is a team sport. How do you monitor the performance of your models and communicate with relevant stakeholders, many of whom may not be technical?

  • Issues can go undetected. Many of the issues above originate outside the control of the data science teams that develop the models and can happen at any time post-deployment. These issues often occur silently: your models keep running and producing predictions that people act upon, leading to bad outcomes. With infrequent, ad-hoc performance check-ups and analyses, issues can go undetected for months.
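As a concrete illustration of the drift and broken-pipeline challenges above, the sketch below shows in plain Python the kind of checks automated monitoring runs. The function names, column values, and tolerance are illustrative only; they are not part of Orbit's API.

```python
# Illustrative sketch (not Orbit's API): two basic checks a monitor might
# run on a column of incoming data.

def check_new_categories(baseline_values, live_values):
    """Flag category codes seen in live data but never seen at training time."""
    return sorted(set(live_values) - set(baseline_values))

def check_mean_shift(baseline_values, live_values, tolerance=0.1):
    """Flag a shift in the mean beyond a relative tolerance (population drift)."""
    baseline_mean = sum(baseline_values) / len(baseline_values)
    live_mean = sum(live_values) / len(live_values)
    return abs(live_mean - baseline_mean) > tolerance * abs(baseline_mean)

# A new product code "C" appears upstream without warning:
unexpected = check_new_categories(["A", "B"], ["A", "B", "C"])
# unexpected == ["C"] -- the kind of silent change that degrades live models
```

Run on a schedule, checks like these surface silent upstream changes within one monitoring cycle instead of months later.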

What is Orbit?

Orbit was built by our machine learning engineers to dramatically improve the monitoring of model performance in production. Orbit consists of a few core concepts:

  • Data Contracts - A baseline of your dataset that guides various data quality tests used to validate new data.

  • Orbit Monitor - A set of data quality tests or metrics evaluated on a specific schedule.

  • GUI - A web interface to view the validation report of a Data Contract.

  • Orbit SDK & CLI - A Python SDK and command-line interface for creating Data Contracts and Monitors, and for scheduling the execution of Data Contracts.

  • Built-in Scheduler - Schedules the cadence of Monitors so that Data Contracts are validated against incoming data on an ongoing basis.

How would you use Orbit?

A typical usage pattern would look like this:

Baseline dataset for validation of future, live data

Create baselines for data at different points in your machine learning pipeline (e.g. from your training datasets) using Data Contracts. Data Contracts can be generated easily to summarize the expected statistics and tests used to validate that future data have not deviated from expectations.
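This page doesn't show the Data Contract API itself, but conceptually a contract amounts to a serializable summary of expected statistics computed from a baseline dataset. A minimal pure-Python sketch of that idea, with illustrative function names and data:

```python
# Conceptual sketch of a "data contract" baseline (illustrative names,
# not Orbit's SDK): summarize expected statistics per column.
import json
import statistics

def summarize_column(values):
    """Capture the expected statistics for one numeric column."""
    return {
        "min": min(values),
        "max": max(values),
        "mean": statistics.mean(values),
        "stdev": statistics.stdev(values),
        "row_count": len(values),
    }

def create_contract(dataset):
    """Build a baseline summary for every column of a training dataset."""
    return {column: summarize_column(values) for column, values in dataset.items()}

training_data = {"age": [23, 35, 41, 29, 52]}
contract = create_contract(training_data)
serialized = json.dumps(contract)  # contracts are just data: easy to save and ship
```

Because the baseline is plain data, it can be versioned alongside the model and handed from data scientists to the engineers who operate the pipeline.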

Create monitors to regularly validate data in production

Once you have created a baseline of your data using Data Contracts, create a monitoring script (an Orbit Monitor) to load and validate incoming data against the Data Contract. You are essentially running automated tests on your machine learning data pipeline.
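To make "running automated tests on your data pipeline" concrete, here is a hedged sketch of what a monitor's validation step does, assuming a simple contract of per-column min/max expectations (the names and report shape are ours, not the SDK's):

```python
# Conceptual sketch of what an Orbit Monitor automates (illustrative names,
# not the actual SDK): validate incoming data against a baseline contract.

def validate_against_contract(contract, live_dataset):
    """Return a per-column report of which baseline expectations failed."""
    report = {}
    for column, expected in contract.items():
        values = live_dataset.get(column)
        failures = []
        if values is None:
            failures.append("missing_column")
        else:
            if min(values) < expected["min"]:
                failures.append("below_expected_min")
            if max(values) > expected["max"]:
                failures.append("above_expected_max")
        report[column] = {"passed": not failures, "failures": failures}
    return report

contract = {"age": {"min": 18, "max": 65}}
report = validate_against_contract(contract, {"age": [25, 71]})
# report["age"]["failures"] == ["above_expected_max"]
```

A failing report is the signal to alert the team before bad predictions reach downstream consumers.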

Create monitors to regularly track model and business metrics

You can also define custom functions that compute the performance and business metrics for your machine learning models. Then you can create Orbit Monitors that automatically track the metrics for you over time.
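A custom metric function is just a function of predictions and ground truth; each scheduled run appends one point, building the time series a Monitor tracks. A minimal sketch (the function and log names are illustrative, not Orbit's API):

```python
# Illustrative sketch: a custom metric function plus a running log, the kind
# of thing an Orbit Monitor would evaluate on a schedule (names are ours).
from datetime import date

def accuracy(predictions, actuals):
    """A simple custom model metric: fraction of correct predictions."""
    correct = sum(p == a for p, a in zip(predictions, actuals))
    return correct / len(predictions)

metric_log = []  # each scheduled run appends one point, forming a time series

def record_metric(run_date, predictions, actuals):
    metric_log.append({"date": run_date, "accuracy": accuracy(predictions, actuals)})

record_metric(date(2020, 1, 1), [1, 0, 1, 1], [1, 0, 0, 1])
# metric_log[-1]["accuracy"] == 0.75
```

Business metrics (revenue impact, approval rates, and so on) slot into the same pattern: any function you can compute from pipeline data can be tracked over time.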

Schedule monitors using the built-in scheduler or your existing scheduler

Orbit Monitors are essentially Python scripts that can be easily triggered using your existing pipeline scheduling tools, such as Airflow or Luigi. Orbit also comes with a built-in scheduler, which you can use to schedule and manage Monitors via our GUI.
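Because a Monitor is ultimately a Python script, even plain cron can trigger it if you are not running Airflow or Luigi. The script path below is a placeholder, not a path Orbit creates:

```shell
# Run the (hypothetical) monitor script every day at 06:00; any scheduler
# that can launch a Python process will do.
0 6 * * * /usr/bin/python /opt/ml/monitors/validate_incoming_data.py
```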

View metrics dashboard and validation report on GUI

You can view data quality results, model and business metrics, input and concept drift, and more, all in one centralized place in the GUI.

Who does Orbit help?

Whether you are a data scientist currently building a model, a data engineer setting up a pipeline for machine learning models, or a model owner managing multiple projects in a large organization, Orbit can help you simplify your monitoring processes.

Individual Data Scientists can use Orbit's Data Contract feature to capture the characteristics of their training datasets, which can be delivered along with code and models to production engineers for deployment. Data Contract features also help them better understand and check their datasets at training time.

Machine Learning Engineers can use Orbit to add lightweight Monitors that quality-control the machine learning models they put into production, replacing custom, ad-hoc scripts for monitoring data quality issues, input & concept drift, and model metrics.

Data Engineers can use Orbit to add lightweight Monitors that quality-control their data pipelines for machine learning models with little additional code. This lets them detect early whether live data in the pipeline risk degrading model performance.

Model Owners / Managers can use Orbit to keep a pulse on all machine learning projects in a centralized place, without going back and forth with data scientists and machine learning engineers.

Business Sponsors for ML initiatives can use Orbit to get an executive view of the business impact of their machine learning initiatives, without the lengthy delays of manual processes.

Next Steps

Check out our walkthrough to dive into using Orbit.

You can also check out our SDK & CLI reference documentation to find further information.

To learn more about the entirety of the Orbit platform, including Orbit Enterprise Edition, and how we can accommodate your needs, please reach out to us here.


Copyright (C) DeepLearning Financial Technologies Inc. - All Rights Reserved

Unauthorized copying, distribution, reproduction, publication, or use of this library, via any medium, is strictly prohibited. Proprietary and confidential.