Machine Learning Runtime Challenges (Model-Ops)

Hany Hossny, PhD
4 min read · Mar 3, 2022

Once an ML model is deployed to production, many issues arise at runtime that reduce its predictive accuracy and shorten its useful lifetime. Without monitoring, the model can easily hit one of these runtime bombs and fail due to data drift, concept drift, anomalies, and more.

Model-Ops continuously monitors the deployed model to detect issues preemptively and fix them before they cause any trouble.

Data Drift

Data drift happens when the incoming data stream follows a pattern different from the pattern in the historical data used to train the model. It is also called feature drift, population shift, or covariate shift. Data drift usually occurs with temporal, non-stationary, heteroskedastic, and graph-based data. It means the features (independent variables) used to build the model have changed, so the model receives input outside the range it was trained on and produces predictions other than what they were supposed to be.

When normally distributed data drifts to a bi-modal distribution, the current model will not work as accurately as it should.
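One common way to detect this kind of shift is the Population Stability Index (PSI), which compares the binned distribution of a feature at training time against the live stream. The sketch below is illustrative (the function name, bin count, and the 0.2 alert threshold are conventions I'm assuming, not something prescribed by the article):

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare a live feature sample against the training-time
    distribution. PSI > 0.2 is a common rule-of-thumb signal of
    significant data drift (the threshold is a convention)."""
    # Bin edges come from the training (expected) distribution
    edges = np.histogram_bin_edges(expected, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range live values
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor the proportions to avoid log(0) on empty bins
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(0)
train = rng.normal(0, 1, 10_000)      # historical feature values
live_ok = rng.normal(0, 1, 10_000)    # same distribution: low PSI
live_drifted = np.concatenate([       # drifted to bi-modal: high PSI
    rng.normal(-2, 0.5, 5_000), rng.normal(2, 0.5, 5_000)])

print(population_stability_index(train, live_ok))
print(population_stability_index(train, live_drifted))
```

In this example, the bi-modal sample produces a PSI far above the unchanged sample, which is exactly the normal-to-bi-modal scenario described above.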

Concept Drift

Concept drift is the deviation of the target variable from its old pattern; it happens when the statistical properties of the target variable change at runtime. Considering that the trained model is a function mapping the input…
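Because concept drift changes the relationship between inputs and the target, a simple runtime signal is the model's rolling error rate: if recent errors climb well above the error measured at training time, the learned mapping has likely drifted. A minimal sketch, assuming labels arrive with some delay (the class name, window size, and tolerance are my own illustrative choices):

```python
from collections import deque

class ErrorDriftMonitor:
    """Flag possible concept drift when the rolling error rate over the
    last `window` predictions exceeds the training-time baseline error
    by more than `tolerance`. Thresholds here are illustrative."""

    def __init__(self, baseline_error, window=500, tolerance=0.05):
        self.baseline = baseline_error
        self.window = deque(maxlen=window)
        self.tolerance = tolerance

    def update(self, y_true, y_pred):
        # Record 1 for a miss, 0 for a hit
        self.window.append(int(y_true != y_pred))
        if len(self.window) < self.window.maxlen:
            return False  # not enough evidence yet
        recent = sum(self.window) / len(self.window)
        return recent > self.baseline + self.tolerance

# Stable phase: ~10% errors, matching the 10% baseline -> no alert.
monitor = ErrorDriftMonitor(baseline_error=0.10, window=100)
alerts = [monitor.update(1, 1 if i % 10 else 0) for i in range(100)]
# Drifted phase: ~50% errors -> alert fires once the window fills.
alerts += [monitor.update(1, i % 2) for i in range(100)]
```

Production drift detectors such as DDM or ADWIN refine this idea with statistical tests, but the principle is the same: watch the target-side error, not just the input features.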

Written by Hany Hossny, PhD

An AI/ML enthusiast, academic researcher, and lead scientist @ Catch.com.au Australia. I like to make sense of data and help businesses to be data-driven
