What is “concept drift” in machine learning?

Last updated on March 27th, 2024

When it is obvious that the goals cannot be reached, don't adjust the goals, adjust the action.

Concept Drift is a term you seldom hear from companies selling AI and Machine Learning solutions, but it’s crucial to understand if you want to realize a sustained return on your Machine Learning investment. This is because, like many things you buy, Machine Learning models perform best when they are new, with their performance degrading over time. Of course, a model’s performance does not wear out as a mechanical component might, but it does degrade as the input data drifts and the relationships between inputs change.

Concept Drift Explained

This degradation happens due to two factors: changes in the data on which the model was trained, a phenomenon known as ‘Data Drift’, and changes in the relationships between the input signals on which the model is based, known as ‘Model Decay’.

Data Drift occurs because data is not constant, even in a well-controlled manufacturing environment. Drift occurs for a myriad of reasons; for example, a sensor may be susceptible to variations in temperature, humidity, or vibration. For highly sensitive pressure sensors used in leak testing, even slight variations in atmospheric pressure can influence your Machine Learning model’s performance.

Model Decay resulting from shifting relationships between signals over time is harder to quantify, but a change in raw materials and increased tool wear are two of the most common culprits.

These changes move the dataset away from the historical ‘Training’ dataset, i.e., the conditions under which the Machine Learning model was developed. Data Drift causes a gap between the training dataset and the ‘live’ dataset that’s actually used to make predictions. Model decay causes the model’s predictions to drift away from the reality of the production line.
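To make the idea of a gap between the training and live datasets concrete, here is a minimal sketch of one common way to check for Data Drift: comparing a live feature’s distribution against its training distribution with a two-sample Kolmogorov–Smirnov test. This is an illustration, not a description of any particular product; it assumes Python with NumPy and SciPy, and the sensor readings are simulated.

```python
import numpy as np
from scipy.stats import ks_2samp

def feature_has_drifted(train_values, live_values, alpha=0.05):
    """Flag drift when the live distribution differs significantly
    from the training distribution (two-sample KS test)."""
    statistic, p_value = ks_2samp(train_values, live_values)
    return p_value < alpha, statistic

# Simulated pressure-sensor readings whose mean has shifted slightly.
rng = np.random.default_rng(0)
train = rng.normal(loc=100.0, scale=2.0, size=5000)  # historical data
live = rng.normal(loc=100.8, scale=2.0, size=500)    # recent data

drifted, stat = feature_has_drifted(train, live)
print(f"Drift detected: {drifted} (KS statistic = {stat:.3f})")
```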

Ultimately, the consequence of Concept Drift, whether it results from Data Drift, Model Decay, or both, is incorrect predictions coming out of the Machine Learning model, which undermines its business value.


The Problem with Concept Drift

Concept Drift is rarely spoken about by Industry 4.0 companies keen to sell you on Machine Learning because the typical solution is expensive and labour-intensive: retraining and deploying a whole new model. This is a problem you will have to tackle regardless of whether you’re implementing a Machine Learning model from an external vendor or developing internal Machine Learning capabilities.

The only way to maintain value once a Machine Learning model is built out and deployed is to constantly monitor the input data and the accuracy of the model’s predictions to ensure continued performance. Specifically, the data must be monitored for signs of drift, and the model must be monitored for signs of decay.
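In practice, that monitoring can be as simple as tracking a rolling error metric and raising an alert when it exceeds the level measured at deployment. The sketch below illustrates the idea for a regression model; the class and thresholds are hypothetical, not part of any specific toolkit.

```python
from collections import deque

import numpy as np

class DecayMonitor:
    """Track a rolling window of prediction errors and flag decay
    when the recent mean error exceeds the deployment baseline."""

    def __init__(self, baseline_mae, window=200, tolerance=1.25):
        self.baseline_mae = baseline_mae  # MAE measured at deployment time
        self.tolerance = tolerance        # e.g. alert at +25% error
        self.errors = deque(maxlen=window)

    def record(self, y_true, y_pred):
        self.errors.append(abs(y_true - y_pred))

    def decayed(self):
        if len(self.errors) < self.errors.maxlen:
            return False  # not enough recent evidence yet
        return float(np.mean(self.errors)) > self.baseline_mae * self.tolerance

# Usage: feed in each (actual, predicted) pair as ground truth arrives,
# and take action once monitor.decayed() returns True.
monitor = DecayMonitor(baseline_mae=0.8)
```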


Once a drop in performance becomes significant, you have to decide how to respond. There are several options to that end. You can:

  1. Do Nothing – Accept that the accuracy of the predictions will fall away with time. Obviously, this is unlikely to be acceptable to companies that have already invested significant time and capital into Machine Learning solutions.
  2. Re-fit the Model – Retrain your outdated model on the most recent dataset coming in. In theory, that should enable better predictions based on the updated patterns in the new dataset (see the sketch after this list). While this can be effective as a response to Data Drift, it does not compensate for Model Decay.
  3. Retrain the Model – Train and deploy a new model when the previous model’s performance drops below an acceptable level. While this method is the most effective, updating and training a new model, as well as its deployment, can take significant time.
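As a rough illustration of options 2 and 3, the sketch below contrasts re-fitting (incrementally updating the existing model on recent data) with retraining (fitting a fresh model once performance falls below an acceptable level). It assumes Python with scikit-learn; the model choice and error threshold are illustrative only.

```python
from sklearn.linear_model import SGDRegressor
from sklearn.metrics import mean_absolute_error

# Option 2: re-fit – incrementally update the existing model on recent data.
def refit(model, X_recent, y_recent):
    model.partial_fit(X_recent, y_recent)  # SGD-style models support this
    return model

# Option 3: retrain – fit a brand-new model when performance degrades.
def retrain_if_needed(model, X_recent, y_recent, max_mae=1.0):
    mae = mean_absolute_error(y_recent, model.predict(X_recent))
    if mae > max_mae:
        model = SGDRegressor().fit(X_recent, y_recent)
    return model
```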

There are pros and cons to each option, but the basic trade-off is between model performance and manual labour: the more you want of the former, the more you need of the latter.

Survival of the Fittest in the Colosseum


The problem with Machine Learning is that it’s about creating a mathematical model of the world, and the world tends not to be cooperative.

Acerta recognized this problem early on and developed a solution we call the Colosseum to deal with it. While it started as a benchmarking tool to compare the performance of different models, our data scientists and engineers quickly realized its potential for solving the challenge of Concept Drift in the real world.

The Colosseum is part of our MLOps methodology. This approach allows us to treat Machine Learning models like any other piece of software: testing, vetting, and deploying them in a reliable way. Automating as much of that process as possible enables us to do this economically at scale. This is key to the successful widespread adoption of Machine Learning models across the production environment, and it frees Acerta’s Data Scientists to keep developing new models rather than maintaining existing deployments.

The Colosseum allows us to test models at scale and return the results, automatically generating benchmarking, hyperparameter optimization, or cross-validation experiments to determine which model is best for the job. The models are continually pitted against each other until the fittest one emerges as the victor for a given application.
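The Colosseum itself is proprietary, but the core idea of pitting candidate models against one another can be sketched in a few lines. The candidates and scoring metric below are illustrative stand-ins, not Acerta’s actual setup; the example assumes Python with scikit-learn.

```python
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

def run_competition(X, y):
    """Cross-validate each candidate and return the fittest, fully fitted."""
    candidates = {
        "ridge": Ridge(),
        "random_forest": RandomForestRegressor(n_estimators=200),
        "gradient_boosting": GradientBoostingRegressor(),
    }
    scores = {
        name: cross_val_score(model, X, y, cv=5,
                              scoring="neg_mean_absolute_error").mean()
        for name, model in candidates.items()
    }
    winner = max(scores, key=scores.get)  # least-negative score wins
    return candidates[winner].fit(X, y), winner, scores
```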

And the best part is that because we’re doing this automatically in the cloud, we can run many of these “battles” simultaneously, flexibly scaling the infrastructure up or down on demand. For one deployment, we ran 7,000 experiments in a day, a task that would be impossible to perform manually.

By adding the ability to monitor data and deploy models if they “win” in the Colosseum, we can trigger a competition whenever there’s a change in client data. Once the best model emerges, it is automatically tested for validity, robustness, and bias, and benchmarked against the currently deployed model.

If the new model performs better, a decision maker gives it the final ‘Thumbs Up’ or ‘Thumbs Down’, anointing the victor to be deployed almost instantly. This maintains effectiveness while freeing up valuable data science resources to deliver more value.
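Putting those last two steps together, a drift-triggered champion/challenger check might look like the following sketch. The helper names are hypothetical, and the approve callback stands in for the human ‘Thumbs Up’ described above.

```python
from sklearn.metrics import mean_absolute_error

def promote_if_better(champion, challenger, X_holdout, y_holdout,
                      approve=lambda report: True):
    """Benchmark a challenger against the deployed champion and promote
    it only if it wins on held-out data and a human approves."""
    report = {
        "champion_mae": mean_absolute_error(y_holdout, champion.predict(X_holdout)),
        "challenger_mae": mean_absolute_error(y_holdout, challenger.predict(X_holdout)),
    }
    if report["challenger_mae"] < report["champion_mae"] and approve(report):
        return challenger, report  # challenger becomes the new champion
    return champion, report
```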

