What is anomaly detection in manufacturing?

Last updated on November 13th, 2023

Anomaly detection is the process of applying advanced statistical methods to data, with the goal of identifying rare events, patterns, and outliers that differ significantly from a dataset’s normal behavior.

Leading manufacturers are implementing Industry 4.0 initiatives that use anomaly detection to find outliers in their data that could affect product quality and production operations. With better visibility into variations in their processes, they can take corrective action more swiftly and effectively. Increasingly, anomaly detection is being relied on as a powerful complement to Statistical Process Control (SPC).

Data collection and anomaly detection

Industry 4.0 has led to a dramatic increase in the number of tools available to help companies collect, measure, and manage data from all aspects of their operations. The data can include an enormous amount of information from across the manufacturing line. This dataset encompasses data patterns that indicate normal operations, and any unforeseen change to these patterns is interpreted as an anomaly.  

Why detect anomalies in manufacturing?

In a manufacturing context, anomalies are simply variations from the norm. They can have both positive and negative implications. For example, anomalous data can reveal problems, like a technical error on the production line. An anomaly can also represent an opportunity, by revealing a way to improve a manufacturing process.   

A process variation that shows up as an anomaly in the data can indicate that a defect has occurred, or is likely to occur. By monitoring these events in the data, manufacturers can learn important information that can help them control quality and prevent or investigate defects.

When analyzing data from the production line, it is important to look not only for changes in patterns and for outliers: the absence of change can also indicate an anomaly if a change was expected in the data’s normal behavior.

Detecting anomalies with machine learning

Anomaly detection is a very complex process. To analyze the massive amounts of data generated in manufacturing, powerful analytical capabilities are necessary.

Machine learning enables manufacturers to leverage the wealth of data they collect, giving them the insights they need to identify patterns, detect anomalies, and pinpoint outliers. The three key machine learning anomaly detection methods are: 

  1. Unsupervised: Unsupervised anomaly detection uses unlabelled data to uncover patterns in a dataset, such as identifying anomalies and trends across a supply chain, or interpreting historical and current sensor data to predict and prevent equipment downtime.
  2. Supervised: Supervised anomaly detection models analyze datasets that have been labelled as normal or abnormal to solve well-defined problems, such as predicting and identifying product defects based on known (labelled) failure data.
  3. Semi-supervised: Semi-supervised anomaly detection combines the two approaches and uses both labelled and unlabelled data. For example, in the automotive industry it can compare tested (labelled) parts data to untested (unlabelled) parts data to predict which untested parts will fail.

Because labelling data is a painstaking exercise, most manufacturers work with unlabelled or partially labelled data. A combination of the methods can be used, depending on the type of data, when and where it was captured, and the application.
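
To make the unsupervised case concrete, here is a minimal sketch using scikit-learn’s IsolationForest on unlabelled sensor readings. The feature values, names, and contamination setting are illustrative assumptions, not recommendations for a real production line.

```python
# Minimal sketch of unsupervised anomaly detection on unlabelled sensor data.
# The simulated readings below (temperature, pressure) are illustrative only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" process readings: two features per observation.
normal = rng.normal(loc=[70.0, 1.2], scale=[1.5, 0.05], size=(500, 2))

# A few readings that drift well outside normal operating behaviour.
drifted = rng.normal(loc=[80.0, 0.9], scale=[1.5, 0.05], size=(5, 2))

readings = np.vstack([normal, drifted])

# Fit on the unlabelled data; contamination is the expected fraction of anomalies.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(readings)  # -1 = anomaly, 1 = normal

print("Flagged readings:")
print(readings[labels == -1])
```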


Anomaly detection in time-series data 

Time-series data are observations recorded as a sequence of values over time. Each data point is timestamped and assigned a value at the moment it is recorded. This data can be used to forecast expected behavior and to uncover outliers among the extreme data points in the dataset. Manufacturing process data falls into this category.
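
As a minimal illustration of what timestamped process data looks like, the short sketch below builds a small time-indexed series with pandas; the column name and values are made up for the example.

```python
# A minimal sketch of time-series process data: each observation gets a
# timestamp and the value recorded at that moment. Values are illustrative.
import pandas as pd

readings = pd.DataFrame(
    {
        "timestamp": pd.date_range("2023-11-13 08:00", periods=6, freq="min"),
        "temperature_c": [70.1, 70.3, 70.2, 70.4, 70.3, 78.9],  # last value looks suspicious
    }
).set_index("timestamp")

print(readings)
```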

Anomalies within a time-series dataset are divided into three main categories: 

  1. Global outliers: Also known as point anomalies, global outliers exist outside the entirety of a dataset. They are the data points that deviate the most from all other data within a given dataset.
    For example, if a control chart displays a data point outside the specified range, this can be considered a global outlier (see the detection sketch after this list).

  2. Contextual outliers: Also known as conditional outliers, these anomalies differ greatly from other data within the same dataset, based on a specific context or unique condition.
    An example of a contextual outlier is a series of temperature readings collected over time that follow a predictable pattern: a machine alternately heats up and cools down in a cyclical manner as it operates. If the machine cools when it is expected to heat, this represents an anomaly, even though the unexpected cooling falls within the expected overall temperature range.
  3. Collective outliers: A group of data points that, taken together, deviate significantly from the rest of the dataset. Individually they are not necessarily outliers, but combined with another time-series dataset they collectively act like outliers.
    Collective outliers can be more challenging to spot, since they require comparing similar points across different datasets. An example of where a collective outlier could be found is the comparison of power supply data for different machines. If the power for the whole plant were to falter, a dip would be seen across all machine data at the same time. This finding would clearly indicate an issue with the central power supply, as opposed to a specific machine.
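
As a rough illustration of the point-anomaly case referenced above, the sketch below flags readings whose rolling z-score drifts far from recent behavior. The window size and threshold are illustrative assumptions, not tuned values.

```python
# A minimal sketch of flagging global (point) outliers in a time series with a
# rolling z-score. Window size and threshold are illustrative, not recommended.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
values = rng.normal(loc=70.0, scale=0.5, size=200)
values[150] = 76.0  # inject a point that deviates sharply from normal behaviour

series = pd.Series(
    values,
    index=pd.date_range("2023-11-13", periods=len(values), freq="min"),
)

rolling_mean = series.rolling(window=30, min_periods=10).mean()
rolling_std = series.rolling(window=30, min_periods=10).std()
z_scores = (series - rolling_mean) / rolling_std

outliers = series[z_scores.abs() > 4]  # points far outside recent behaviour
print(outliers)
```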

How does anomaly detection differ from SPC?

To put it simply, anomaly detection is a more advanced method of detecting potential issues in production data than SPC. But that doesn’t mean that it renders SPC obsolete. Both methods can be applied to manufacturing data either together or separately to help control quality.

The main difference lies in the limits within which each method operates. SPC is based on pre-determined, fixed control limits, while anomaly detection is a more fluid and flexible method of discovering inconsistencies in manufacturing data.
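
One simple way to see the contrast: an SPC-style check applies fixed control limits computed once from historical data, as in the sketch below (the figures are illustrative), whereas an anomaly detection model, such as the IsolationForest sketch earlier, learns what normal looks like from the data itself and can adapt as behavior changes.

```python
# A minimal sketch of an SPC-style check with fixed +/- 3-sigma control limits.
# The baseline and new readings are illustrative values only.
import numpy as np

rng = np.random.default_rng(3)
baseline = rng.normal(loc=10.0, scale=0.2, size=500)   # in-control historical data
new_readings = np.array([10.1, 10.4, 9.6, 11.2, 10.0])

# Fixed control limits derived once from the baseline.
center = baseline.mean()
sigma = baseline.std()
ucl, lcl = center + 3 * sigma, center - 3 * sigma
spc_flags = (new_readings > ucl) | (new_readings < lcl)

print("Control limits:", round(lcl, 2), "to", round(ucl, 2))
print("Readings flagged by SPC:", new_readings[spc_flags])
```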

We’ve written another post that gets into much more detail about the difference between anomaly detection and SPC.

Ready to detect anomalies in your manufacturing data?

By leveraging the anomaly detection, SPC, and machine learning capabilities of LinePulse, we can help you monitor and analyze your production data in real time, detect anomalies in your manufacturing data, and uncover opportunities to improve your part quality.

Curious how LinePulse can help solve problems on your shop floor? Get in touch. 
