Cloud Data Platform for Shopfloor Management

How Cloud Data Platforms improve Shopfloor Management

In the era of Industry 4.0, linking data from MES (Manufacturing Execution System) with that from ERP, CRM and PLM systems plays an important role in creating integrated monitoring and control of business processes.

ERP (Enterprise Resource Planning) systems contain information about finance, supplier management, human resources and other operational processes, while CRM (Customer Relationship Management) systems provide data about customer relationships, marketing and sales activities. PLM (Product Lifecycle Management) systems contain information about products, development, design and engineering.

By linking this data with the data from MES, companies gain a more complete picture of their business operations and thus better monitoring and control of their business processes. Of central importance here are the OEE (Overall Equipment Effectiveness) KPIs from production as well as the key figures from financial controlling, such as contribution margins. Fusing this data in a central platform enables seamless analysis for optimizing processes and increasing business efficiency in the world of Industry 4.0, using methods from business intelligence, process mining and data science. By linking this data, companies also significantly increase their enterprise value thanks to the data and information transparency gained.

Cloud Data Platform for shopfloor management and data sources such as MES, ERP, PLM and machine data. Copyright by DATANOMIQ.

If the data sources are additionally expanded to include the machines in production and logistics, much deeper analyses for error detection and prevention, as well as for optimizing the factory in its dynamic environment, become possible. The machine sensor data can be monitored directly in real time via corresponding data pipelines (real-time stream analytics) or aggregated into an overall picture of key figures (reporting). The consumers of this data are not only people, but also individual machines or entire production plants that can react to it.
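To make the streaming path more concrete, here is a minimal sketch of a rolling aggregation over incoming sensor readings; the machine IDs, field names and threshold are purely illustrative assumptions, and in a real pipeline the readings would arrive from a message broker such as Kafka or MQTT:

from collections import deque
from statistics import mean

WINDOW_SIZE = 60        # keep the last 60 readings per machine (assumed window)
TEMP_THRESHOLD = 85.0   # assumed alert threshold in °C

windows = {}            # machine_id -> deque of recent temperature readings

def process_reading(machine_id, temperature):
    window = windows.setdefault(machine_id, deque(maxlen=WINDOW_SIZE))
    window.append(temperature)
    # Raise an alert once the rolling mean of a full window exceeds the threshold
    if len(window) == WINDOW_SIZE and mean(window) > TEMP_THRESHOLD:
        print(f"ALERT: rolling mean temperature on {machine_id} is {mean(window):.1f} °C")

# Simulated events for illustration
for event in [("M1", 84.0), ("M1", 86.5), ("M2", 70.2)]:
    process_reading(*event)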

Such a central data architecture can feed dozens of analytical applications with data, for example:

OEE key figures for shopfloor reporting (see the sketch after this list)
Process mining (e.g. material flow analysis) for manufacturing and supply chain
Detection of anomalies on the shopfloor or on individual machines
Predictive maintenance for individual machines or entire production lines
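To illustrate the first item, here is a minimal sketch of how the OEE key figure (Availability × Performance × Quality) could be computed from aggregated shopfloor data; the column names and example values are assumptions for illustration, not a reference to any specific MES schema:

import pandas as pd

# Hypothetical aggregated shopfloor data per machine and shift (illustrative values)
shifts = pd.DataFrame({
    "machine":         ["M1", "M2"],
    "planned_minutes": [480, 480],   # planned production time per shift
    "runtime_minutes": [432, 401],   # actual runtime (planned time minus downtime)
    "ideal_cycle_sec": [30, 45],     # ideal cycle time per part in seconds
    "parts_produced":  [820, 495],
    "parts_good":      [795, 488],
})

# OEE = Availability x Performance x Quality
availability = shifts["runtime_minutes"] / shifts["planned_minutes"]
performance = (shifts["ideal_cycle_sec"] * shifts["parts_produced"]) / (shifts["runtime_minutes"] * 60)
quality = shifts["parts_good"] / shifts["parts_produced"]

shifts["oee"] = availability * performance * quality
print(shifts[["machine", "oee"]])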

This solution scales fully automatically in terms of both performance and cost. It looks beyond individual problems, since it offers a universal and flexible scope for action: in effect, a “god mode” for management, enabling a drill-down from a specific client project to insights into the individual machines involved in that project.

Are you interested in scalable data architectures for your shopfloor management? Or would you like to discuss a specific problem with us? Or maybe you are interested in an individual data strategy? Then get in touch with me! 🙂

Predictive maintenance in Semiconductor Industry: Part 1

The manufacturing process in the semiconductor industry is highly complex and is normally under constant observation via the monitoring of signals coming from several sensors. It is therefore important for the organization to detect faults as quickly as possible. Traditional statistics-based techniques exist for this, but modern semiconductor plants produce far more data than these traditional methods can handle.

For this article, we will be using the SECOM dataset, which is available from the UCI Machine Learning Repository (the download URLs are used directly in the code below). A lot of work has already been done on this dataset by different authors, and there are also some articles available online. In this article, we will focus on problem definition, data understanding, and data cleaning.

This article is only the first of three parts. In this first part, we will discuss the business problem at hand and clean the dataset; in the second part we will do feature engineering, and in the last part we will build some models and evaluate them.

Problem definition

The data collected by these sensors not only contains relevant information but also a lot of noise. The dataset contains readings from 590 sensors (features). Among the 1567 examples, there are only 104 fail cases, which means that our target variable is imbalanced. We will look at the distribution of the dataset when we get to the Python code.

NOTE: For a detailed description of this case study, I highly recommend reading the following research papers:

  • Kerdprasop, K., & Kerdprasop, N. A Data Mining Approach to Automate Fault Detection Model Development in the Semiconductor Manufacturing Process.
  • Munirathinam, S., & Ramadoss, B. Predictive Models for Equipment Fault Detection in the Semiconductor Manufacturing Process.

Data Understanding and Preparation

Let’s start exploring the dataset now. The first step as always is to import the required libraries.

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt  # needed to display the plots below

There are several ways to import the dataset: you can download it and then import it from your working directory, but I will import it directly using the link. There are two datasets: one contains the readings from the sensors and the other contains our target variable and a timestamp.

# Load the sensor readings: 590 space-separated feature columns, "NaN" marks missing values
url = "https://archive.ics.uci.edu/ml/machine-learning-databases/secom/secom.data"
names = ["feature" + str(x) for x in range(1, 591)]
secom_var = pd.read_csv(url, sep=" ", names=names, na_values="NaN")


# Load the labels: pass/fail classification (-1 = pass, 1 = fail) and a timestamp
url_l = "https://archive.ics.uci.edu/ml/machine-learning-databases/secom/secom_labels.data"
secom_labels = pd.read_csv(url_l, sep=" ", names=["classification", "date"], parse_dates=["date"], na_values="NaN")

The first step before doing the analysis is to merge the two datasets; we will use the pandas library to merge them in just one line of code.

# Data cleaning
# 1. Combine the two datasets
secom_merged = pd.merge(secom_var, secom_labels, left_index=True, right_index=True)

Now let’s check the distribution of the target variable.

secom_merged.classification.value_counts().plot(kind='bar')

Figure 1: Distribution of Target Variable

From Figure 1 it can be observed that the target variable is imbalanced, and it is highly recommended to deal with this problem before the model-building phase to avoid a biased model. XGBoost is one of the models that can handle imbalanced classes, but one needs to spend a lot of time tuning the hyper-parameters to get the best out of the model.
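One common adjustment for this imbalance is to weight the minority (fail) class in the model. As a small sketch of the idea (the model building itself follows in the third part), the class ratio computed below is the kind of value that could later be passed to XGBoost's scale_pos_weight parameter:

# Ratio of majority class (pass, -1) to minority class (fail, 1)
class_counts = secom_merged.classification.value_counts()
imbalance_ratio = class_counts.loc[-1] / class_counts.loc[1]
print(imbalance_ratio)   # roughly (1567 - 104) / 104, i.e. about 14

# In XGBoost this ratio is typically passed as scale_pos_weight, e.g.
# xgboost.XGBClassifier(scale_pos_weight=imbalance_ratio)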

The dataset at hand contains a lot of null values, and the next step is to analyse these null values and remove the columns whose share of nulls exceeds a certain percentage. This cut-off is calculated as the 95th percentile of the per-column null percentages.

# 2. Analyzing nulls
secom_merged.isnull().sum().sum()                              # total number of missing values
secom_nulls = secom_merged.isnull().sum()/len(secom_merged)    # fraction of missing values per column
secom_nulls.describe()
secom_nulls.hist()

Figure 2: Missing percentage in each column

Now we calculate the 95th percentile of the null values.

x = secom_nulls.quantile(0.95)                                    # 95th percentile of the missing fractions
secom_rmNa = secom_merged[secom_merged.columns[secom_nulls < x]]  # keep only columns below this cut-off

Figure 3: Missing percentage after removing columns with more than 45% NA

From Figure 3 it is visible that there are still missing values in the dataset; these can be dealt with using many imputation methods. The most common approach is to impute these values with the mean, median or mode. A few more sophisticated techniques, such as K-nearest neighbours and interpolation, also exist. We will apply the interpolation technique to our dataset.

secom_complete = secom_rmNa.interpolate()
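For comparison, the remaining gaps could also be filled with the simpler or more sophisticated imputers mentioned above. The following is only a sketch of two such alternatives (the n_neighbors value is an arbitrary choice); we stick with interpolation for the rest of this series:

# Alternative 1: impute remaining missing values with the column mean
secom_mean_imputed = secom_rmNa.fillna(secom_rmNa.mean(numeric_only=True))

# Alternative 2: K-nearest-neighbour imputation with scikit-learn on the sensor columns only
from sklearn.impute import KNNImputer

sensor_cols = secom_rmNa.columns.drop(["classification", "date"])
imputer = KNNImputer(n_neighbors=5)
secom_knn = secom_rmNa.copy()
secom_knn[sensor_cols] = imputer.fit_transform(secom_rmNa[sensor_cols])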

To prepare our dataset for analysis, we should remove some more unwanted columns, such as columns with near-zero variance. For this we can calculate the number of unique values in each column; if a column contains only one unique value, we can delete it, as it holds no information.

df = secom_complete.loc[:, secom_complete.apply(pd.Series.nunique) != 1]  # drop constant columns

## Let's check the shape of the df
df.shape
# (1567, 444)
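The filter above only removes strictly constant columns. A more general near-zero-variance filter drops columns whose variance falls below a small threshold; here is a sketch using scikit-learn's VarianceThreshold, where the threshold value of 0.01 is an arbitrary choice for illustration:

from sklearn.feature_selection import VarianceThreshold

# Apply the variance filter to the numeric sensor columns only
sensor_cols = df.columns.drop(["classification", "date"])
selector = VarianceThreshold(threshold=0.01)   # assumed cut-off, tune as needed
selector.fit(df[sensor_cols])

kept = sensor_cols[selector.get_support()]
df_reduced = df[list(kept) + ["classification", "date"]]
df_reduced.shape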

We have applied a few data cleaning techniques and reduced the number of features from 590 to 444. In the next article we will apply some feature engineering techniques, address problems like the curse of dimensionality, and also try to balance the target variable.

Stay tuned!