Practice and comparative analysis of various data filling algorithms in time series scenarios

In time series modeling tasks, models are often sensitive to missing data; a large amount of missing data can even render the trained model completely unusable. I have written about data filling in a previous blog post, which you can read if you are interested:

"Python implements missing data filling based on sliding average idea"

The motivation for this article is a time series prediction task in an actual project, which required preparing, extracting, and preprocessing the raw data in advance. Here I integrate and implement several common data-filling algorithms so that they can be reused directly in projects.

The instance data looks like this:

01/01/2011,12,6.97,98,40.5,6.36,2.28,0.09,0.17
01/02/2011,11.7,6.97,98,40.8,6.4,2.06,0.09,0.21
01/03/2011,11.4,6.97,93,53.4,6.64,1.81,0.08,0.15
01/04/2011,9.9,6.96,95,33.5,6.39,2.38,0.09,0.2
01/05/2011,9.2,7.01,99,32.2,6.5,2.23,0.08,0.22
01/06/2011,9.9,6.97,98,32.9,6.74,1.74,0.07,0.13
01/07/2011,9.2,6.93,102,22.4,7.02,1.69,0.09,0.19
01/08/2011,9.6,6.97,104,35.1,7.26,1.79,0.07,0.27
01/09/2011,11.9,6.92,103,25.5,6.13,1.61,0.08,0.18
01/10/2011,12.3,6.96,102,30.9,6.66,1.9,0.06,0.08
01/11/2011,10.7,6.99,97,36.1,7.15,3.73,0.09,0.12
01/12/2011,9.3,6.95,97,34.5,7.66,2.33,0.08,0.13
01/13/2011,9.2,6.98,95,42,8.01,3.05,0.1,0.14
01/14/2011,10.5,6.95,98,30.9,7.01,3.05,0.07,0.13
01/15/2011,11.2,6.94,98,27.2,6.61,3.41,0.08,0.13
01/16/2011,9.1,6.93,93,39.6,7.51,3.4,0.12,0.23
01/17/2011,9.1,6.92,96,31.9,7.07,2.97,0.08,0.22
01/18/2011,10,6.93,95,37.6,7.55,2.64,0.08,0.11
01/19/2011,10.8,6.9,99,33.5,7.4,2.96,0.09,0.13
01/20/2011,10.6,6.86,100,31.8,7,3,0.08,0.12
01/21/2011,9.2,6.8,99,32.6,7.32,2.92,0.07,0.07
01/22/2011,9.4,6.76,99,35.8,7.44,3.62,0.12,0.14
01/23/2011,9.9,6.7,97,35.6,7.19,3.35,0.09,0.15
01/24/2011,10,6.66,99,35.9,7.16,3.18,0.07,0.08
01/25/2011,9.7,6.61,98,34.8,7.31,3.27,0.07,0.12
01/26/2011,9.5,6.54,101,33.5,7.08,3.56,0.08,0.21
01/27/2011,9.9,6.51,102,34.8,6.55,3.54,0.08,0.15
01/28/2011,10.4,6.47,98,20.5,6.46,3.38,0.07,0.09
01/29/2011,9.3,6.52,101,29.8,7.39,3.74,0.08,0.13
01/30/2011,8.2,6.53,102,33.8,7.83,3.51,0.08,0.13
01/31/2011,8.7,6.54,101,27.8,7.65,3.3,0.07,0.15
02/01/2011,9.8,6.58,102,31.4,7.02,3.25,0.07,0.11
02/02/2011,9.8,6.63,102,32.5,7.37,3.93,0.09,0.19
02/03/2011,9.9,6.69,102,32,7.27,3.8,0.08,0.14
02/04/2011,11.9,6.72,99,26.5,6.61,3.53,0.07,0.09
02/05/2011,13.7,6.75,97,24.9,6.31,3.37,0.07,0.09
02/06/2011,15.2,6.77,97,26.2,6.04,4.03,0.09,0.14
02/07/2011,16.5,6.76,92,23.2,5.82,3.61,0.07,0.1
02/08/2011,18.3,6.7,89,21.4,4.93,3.93,0.09,0.22
02/09/2011,18.5,6.72,84,17.5,5.33,3.33,0.07,0.1
02/10/2011,18,6.7,85,21.4,5.31,3.71,0.07,0.13
02/11/2011,15,6.72,88,22.1,6.08,3.49,0.06,0.06
02/12/2011,12.8,6.66,84,23.9,7.15,3.52,0.07,0.13
02/13/2011,12.2,6.61,81,26.9,7.39,3.5,0.07,0.11
02/14/2011,10.7,6.57,83,23.8,7.62,3.57,0.08,0.14
02/15/2011,9.5,6.53,84,27.1,7.88,3.53,0.08,0.12
02/16/2011,9.1,6.51,87,35.2,8.35,3.64,0.09,0.17
02/17/2011,9.8,6.46,94,31,7.87,3.38,0.08,0.15
02/18/2011,10.4,6.45,94,35.4,8.13,3.63,0.1,0.22
02/19/2011,10.6,6.39,86,33.5,7.97,3.5,0.1,0.2
02/20/2011,11.3,6.38,88,37,8.41,3.31,0.08,0.11
02/21/2011,12.5,6.37,89,32.1,7.24,3.34,0.08,0.11
02/22/2011,13.2,6.39,87,37.5,8.09,3.93,0.12,0.12
02/23/2011,14.6,6.4,89,25.6,6.87,3.71,0.08,0.14
02/24/2011,15,6.38,87,19.2,6.19,3.6,0.07,0.12
02/25/2011,16.2,6.36,86,19.5,5.57,3.54,0.07,0.13
02/26/2011,16.4,5.61,79,16.8,4.19,3.68,0.07,0.17
02/27/2011,8.9,2.54,29,15,2.42,3.29,0.07,0.09
02/28/2011,23,6.29,86,26.4,5.45,3.85,0.09,0.12
03/01/2011,22.4,6.43,92,27.4,5.71,1.78,0.07,0.13
03/02/2011,17.5,6.33,89,30.2,6.68,2.2,0.07,0.11
03/03/2011,15.4,6.36,91,29.8,7.01,1.97,0.07,0.07
03/04/2011,13.6,6.31,89,29,7.48,1.81,0.07,0.08
03/05/2011,13.2,6.3,92,25.9,6.89,2.54,0.07,0.1
03/06/2011,13.9,6.3,99,29.2,7.16,1.83,0.06,0.08
03/07/2011,14.4,6.27,98,26,7.05,1.62,0.05,0.07
03/08/2011,14.2,6.25,100,30.1,7.21,1.47,0.06,
03/09/2011,14.6,6.2,102,29.5,7.02,1.46,0.06,
03/10/2011,15.2,6.16,105,24.1,6.69,1.57,0.05,0.28
03/11/2011,15.2,6.13,107,32.5,6.78,1.74,0.07,0.43
03/12/2011,14.4,6.1,105,28.1,7.24,1.64,0.06,0.09
03/13/2011,15.2,6.05,102,27,6.97,1.73,0.06,0.09
03/14/2011,18,6,102,26.5,6.34,1.92,0.06,0.1
03/15/2011,19.5,5.99,99,26.4,6.14,2.17,0.06,0.1
03/16/2011,15.1,6.15,111,32.5,7.01,2.83,0.08,0.13
03/17/2011,14.6,6.33,118,33.2,7.25,2.44,0.07,0.06
03/18/2011,14.6,6.38,122,30.1,7.34,2.88,0.08,0.11
03/19/2011,13.5,6.35,124,32.4,7.66,2.69,0.09,
03/20/2011,15.6,6.26,108,53.9,6.79,2.9,0.14,
03/21/2011,20.8,6.17,95,44.2,5.2,2.31,0.1,
03/22/2011,19.9,6.23,99,47.8,6.87,3.09,0.12,0.11
03/23/2011,15.3,6.31,112,48.2,8.74,2.47,0.11,0.07
03/24/2011,14.6,6.22,114,43.5,8.95,2.68,0.13,0.13
03/25/2011,15.8,6.2,113,32.9,8.6,2.63,0.12,0.08
03/26/2011,15.7,6.16,119,35.6,8.97,2.51,0.1,0.06
03/27/2011,12.7,5.35,108,31.8,8.95,2.21,0.09,0.05
03/28/2011,14.2,6.05,126,25.7,6.67,2.23,0.06,
03/29/2011,,,,,,,,
03/30/2011,,,,,,,,
03/31/2011,,,,,,,,
04/01/2011,0.25,0.25,0.25,0.25,0.25,,0.08,0.51
04/02/2011,7.8,6.36,39,8.4,3.83,0.25,0.03,0.19
04/03/2011,17.2,7.56,147,77.8,8.7,1.13,0.08,0.2
04/04/2011,11.9,7.29,148,56.5,6.06,1.99,0.07,0.28
04/05/2011,14.9,7.12,181,96.6,6.15,2.38,0.08,0.44
04/06/2011,15.5,7.12,189,75.3,6.07,2.43,0.08,0.45
04/07/2011,16.3,7.12,199,13.8,5.53,2.46,0.07,0.38
04/08/2011,16.4,7.19,192,124.7,4.61,2.37,0.08,0.17
04/09/2011,16.3,7.1,198,286.6,5.19,2.62,0.07,0.17
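In the snippets that follow, data refers to this data set loaded into memory. A minimal loading sketch (the file name sample.csv and the placeholder column names are my assumptions; the original file has no header row):

import pandas as pd

# First column is the date index; the remaining 8 columns are the features
data = pd.read_csv("sample.csv", header=None, index_col=0, parse_dates=[0])
data.columns = ["f%d" % k for k in range(1, 9)]
print(data.isnull().sum())      # per-column missing-value counts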

It can be seen that there are obvious missing values in the data set: several rows are missing their trailing values (e.g., 03/08-03/09 and 03/19-03/21), and the rows for 03/29-03/31 are completely empty.

First, let's look at the most basic filling method, zero-value filling. The core implementation is as follows:

SI = SimpleImputer(missing_values=np.nan, strategy="constant",fill_value=0) 
result = SI.fit_transform(data)

This method is of course also the least recommended, since filling with zeros distorts the level and distribution of the series.

Next let’s look at the mean filling method:

SI = SimpleImputer(missing_values=np.nan, strategy='mean') 
result = SI.fit_transform(data)

The above two filling processes are implemented with the SimpleImputer class built into the sklearn module. Its parameters are described below:

class sklearn.impute.SimpleImputer(*, missing_values=nan, strategy='mean', fill_value=None, verbose=0, copy=True, add_indicator=False)

Parameter meanings:
missing_values: int, float, str, np.nan (default) or None. The placeholder that marks a missing value.
strategy: the filling strategy, with four options: 'mean' (default), 'median', 'most_frequent', 'constant'. 'mean' fills a column's missing values with that column's mean, 'median' with its median, 'most_frequent' with its mode, and 'constant' with a custom value, which must be supplied through fill_value.
fill_value: str or numeric, default None. When strategy == "constant", fill_value replaces all occurrences of missing_values. If left as None, missing values in numeric data are replaced with 0, and missing values in string or object data with the string "missing_value".
verbose: int, default 0. Controls the verbosity of the imputer.
copy: boolean, default True, meaning a copy of the data is processed; set False to modify the data in place.
add_indicator: boolean, default False. If True, indicator columns of 0s and 1s are appended to the output, where 0 means the position was observed and 1 means it was missing; see the small demo below.
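For example, a minimal demonstration of add_indicator on a toy array (not the project data):

import numpy as np
from sklearn.impute import SimpleImputer

toy = np.array([[1.0, np.nan],
                [2.0, 4.0],
                [np.nan, 6.0]])
SI = SimpleImputer(strategy='mean', add_indicator=True)
print(SI.fit_transform(toy))   # two extra 0/1 columns flag which inputs were missing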
Following the same pattern, you can also build filling methods based on the median, the mode, and a custom constant, as shown below:

# Median
SI = SimpleImputer(missing_values=np.nan, strategy='median') 
result = SI.fit_transform(data)


# Mode
SI = SimpleImputer(missing_values=np.nan, strategy='most_frequent') 
result = SI.fit_transform(data)


# Custom constant value (-1 here as an example); without fill_value, numeric data fall back to 0
SI = SimpleImputer(missing_values=np.nan, strategy='constant', fill_value=-1)
result = SI.fit_transform(data)

In addition to these statistics-based filling methods built into sklearn, you can also fill based on a model. The essential idea is to process the columns in a loop, starting with the column that is easiest to fill (the one with the fewest missing values), training a regressor on the remaining columns, and predicting the missing entries. Here is a basic implementation, assuming X is a pandas DataFrame with a default integer index and using a random forest as an example regressor:

import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.ensemble import RandomForestRegressor

model = RandomForestRegressor(n_estimators=100)    # example regressor; any model with fit/predict works

# Process columns from fewest to most missing values (easiest to fill first)
sortInds = np.argsort(X.isnull().sum(axis=0)).values
for i in sortInds:
    fillc = X.iloc[:, i]                               # the column to fill this round
    if fillc.isnull().sum() == 0:
        continue
    others = X.iloc[:, np.arange(X.shape[1]) != i]     # all remaining columns as features
    # Temporarily zero-fill the feature matrix so the model can be trained
    dfs = SimpleImputer(missing_values=np.nan, strategy='constant',
                        fill_value=0).fit_transform(others)
    Ytrain = fillc[fillc.notnull()]                    # observed values: training targets
    Ytest = fillc[fillc.isnull()]                      # missing values: to be predicted
    Xtrain = dfs[Ytrain.index, :]
    Xtest = dfs[Ytest.index, :]
    model.fit(Xtrain, Ytrain)
    Ypredict = model.predict(Xtest)
    X.iloc[Ytest.index, i] = Ypredict                  # write the predictions back

Next is data filling based on the sliding average. The implementation is covered in detail in the earlier blog post mentioned above, so it is not expanded here. The sliding-average strategy comes in two variants, the plain average and the weighted average; the only difference is that the weighted variant assigns each neighboring point a weight before averaging.

Let’s compare the differences:

# Plain average: tmp is the half window width, i is the index of the missing point
one_index_list = list(range(i - tmp, i)) + list(range(i + 1, i + tmp + 1))
one_value = [new_col_list[h] for h in one_index_list]
one_value = [v for v in one_value if not math.isnan(v)]
new_col_list[i] = sum(one_value) / len(one_value)



# Weighted average: neighbors closer to i receive larger weights
one_index_list = list(range(i - tmp, i)) + list(range(i + 1, i + tmp + 1))
valid_inds = [h for h in one_index_list if not math.isnan(one_col_list[h])]
one_value = [one_col_list[h] for h in valid_inds]
weight_list = [abs(1 / (h - i)) for h in valid_inds]
one_w = weightGenerate(weight_list)   # normalises the weights so they sum to 1
new_col_list[i] = sum(one_value[j] * one_w[j] for j in range(len(one_w)))
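The weightGenerate helper comes from the earlier post and is not reproduced here; based on how it is used above, it presumably normalises the raw weights so that they sum to 1. A minimal hypothetical reconstruction:

def weightGenerate(weight_list):
    # Hypothetical reconstruction: rescale the raw distance-based weights
    # so that they sum to 1 while keeping their relative proportions
    total = sum(weight_list)
    return [w / total for w in weight_list]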

The last one is data filling based on Kalman filtering. Here I implement it mainly on top of the open-source pykalman module. It is quite simple, and there are many examples online that you can study if you are interested.
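For reference, a minimal sketch of how such a filler could look with pykalman, assuming a simple one-dimensional random-walk state model (the model choice and the kalman_fill name are mine, not from the original project):

import numpy as np
from pykalman import KalmanFilter

def kalman_fill(series):
    # Fill NaNs in a 1-D sequence with Kalman-smoothed estimates
    values = np.asarray(series, dtype=float)
    masked = np.ma.masked_invalid(values)     # pykalman treats masked entries as missing
    kf = KalmanFilter(initial_state_mean=0, n_dim_obs=1)
    kf = kf.em(masked, n_iter=5)              # learn the noise parameters by EM
    means, _ = kf.smooth(masked)              # smoothed state estimates for every step
    filled = values.copy()
    filled[np.isnan(values)] = means[np.isnan(values), 0]
    return filled

Each feature column can then be filled independently, e.g. kalman_fill(data["f1"]) with the column names assumed above.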

After completing the development of the different data-filling methods, we take the actual data as an example and compare the filling effects.

Our data set contains 8 feature dimensions. We run each of the filling algorithms above over the original data set and plot the results; the differences between the algorithms are quite obvious.

Because the amount of data is fairly large, the full plot is hard to read intuitively, so the data set is thinned out by a factor of 100 before plotting the comparison, as in the sketch below.
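A minimal plotting sketch for such a comparison (the results dict, its keys, and the column choice are my assumptions, not code from the project):

import matplotlib.pyplot as plt

step = 100                                   # thinning factor; 10 gives the denser view below
col = 0                                      # which of the 8 feature dimensions to plot
plt.figure(figsize=(12, 4))
for name, filled in results.items():         # e.g. {"zero": ..., "mean": ..., "kalman": ...}
    plt.plot(filled[::step, col], label=name)
plt.legend()
plt.title("Comparison of filling algorithms, every %d-th point" % step)
plt.show()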

At a factor of 100 the plotted data become very sparse, so next we increase the sampling density tenfold (keeping every 10th point instead of every 100th) and look at the comparative visualization of the filling algorithms again.

