[Tianchi] Heartbeat Signal Classification: Time-Series Features, Part 2

1 Preface

After adding the time-series features, the loss dropped noticeably compared with the earlier baseline.
However, the time-series features turned out to be less important than expected.
Possible reasons:
1) The time-series features generated by tsfresh are not the features we actually need.
2) The model may be overfitting, which hurts the final score.

2 Import packages and read the data

# Import packages
import pandas as pd
import numpy as np
import tsfresh as tsf
import pickle
from tsfresh import extract_features, select_features
from tsfresh.utilities.dataframe_functions import impute
import lightgbm as lgb

from sklearn.model_selection import StratifiedKFold,KFold
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import OneHotEncoder

import matplotlib.pyplot as plt
import time
import warnings
warnings.filterwarnings('ignore')
# Read the data
data_train = pd.read_csv("train.csv")
data_test_A = pd.read_csv("testA.csv")

print(data_train.shape)
print(data_test_A.shape)
(100000, 3)
(20000, 2)
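Each row stores the whole 205-point heartbeat as a single comma-separated string, which explains the column counts above: the three training columns are id, heartbeat_signals and label, while testA lacks the label. A quick, minimal peek at the raw format (a sketch; the printed string is truncated):

# Inspect the raw storage format: one comma-separated signal string per sample
print(data_train.columns.tolist())                    # ['id', 'heartbeat_signals', 'label']
print(data_train["heartbeat_signals"].iloc[0][:30])   # first characters of the signal string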

3 Data preprocessing

# Unpivot each heartbeat signal from one wide row into long format, adding a time-step column `time`
train_heartbeat_df = data_train["heartbeat_signals"].str.split(",", expand=True).stack()
train_heartbeat_df = train_heartbeat_df.reset_index()
train_heartbeat_df = train_heartbeat_df.set_index("level_0")
train_heartbeat_df.index.name = None
train_heartbeat_df.rename(columns={"level_1": "time", 0: "heartbeat_signals"}, inplace=True)
train_heartbeat_df["heartbeat_signals"] = train_heartbeat_df["heartbeat_signals"].astype(float)

train_heartbeat_df
time heartbeat_signals
0 0 0.991230
0 1 0.943533
0 2 0.764677
0 3 0.618571
0 4 0.379632
... ... ...
99999 200 0.000000
99999 201 0.000000
99999 202 0.000000
99999 203 0.000000
99999 204 0.000000

20500000 rows × 2 columns

4 Time-series feature processing with tsfresh

4.1 Feature extraction

Tsfresh (TimeSeries Fresh) is a third-party Python package that automatically computes a large number of features from time-series data. It also ships with methods for evaluating feature relevance and selecting features, so whether you are tackling a classification or a regression problem on time-series data, tsfresh is a solid choice for feature extraction.

  • Official documentation: Introduction — tsfresh 0.17.1.dev24+g860c4e1 documentation
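A full extraction computes hundreds of features over 20.5 million rows and can take a long time. For a quick dry run, tsfresh also offers small predefined feature sets; a minimal sketch using MinimalFCParameters, assuming the long-format id/time/heartbeat_signals frame assembled in the next cell:

from tsfresh.feature_extraction import MinimalFCParameters

# Compute only a handful of cheap features (mean, std, min, max, ...) per signal
quick_features = extract_features(data_train, column_id='id', column_sort='time',
                                  default_fc_parameters=MinimalFCParameters())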
# Join the unpivoted signals back onto the training data, keeping the label column separately
data_train_label = data_train["label"]
data_train = data_train.drop("label", axis=1)
data_train = data_train.drop("heartbeat_signals", axis=1)
data_train = data_train.join(train_heartbeat_df)

data_train
id time heartbeat_signals
0 0 0 0.991230
0 0 1 0.943533
0 0 2 0.764677
0 0 3 0.618571
0 0 4 0.379632
... ... ... ...
99999 99999 200 0.000000
99999 99999 201 0.000000
99999 99999 202 0.000000
99999 99999 203 0.000000
99999 99999 204 0.000000

20500000 rows × 3 columns

data_train[data_train["id"]==1]
id time heartbeat_signals
1 1 0 0.971482
1 1 1 0.928969
1 1 2 0.572933
1 1 3 0.178457
1 1 4 0.122962
... ... ... ...
1 1 200 0.000000
1 1 201 0.000000
1 1 202 0.000000
1 1 203 0.000000
1 1 204 0.000000

205 rows × 3 columns
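The cells in section 4.2 slice ids 100000–119999 out of the pickle as test features, so the test set has to go through the same unpivot-and-join before extraction. A sketch of the symmetric transform, assuming the ids in testA.csv run from 100000 to 119999 as in this competition:

# Unpivot the test signals exactly like the training ones
test_heartbeat_df = data_test_A["heartbeat_signals"].str.split(",", expand=True).stack()
test_heartbeat_df = test_heartbeat_df.reset_index()
test_heartbeat_df = test_heartbeat_df.set_index("level_0")
test_heartbeat_df.index.name = None
test_heartbeat_df.rename(columns={"level_1": "time", 0: "heartbeat_signals"}, inplace=True)
test_heartbeat_df["heartbeat_signals"] = test_heartbeat_df["heartbeat_signals"].astype(float)

# Join, then stack under the training rows so one extract_features call covers both
data_test = data_test_A.drop("heartbeat_signals", axis=1).join(test_heartbeat_df)
data_all = pd.concat([data_train, data_test], ignore_index=True)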

4.2 Extracting the time-series features

  • train_features contains 787 common time-series features computed from heartbeat_signals (the official documentation explains each of them),
  • but some of them can be NaN (the data does not support the corresponding computation); those are removed with tsfresh's impute, shown after the extraction code below:
from tsfresh import extract_features

# Feature extraction. For all_data.pkl to also contain the test rows
# (ids 100000-119999, sliced off below), extraction must cover the concatenated
# train+test frame (data_all from the sketch above) rather than data_train alone.
train_features = extract_features(data_train, column_id='id', column_sort='time')

# Save the extracted features
train_features.to_pickle('./all_data.pkl')
# Load them back
all_features = pd.read_pickle('./all_data.pkl')
display(train_features.shape)
train_features = all_features[:100000]
test_features = all_features[100000:]
data_train_label = pd.read_csv("train.csv")['label']
(100000, 787)
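The NaN handling promised above uses tsfresh's impute (already imported in section 2): it replaces NaN with the column median, +inf with the column maximum, and -inf with the column minimum, in place:

from tsfresh.utilities.dataframe_functions import impute

# Remove non-finite values so that select_features can be applied downstream
impute(train_features)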

4.3 Feature selection

Next, features are selected according to their relevance to the response variable. This happens in two steps:

  • first, the relevance of each individual feature to the response variable is computed,
  • then the Benjamini-Yekutieli procedure decides which of those features are kept.
from tsfresh import select_features

# Select features by their relevance to the label
train_features_filtered = select_features(train_features, data_train_label)

train_features_filtered
       heartbeat_signals__sum_values  ...  heartbeat_signals__fft_coefficient__attr_"real"__coeff_83
0                          38.927945  ...                                                   0.473568
1                          19.445634  ...                                                   0.297325
2                          21.192974  ...                                                   0.383754
3                          42.113066  ...                                                   0.494024
4                          69.756786  ...                                                   0.056867
...                              ...  ...                                                        ...
99995                      63.323449  ...                                                   0.133969
99996                      69.657534  ...                                                   0.539236
99997                      40.897057  ...                                                   0.773985
99998                      42.333303  ...                                                   0.340727
99999                      53.290117  ...                                                  -0.053993

100000 rows × 707 columns
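Of the 787 extracted features, 707 survive the Benjamini-Yekutieli procedure at tsfresh's default expected false-discovery rate of 0.05. For a stricter filter, select_features exposes this through its fdr_level parameter; a sketch:

# A lower fdr_level keeps fewer, more strongly associated features
train_features_strict = select_features(train_features, data_train_label, fdr_level=0.01)
print(train_features_strict.shape)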

test_features[list(train_features_filtered.columns)]
        heartbeat_signals__sum_values  ...  heartbeat_signals__fft_coefficient__attr_"real"__coeff_83
100000                      19.229863  ...                                                   0.355992
100001                      84.298932  ...                                                   0.077530
100002                      47.789921  ...                                                   0.454957
100003                      47.069011  ...                                                   0.662320
100004                      24.899397  ...                                                   0.511133
...                               ...  ...                                                        ...
119995                      43.175130  ...                                                   0.268471
119996                      31.030782  ...                                                   0.536087
119997                      31.648623  ...                                                   0.370047
119998                      19.305442  ...                                                   0.258394
119999                      35.204569  ...                                                   0.540855

20000 rows × 707 columns

5 Model training

x_train = train_features_filtered
y_train = data_train_label.astype(int)
x_test = test_features[list(train_features_filtered.columns)]
# Rename columns to integers: the tsfresh names contain characters
# (quotes, commas) that LightGBM does not accept in feature names
new_col = list(np.arange(0, x_train.shape[1]))
x_train.columns = new_col
new_col = list(np.arange(0, x_test.shape[1]))
x_test.columns = new_col
# Evaluation metric: total absolute error between the predicted
# probability matrix and the one-hot encoded true labels
def abs_sum(y_pre, y_tru):
    y_pre = np.array(y_pre)
    y_tru = np.array(y_tru)
    loss = sum(sum(abs(y_pre - y_tru)))
    return loss
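To make the metric concrete, a tiny worked example with made-up numbers: for two samples and four classes, abs_sum adds up the absolute element-wise differences between the predicted probabilities and the one-hot labels.

# Worked example of abs_sum (illustrative numbers only)
y_tru_demo = np.array([[1, 0, 0, 0],
                       [0, 1, 0, 0]])
y_pre_demo = np.array([[0.90, 0.05, 0.03, 0.02],
                       [0.10, 0.80, 0.05, 0.05]])
# row 1: 0.10+0.05+0.03+0.02 = 0.20;  row 2: 0.10+0.20+0.05+0.05 = 0.40
print(abs_sum(y_pre_demo, y_tru_demo))   # 0.6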
# Model training
def cv_model(clf, train_x, train_y, test_x, clf_name):
    folds = 5
    seed = 2021
    # shuffle=True shuffles the rows before splitting (the default is False)
    kf = KFold(n_splits=folds, shuffle=True, random_state=seed)  # 5-fold cross-validation
    # Accumulator for the test-set probability matrix (initialised to zeros)
    test = np.zeros((test_x.shape[0], 4))

    # Score of each fold
    cv_scores = []

    # With the default sparse=True the encoder returns a sparse matrix that usually
    # needs .toarray(); sparse=False returns a dense array that can be used directly.
    onehot_encoder = OneHotEncoder(sparse=False)

    # enumerate pairs every item with its index, e.g.:
    """
    list1 = ["this", "is", "a", "test"]
    for index, item in enumerate(list1):
        print(index, item)
    >>>
    0 this
    1 is
    2 a
    3 test
    """
    # 5-fold cross-validation: i is the fold (model) number, train_index and
    # valid_index are the row indices of that fold's training and validation parts
    for i, (train_index, valid_index) in enumerate(kf.split(train_x, train_y)):
        print('************************************ {} ************************************'.format(str(i+1)))
        trn_x, trn_y, val_x, val_y = train_x.iloc[train_index], train_y[train_index], train_x.iloc[valid_index], train_y[valid_index]
        
        # clf_name == "lgb" selects the LightGBM model with the parameters below
        if clf_name == "lgb":
            # Training and validation sets in LightGBM's Dataset format
            train_matrix = clf.Dataset(trn_x, label=trn_y)
            valid_matrix = clf.Dataset(val_x, label=val_y)

            # LightGBM parameters
            params = {
                'boosting_type': 'gbdt',
                'objective': 'multiclass',
                'num_class': 4,
                'num_leaves': 2 ** 5,
                'feature_fraction': 0.8,
                'bagging_fraction': 0.8,
                'bagging_freq': 4,
                'learning_rate': 0.1,
                'seed': seed,
                'nthread': 28,
                'n_jobs': 24,
                'verbose': -1,
            }

            # Train the model:
            # num_boost_round       -- maximum number of boosting iterations
            # verbose_eval          -- log the validation score every 100 iterations
            # early_stopping_rounds -- stop if the score has not improved for 200 rounds
            model = clf.train(params, 
                      train_set=train_matrix, 
                      valid_sets=valid_matrix, 
                      num_boost_round=2000, 
                      verbose_eval=100, 
                      early_stopping_rounds=200)

            # Predict on the validation fold and on the test set
            val_pred = model.predict(val_x, num_iteration=model.best_iteration)
            test_pred = model.predict(test_x, num_iteration=model.best_iteration) 
            
        # One-hot encode the validation labels so they match the shape of the
        # predicted probability matrix
        val_y = np.array(val_y).reshape(-1, 1)
        val_y = onehot_encoder.fit_transform(val_y)
        print('Predicted probability matrix:')

        # Probabilities predicted for the test set by this fold's model
        print(test_pred)
        test += test_pred
        # Competition metric: sum of absolute errors
        score = abs_sum(val_y, val_pred)
        cv_scores.append(score)
        print(cv_scores)
    print("%s_score_list:" % clf_name, cv_scores)
    print("%s_score_mean:" % clf_name, np.mean(cv_scores))
    print("%s_score_std:" % clf_name, np.std(cv_scores))
    # Average the accumulated test probabilities over the number of folds
    test = test / kf.n_splits

    return test
# Model with the LightGBM framework (GBDT-based), which is fast to train
def lgb_model(x_train, y_train, x_test):
    lgb_test = cv_model(lgb, x_train, y_train, x_test, "lgb")
    return lgb_test
lgb_test = lgb_model(x_train, y_train, x_test)
************************************ 1 ************************************
Training until validation scores don't improve for 200 rounds
[100]	valid_0's multi_logloss: 0.0404083
[200]	valid_0's multi_logloss: 0.0420551
[300]	valid_0's multi_logloss: 0.0489518
Early stopping, best iteration is:
[123]	valid_0's multi_logloss: 0.0399632
Predicted probability matrix:
[[9.99698070e-01 2.70039109e-04 2.69411380e-05 4.94999289e-06]
 [1.21746804e-05 5.85496993e-05 9.99927738e-01 1.53724157e-06]
 [9.83607048e-07 9.53088446e-06 5.78380704e-06 9.99983702e-01]
 ...
 [1.28588929e-01 1.05085229e-04 8.71214661e-01 9.13245122e-05]
 [9.99898884e-01 9.44640033e-05 5.24433804e-06 1.40752937e-06]
 [9.96917117e-01 8.77305702e-04 9.33462129e-04 1.27211560e-03]]
[624.0616253339873]
************************************ 2 ************************************
[LightGBM] [Warning] num_threads is set with nthread=28, will be overridden by n_jobs=24. Current value: num_threads=24
Training until validation scores don't improve for 200 rounds
[100]	valid_0's multi_logloss: 0.0409499
[200]	valid_0's multi_logloss: 0.0430094
[300]	valid_0's multi_logloss: 0.0509631
Early stopping, best iteration is:
[132]	valid_0's multi_logloss: 0.0403365
Predicted probability matrix:
[[9.99766094e-01 2.27649008e-04 5.10883800e-06 1.14815289e-06]
 [4.90135572e-06 1.76149950e-05 9.99976985e-01 4.98221302e-07]
 [5.02979604e-07 1.75218549e-06 3.95701033e-06 9.99993788e-01]
 ...
 [1.88770643e-01 1.68366757e-04 8.11037284e-01 2.37062540e-05]
 [9.99929675e-01 5.78811648e-05 1.17246894e-05 7.18797217e-07]
 [9.78389668e-01 8.11758773e-03 9.64454422e-03 3.84820021e-03]]
[624.0616253339873, 570.6563156765974]
************************************ 3 ************************************
[LightGBM] [Warning] num_threads is set with nthread=28, will be overridden by n_jobs=24. Current value: num_threads=24
Training until validation scores don't improve for 200 rounds
[100]	valid_0's multi_logloss: 0.0347992
[200]	valid_0's multi_logloss: 0.0363279
[300]	valid_0's multi_logloss: 0.0424064
Early stopping, best iteration is:
[127]	valid_0's multi_logloss: 0.0340505
Predicted probability matrix:
[[9.99609883e-01 3.70753146e-04 1.32602819e-05 6.10350667e-06]
 [1.22141545e-05 4.55585638e-05 9.99941288e-01 9.39131609e-07]
 [6.81621049e-07 3.79067446e-06 5.15956743e-06 9.99990368e-01]
 ...
 [6.00128813e-02 1.98460513e-04 9.39774930e-01 1.37284585e-05]
 [9.99889796e-01 1.01020579e-04 7.44737754e-06 1.73593239e-06]
 [9.93567672e-01 2.09674541e-03 9.72441138e-04 3.36314145e-03]]
[624.0616253339873, 570.6563156765974, 529.4810745605361]
************************************ 4 ************************************
[LightGBM] [Warning] num_threads is set with nthread=28, will be overridden by n_jobs=24. Current value: num_threads=24
Training until validation scores don't improve for 200 rounds
[100]	valid_0's multi_logloss: 0.0426303
[200]	valid_0's multi_logloss: 0.0466983
[300]	valid_0's multi_logloss: 0.0544747
Early stopping, best iteration is:
[106]	valid_0's multi_logloss: 0.0425068
Predicted probability matrix:
[[9.99676694e-01 2.73223539e-04 4.25582706e-05 7.52417657e-06]
 [1.26638715e-05 1.05055412e-04 9.99880095e-01 2.18565228e-06]
 [4.06183581e-06 1.50831540e-05 2.03243762e-05 9.99960531e-01]
 ...
 [1.60476892e-01 2.13563287e-04 8.39219569e-01 8.99752495e-05]
 [9.99714443e-01 2.55687992e-04 2.45784124e-05 5.29012122e-06]
 [9.72822921e-01 7.10275449e-03 9.99610715e-03 1.00782169e-02]]
[624.0616253339873, 570.6563156765974, 529.4810745605361, 652.3274745655527]
************************************ 5 ************************************
[LightGBM] [Warning] num_threads is set with nthread=28, will be overridden by n_jobs=24. Current value: num_threads=24
Training until validation scores don't improve for 200 rounds
[100]	valid_0's multi_logloss: 0.0374434
[200]	valid_0's multi_logloss: 0.0388786
[300]	valid_0's multi_logloss: 0.0451416
Early stopping, best iteration is:
[122]	valid_0's multi_logloss: 0.0366685
Predicted probability matrix:
[[9.99744769e-01 2.35103603e-04 1.49822504e-05 5.14491703e-06]
 [1.17520698e-05 1.20944642e-04 9.99865223e-01 2.08049130e-06]
 [1.51365352e-06 3.32215936e-06 3.60549178e-06 9.99991559e-01]
 ...
 [3.74454974e-02 7.66420172e-05 9.62455775e-01 2.20853869e-05]
 [9.99837260e-01 1.52861288e-04 7.38539621e-06 2.49381247e-06]
 [9.67159636e-01 2.84279467e-03 1.31817376e-02 1.68158322e-02]]
[624.0616253339873, 570.6563156765974, 529.4810745605361, 652.3274745655527, 558.9042176962932]
lgb_score_list: [624.0616253339873, 570.6563156765974, 529.4810745605361, 652.3274745655527, 558.9042176962932]
lgb_score_mean: 587.0861415665934
lgb_score_std: 44.73504596598341
import pandas as pd
temp = pd.DataFrame(lgb_test)
result = pd.read_csv('sample_submit.csv')
result['label_0'] = temp[0]
result['label_1'] = temp[1]
result['label_2'] = temp[2]
result['label_3'] = temp[3]
result.to_csv('./时间序列1.0.csv', index=False)
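As a quick sanity check before submitting, each row of the averaged probability matrix should still sum to roughly 1, since every fold's multiclass output does; a minimal sketch using the columns above:

# Every row of the submission probabilities should sum to ~1.0
print(result[['label_0', 'label_1', 'label_2', 'label_3']].sum(axis=1).describe())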

Reposted from blog.csdn.net/The_dream1/article/details/115266429