Feature Engineering 1: Sampling Imbalanced Datasets with SMOTE and ADASYN


When training binary classification models, for example in medical diagnosis, network intrusion detection, or credit-card fraud detection, we often face severely imbalanced positive and negative samples. Training directly on such a dataset causes many problems: most machine learning techniques will largely ignore the minority class and consequently perform poorly on it, even though the minority class is usually the one that matters most (for example, delinquent accounts among card applications).

For example, if the ratio of positive to negative samples is 99:1, a classifier that simply predicts every sample as positive already achieves 99% accuracy.
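
The numbers are easy to verify with a toy example (a minimal sketch with made-up labels matching the 99:1 ratio above):

import numpy as np
from sklearn.metrics import accuracy_score

# 9900 positives and 100 negatives, i.e. a 99:1 class ratio
y_true = np.array([1] * 9900 + [0] * 100)
# A degenerate "classifier" that predicts positive for everything
y_pred = np.ones_like(y_true)
print(accuracy_score(y_true, y_pred))  # 0.99, despite having learned nothing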

When classes are imbalanced, the majority class tends to dominate the accuracy metric. There are two main ways to address this.

I. First Approach: Balanced Sampling

The first approach is to use balanced sampling to correct the severe imbalance between positive and negative samples. Typically we either over-sample the minority class or under-sample the majority class, choosing based on data volume: if the majority class is extremely large, under-sampling it is usually enough to meet the needs of model training; if the majority class is not that large, we instead over-sample the minority class. SMOTE and ADASYN are two commonly used over-sampling algorithms; both are briefly introduced below.

1. The SMOTE Algorithm

SMOTE stands for Synthetic Minority Over-sampling Technique. As the name suggests, its basic idea is to analyze the minority-class samples and synthesize new artificial samples from them, which are then added to the dataset.
Steps of the SMOTE algorithm:

  1. Run a nearest-neighbor search and compute the K nearest neighbors of each minority-class sample
  2. Randomly pick N samples from those K neighbors for random linear interpolation
  3. Construct the new minority-class samples (see the sketch after this list):
     x_new = x_i + rand(0,1) × (y_j - x_i),  j = 1, 2, ..., N
     where x_i is a minority-class observation and y_j is a sample drawn at random from its K nearest neighbors
  4. Combine the new samples with the original data to form the new dataset
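
To make the interpolation formula concrete, here is a minimal sketch of how a single synthetic sample could be generated, using sklearn's NearestNeighbors (the function name and structure are illustrative, not the imblearn internals):

import numpy as np
from sklearn.neighbors import NearestNeighbors

def smote_one_sample(X_minority, i, k=5, rng=None):
    # Generate one synthetic sample for the minority observation X_minority[i]
    rng = rng or np.random.default_rng(0)
    # Step 1: find the k nearest minority neighbors of x_i (excluding x_i itself)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_minority)
    _, idx = nn.kneighbors(X_minority[i:i + 1])
    neighbors = idx[0][1:]  # drop x_i itself
    # Steps 2-3: pick a random neighbor y_j and interpolate:
    # x_new = x_i + rand(0,1) * (y_j - x_i)
    j = rng.choice(neighbors)
    return X_minority[i] + rng.random() * (X_minority[j] - X_minority[i])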

Usage in Python
  The SMOTE over-sampling algorithm has a dedicated Python implementation: imblearn.over_sampling.SMOTE.

from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE

# Generate a synthetic imbalanced dataset; the ratio of the positive class
# (label 1) to the negative class (label 0) is roughly 99:1
X, y = make_classification(n_classes=2, class_sep=2,
                           weights=[0.01, 0.99], n_informative=3, n_redundant=1, flip_y=0,
                           n_features=20, n_clusters_per_class=1, n_samples=10000, random_state=10)
print('Original dataset shape %s' % Counter(y))

# Say we want over-sampling to bring the class ratio to 10:1.
# Note: as a float, sampling_strategy is the desired ratio of minority to
# majority samples AFTER resampling (here 0.1, i.e. 1:10) -- pay attention to this
sm = SMOTE(sampling_strategy=0.1, random_state=10, k_neighbors=5, n_jobs=-1)

X_res, y_res = sm.fit_resample(X, y)
print('Resampled dataset shape %s' % Counter(y_res))
Original dataset shape Counter({1: 9900, 0: 100})
Resampled dataset shape Counter({1: 9900, 0: 990})

2. Combining SMOTE with RandomUnderSampler

The original SMOTE paper recommends combining SMOTE over-sampling of the minority class with random under-sampling of the majority class.

from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler
from imblearn.pipeline import Pipeline

X, y = make_classification(n_classes=2, class_sep=2,
                           weights=[0.01, 0.99], n_informative=3, n_redundant=1, flip_y=0,
                           n_features=20, n_clusters_per_class=1, n_samples=100000, random_state=10)
print('Original dataset shape %s' % Counter(y))
# The positive:negative ratio is roughly 99:1 at this point

# With a very large dataset, we can over-sample the minority class with SMOTE
# and then under-sample the majority class. As a float, sampling_strategy is
# the desired minority:majority ratio after each step (0.1 -> 1:10, then 0.5 -> 1:2)
pipeline = Pipeline([('over', SMOTE(sampling_strategy=0.1)),
                     ('under', RandomUnderSampler(sampling_strategy=0.5))
                     ])

X_res, y_res = pipeline.fit_resample(X, y)
print('Resampled dataset shape %s' % Counter(y_res))
Original dataset shape Counter({1: 99000, 0: 1000})
Resampled dataset shape Counter({1: 19800, 0: 9900})

A common pitfall when using the pipeline:
TypeError: All intermediate steps should be transformers and implement fit and transform or be the string 'passthrough' 'SMOTE(sampling_strategy=0.1)' (type <class 'imblearn.over_sampling._smote.base.SMOTE'>) doesn't
This requires using imblearn.pipeline. The reason:

You should import make_pipeline from imblearn.pipeline and not from sklearn.pipeline: make_pipeline from sklearn needs the transformers to implement fit and transform methods but SMOTE does not implement transform
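
In other words, the fix is just to change where Pipeline (or make_pipeline) is imported from; a minimal before/after sketch:

# Wrong: sklearn's Pipeline requires every intermediate step to implement
# fit and transform, which samplers such as SMOTE do not
# from sklearn.pipeline import Pipeline

# Right: imblearn's Pipeline also accepts steps that implement fit_resample,
# so SMOTE and RandomUnderSampler can be chained before the final estimator
from imblearn.pipeline import Pipeline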

3. Borderline-SMOTE and SVMSMOTE

Borderline-SMOTE differs from plain SMOTE in that it only synthesizes new samples from minority-class samples that lie near the decision boundary (identified via their nearest neighbors), rather than from all minority samples. SVMSMOTE is a related variant that uses an SVM instead of KNN to locate that boundary: the borderline region is approximated by the support vectors obtained after training a standard SVM classifier on the original training set, and new samples are created by interpolating along the lines connecting each minority support vector to some of its nearest neighbors. The two variants are similar in spirit but not identical in their results.

from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import BorderlineSMOTE


X, y = make_classification(n_classes=2, class_sep=2,
                           weights=[0.01, 0.99], n_informative=3, n_redundant=1, flip_y=0,
                           n_features=20, n_clusters_per_class=1, n_samples=100000, random_state=10)

print('Original dataset shape %s' % Counter(y))

# Over-sample with Borderline-SMOTE
oversample = BorderlineSMOTE(sampling_strategy=0.5)
X, y = oversample.fit_resample(X, y)
print('Resampled dataset shape %s' % Counter(y))

from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import SVMSMOTE


X, y = make_classification(n_classes=2, class_sep=2,
                           weights=[0.01, 0.99], n_informative=3, n_redundant=1, flip_y=0,
                           n_features=20, n_clusters_per_class=1, n_samples=100000, random_state=10)

print('Original dataset shape %s' % Counter(y))

# Over-sample with SVMSMOTE
oversample = SVMSMOTE(sampling_strategy=0.5)
X, y = oversample.fit_resample(X, y)
print('Resampled dataset shape %s' % Counter(y))
Original dataset shape Counter({1: 99000, 0: 1000})
Resampled dataset shape Counter({1: 99000, 0: 49500})

4. ADASYN

ADASYN (Adaptive Synthetic Sampling) is an adaptive over-sampling method. Its key idea is to use the density distribution as a criterion to automatically decide how many synthetic samples to generate for each minority example: more synthetic instances are generated in regions of the feature space where the density of minority examples is low, and fewer where it is high.
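
To illustrate the density criterion, here is a minimal sketch of how ADASYN decides the per-sample budget, following the formulas in the original ADASYN paper (the function and variable names are illustrative, not the imblearn internals):

import numpy as np
from sklearn.neighbors import NearestNeighbors

def adasyn_allocation(X, y, minority_label=0, k=5, total_new=1000):
    # How many synthetic samples to generate around each minority point
    X_min = X[y == minority_label]
    # k nearest neighbors of each minority sample among ALL samples
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X_min)
    # r_i: fraction of the k neighbors belonging to the majority class;
    # large r_i means x_i sits in a sparse, hard-to-learn minority region
    r = np.array([(y[neigh[1:]] != minority_label).mean() for neigh in idx])
    # Normalize into a density distribution and allocate the total budget:
    # harder (sparser) minority samples receive more synthetic points
    r_hat = r / r.sum()
    return np.round(r_hat * total_new).astype(int)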

from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import ADASYN


X, y = make_classification(n_classes=2, class_sep=2,
                           weights=[0.01, 0.99], n_informative=3, n_redundant=1, flip_y=0,
                           n_features=20, n_clusters_per_class=1, n_samples=100000, random_state=10)
print('Original dataset shape %s' % Counter(y))

# Over-sample with ADASYN
oversample = ADASYN(sampling_strategy=0.5)
X, y = oversample.fit_resample(X, y)
print('Resampled dataset shape %s' % Counter(y))

Original dataset shape Counter({1: 99000, 0: 1000})
Resampled dataset shape Counter({1: 99000, 0: 49708})

5. Combining Balanced Sampling with a Decision Tree

First, as a baseline, evaluate a decision tree on the imbalanced data without any resampling:

from numpy import mean
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=10000, n_features=2, n_redundant=0,
	n_clusters_per_class=1, weights=[0.01, 0.99], flip_y=0, random_state=1)

model = DecisionTreeClassifier()
# Stratified 10-fold CV repeated 3 times (30 evaluations in total);
# each fold preserves the class distribution of the full dataset
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
scores = cross_val_score(model, X, y, scoring='roc_auc', cv=cv, n_jobs=-1)
print('Mean ROC AUC: %.3f' % mean(scores))
Mean ROC AUC: 0.769

SMOTE over-sampling

from numpy import mean
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.tree import DecisionTreeClassifier
from imblearn.pipeline import Pipeline
from imblearn.over_sampling import SMOTE

X, y = make_classification(n_samples=10000, n_features=2, n_redundant=0,
	n_clusters_per_class=1, weights=[0.99], flip_y=0, random_state=1)

steps = [('over', SMOTE()), ('model', DecisionTreeClassifier())]
pipeline = Pipeline(steps=steps)

cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
scores = cross_val_score(pipeline, X, y, scoring='roc_auc', cv=cv, n_jobs=-1)
print('Mean ROC AUC: %.3f' % mean(scores))
Mean ROC AUC: 0.825

Combining SMOTE with RandomUnderSampler

from numpy import mean
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.tree import DecisionTreeClassifier
from imblearn.pipeline import Pipeline
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler
# define dataset
X, y = make_classification(n_samples=10000, n_features=2, n_redundant=0,
	n_clusters_per_class=1, weights=[0.99], flip_y=0, random_state=1)

model = DecisionTreeClassifier()
# Over-sample the minority to 1:10, then under-sample the majority so the
# final majority:minority ratio is 2:1
over = SMOTE(sampling_strategy=0.1)
under = RandomUnderSampler(sampling_strategy=0.5)
steps = [('over', over), ('under', under), ('model', model)]
pipeline = Pipeline(steps=steps)

cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
scores = cross_val_score(pipeline, X, y, scoring='roc_auc', cv=cv, n_jobs=-1)
print('Mean ROC AUC: %.3f' % mean(scores))
Mean ROC AUC: 0.849

Effect of the number of nearest neighbors (k_neighbors) in SMOTE

from numpy import mean
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.tree import DecisionTreeClassifier
from imblearn.pipeline import Pipeline
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler

# define dataset
X, y = make_classification(n_samples=10000, n_features=2, n_redundant=0,
                           n_clusters_per_class=1, weights=[0.99], flip_y=0, random_state=1)
# values to evaluate
k_values = [1, 2, 3, 4, 5, 6, 7]
for k in k_values:
    # define pipeline
    model = DecisionTreeClassifier()
    over = SMOTE(sampling_strategy=0.1, k_neighbors=k)
    under = RandomUnderSampler(sampling_strategy=0.5)
    steps = [('over', over), ('under', under), ('model', model)]
    pipeline = Pipeline(steps=steps)
    # evaluate pipeline
    cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
    scores = cross_val_score(pipeline, X, y, scoring='roc_auc', cv=cv, n_jobs=-1)
    score = mean(scores)
    print('> k=%d, Mean ROC AUC: %.3f' % (k, score))
> k=1, Mean ROC AUC: 0.823
> k=2, Mean ROC AUC: 0.825
> k=3, Mean ROC AUC: 0.842
> k=4, Mean ROC AUC: 0.846
> k=5, Mean ROC AUC: 0.840
> k=6, Mean ROC AUC: 0.843
> k=7, Mean ROC AUC: 0.855

SVMSMOTE

from numpy import mean
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.tree import DecisionTreeClassifier
from imblearn.pipeline import Pipeline
from imblearn.over_sampling import SVMSMOTE
from imblearn.under_sampling import RandomUnderSampler

# define dataset
X, y = make_classification(n_samples=10000, n_features=2, n_redundant=0,
                           n_clusters_per_class=1, weights=[0.99], flip_y=0, random_state=1)
# values to evaluate
k_values = [1, 2, 3, 4, 5, 6, 7]
for k in k_values:
    # define pipeline
    model = DecisionTreeClassifier()
    over = SVMSMOTE(sampling_strategy=0.1, k_neighbors=k)
    under = RandomUnderSampler(sampling_strategy=0.5)
    steps = [('over', over), ('under', under), ('model', model)]
    pipeline = Pipeline(steps=steps)
    # evaluate pipeline
    cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
    scores = cross_val_score(pipeline, X, y, scoring='roc_auc', cv=cv, n_jobs=-1)
    score = mean(scores)
    print('> k=%d, Mean ROC AUC: %.3f' % (k, score))
> k=1, Mean ROC AUC: 0.839
> k=2, Mean ROC AUC: 0.848
> k=3, Mean ROC AUC: 0.840
> k=4, Mean ROC AUC: 0.855
> k=5, Mean ROC AUC: 0.844
> k=6, Mean ROC AUC: 0.846
> k=7, Mean ROC AUC: 0.846

ADASYN

from numpy import mean
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.model_selection import RepeatedStratifiedKFold
from sklearn.tree import DecisionTreeClassifier
from imblearn.pipeline import Pipeline
from imblearn.over_sampling import ADASYN
from imblearn.under_sampling import RandomUnderSampler

# define dataset
X, y = make_classification(n_samples=10000, n_features=2, n_redundant=0,
                           n_clusters_per_class=1, weights=[0.99], flip_y=0, random_state=1)

# define pipeline
model = DecisionTreeClassifier()
over = ADASYN(sampling_strategy=0.1)
under = RandomUnderSampler(sampling_strategy=0.5)
steps = [('over', over), ('under', under), ('model', model)]
pipeline = Pipeline(steps=steps)
# evaluate pipeline
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)
scores = cross_val_score(pipeline, X, y, scoring='roc_auc', cv=cv, n_jobs=-1)
score = mean(scores)
print('Mean ROC AUC: %.3f' % score)
Mean ROC AUC: 0.826

II. Second Approach: Use Different Evaluation Metrics

As discussed above, accuracy itself is problematic on severely imbalanced datasets, because the majority class dominates the metric. Instead we can use precision and recall, requiring adequate precision and recall on both the positive and the negative class, or use AUC.

Precision is the fraction of samples the classifier labels positive that are truly positive: Precision = TP / (TP + FP).
Recall is the fraction of truly positive samples that the classifier correctly labels positive: Recall = TP / (TP + FN).
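
Both metrics, along with AUC, are available in sklearn.metrics (a minimal sketch; the labels and scores below are made-up placeholders):

from sklearn.metrics import precision_score, recall_score, roc_auc_score

y_true = [0, 0, 0, 0, 1, 1, 1, 1, 1, 1]             # ground-truth labels
y_pred = [0, 0, 1, 0, 1, 1, 0, 1, 1, 1]             # hard predictions
y_score = [.1, .2, .6, .3, .9, .8, .4, .7, .9, .6]  # predicted probabilities

print('Precision: %.3f' % precision_score(y_true, y_pred))  # TP / (TP + FP)
print('Recall:    %.3f' % recall_score(y_true, y_pred))     # TP / (TP + FN)
print('ROC AUC:   %.3f' % roc_auc_score(y_true, y_score))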

Note that switching metrics does not by itself undo the effect of the imbalance on the model: with too few minority samples, the model still cannot learn an effective decision boundary.

Reposted from blog.csdn.net/weixin_46649052/article/details/114735469