Solving data imbalance with the python imblearn toolbox (Part 4): combined sampling, ensemble methods, and other details

1. Combination of over- and under-sampling

This section addresses the noisy samples that SMOTE can generate; the remedy is cleaning the space resulting from over-sampling. The idea is to first over-sample with SMOTE, then obtain a cleaner space by removing samples with Tomek's links or the edited nearest-neighbours rule. The corresponding classes are SMOTETomek and SMOTEENN.
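
The snippets in this post assume an imbalanced dataset X, y and a train/test split that the original leaves implicit; a minimal sketch to make them runnable (the class weights below are arbitrary):

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# toy 3-class dataset with a deliberately skewed class distribution
X, y = make_classification(n_samples=5000, n_features=2, n_informative=2,
                           n_redundant=0, n_repeated=0, n_classes=3,
                           n_clusters_per_class=1,
                           weights=[0.01, 0.05, 0.94], random_state=10)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)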

from imblearn.combine import SMOTEENN
smote_enn = SMOTEENN(random_state=0)  # SMOTE followed by ENN cleaning
X_resampled, y_resampled = smote_enn.fit_resample(X, y)

from imblearn.combine import SMOTETomek
smote_tomek = SMOTETomek(random_state=0)  # SMOTE followed by Tomek-link removal
X_resampled, y_resampled = smote_tomek.fit_resample(X, y)
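
To see the effect, compare the class distribution before and after resampling (Counter is from the standard library):

from collections import Counter
print(sorted(Counter(y).items()))            # original class counts
print(sorted(Counter(y_resampled).items()))  # counts after resampling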

2. Ensemble of samplers

2.1 Bagging classifier

**Bagging:** draw samples with replacement to form different subsets of the data, then build a classifier (of a given type) on each subset.
scikit-learn provides the BaggingClassifier class for this, but with imbalanced data there is no guarantee that each subset is balanced, so the results are biased toward the majority classes.
In imblearn, the BalancedBaggingClassifier class resamples each subset before training each classifier. Its parameters are the same as those of sklearn's BaggingClassifier, except for two additions, sampling_strategy and replacement, which control how the random under-sampling is performed.

from imblearn.ensemble import BalancedBaggingClassifier
from sklearn.metrics import balanced_accuracy_score
from sklearn.tree import DecisionTreeClassifier

bbc = BalancedBaggingClassifier(base_estimator=DecisionTreeClassifier(),
                                sampling_strategy='auto',
                                replacement=False,
                                random_state=0)
bbc.fit(X_train, y_train)
y_pred = bbc.predict(X_test)
balanced_accuracy_score(y_test, y_pred)  # compute the balanced accuracy

2.2 Forest of randomized trees (random forest)

A balanced bootstrap subset of the data is used to build each tree.

from imblearn.ensemble import BalancedRandomForestClassifier
brf = BalancedRandomForestClassifier(n_estimators=100, random_state=0)
brf.fit(X_train, y_train)
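
The fitted forest can then be evaluated the same way as the bagging example above (a sketch reusing the earlier test split and balanced_accuracy_score import):

y_pred = brf.predict(X_test)
balanced_accuracy_score(y_test, y_pred)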

2.3 Boosting

Train n weak classifiers on subsets of the data, then fuse them with weights to produce the final classifier.

2.3.1 RUSBoostClassifier

A random under-sampling step is performed before each boosting iteration.

from imblearn.ensemble import RUSBoostClassifier
rusboost = RUSBoostClassifier(random_state=0)
rusboost.fit(X_train, y_train)

2.3.2 EasyEnsembleClassifier, i.e. AdaBoost-based

AdaBoost computes each weak classifier's error rate, assigns larger weights to misclassified samples and smaller weights to correctly classified ones. Any weak classifier with accuracy above 0.5 can become a member of the final classifier, and the more accurate a weak classifier is, the larger its weight. EasyEnsembleClassifier combines such AdaBoost learners, each trained on a balanced bootstrap subset.

from imblearn.ensemble import EasyEnsembleClassifier
eec = EasyEnsembleClassifier(random_state=0)
eec.fit(X_train, y_train)

3. Miscellaneous samplers

3.1 Custom sampler: FunctionSampler

from imblearn import FunctionSampler

def func(X, y):
    # trivial example: keep only the first 10 samples
    return X[:10], y[:10]

sampler = FunctionSampler(func=func)
X_res, y_res = sampler.fit_resample(X, y)
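
A slightly more realistic sketch (random_under_sample and its seed argument are my own illustration): a hand-rolled under-sampler whose extra arguments are forwarded through FunctionSampler's kw_args parameter:

import numpy as np
from imblearn import FunctionSampler

def random_under_sample(X, y, seed=0):
    # keep as many samples per class as the rarest class has
    rng = np.random.RandomState(seed)
    classes, counts = np.unique(y, return_counts=True)
    n_min = counts.min()
    idx = np.hstack([rng.choice(np.where(y == c)[0], n_min, replace=False)
                     for c in classes])
    return X[idx], y[idx]

sampler = FunctionSampler(func=random_under_sample, kw_args={'seed': 42})
X_res, y_res = sampler.fit_resample(X, y)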

3.2 Custom generators (balanced mini-batches for TensorFlow and Keras)

3.2.1 TensorFlow generator: imblearn.tensorflow.balanced_batch_generator

import numpy as np
X = X.astype(np.float32)
from imblearn.under_sampling import RandomUnderSampler
from imblearn.tensorflow import balanced_batch_generator
training_generator, steps_per_epoch = balanced_batch_generator(
    X, y, sample_weight=None, sampler=RandomUnderSampler(),
    batch_size=10, random_state=42)

# How to use training_generator and steps_per_epoch (TensorFlow 1.x graph API):
learning_rate, epochs = 0.01, 10
input_size, output_size = X.shape[1], 3
import tensorflow as tf

def init_weights(shape):
    return tf.Variable(tf.random_normal(shape, stddev=0.01))

def accuracy(y_true, y_pred):
    return np.mean(np.argmax(y_pred, axis=1) == y_true)

# input and output placeholders
data = tf.placeholder("float32", shape=[None, input_size])
targets = tf.placeholder("int32", shape=[None])
# build the model and weights
W = init_weights([input_size, output_size])
b = init_weights([output_size])
out_act = tf.nn.sigmoid(tf.matmul(data, W) + b)
# build the loss, predict, and train operators
cross_entropy = tf.nn.sparse_softmax_cross_entropy_with_logits(
    logits=out_act, labels=targets)
loss = tf.reduce_sum(cross_entropy)
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
train_op = optimizer.minimize(loss)
predict = tf.nn.softmax(out_act)
# initialization of all variables in the graph
init = tf.global_variables_initializer()
with tf.Session() as sess:
    print('Starting training')
    sess.run(init)
    for e in range(epochs):
        for i in range(steps_per_epoch):                 # the key part: iterate
            X_batch, y_batch = next(training_generator)  # over balanced batches
            sess.run([train_op, loss],
                     feed_dict={data: X_batch, targets: y_batch})
        # after each epoch, report accuracy on the full training set
        predicts_train = sess.run(predict, feed_dict={data: X})
        print("epoch: {} train accuracy: {:.3f}"
              .format(e, accuracy(y, predicts_train)))

3.2.2 Keras generator

# define a logistic-regression model
import keras
y = keras.utils.to_categorical(y, 3)
model = keras.Sequential()
model.add(keras.layers.Dense(y.shape[1], input_dim=X.shape[1],
                             activation='softmax'))
model.compile(optimizer='sgd', loss='categorical_crossentropy',
              metrics=['accuracy'])

# imblearn.keras.balanced_batch_generator yields balanced mini-batches
from imblearn.keras import balanced_batch_generator
training_generator, steps_per_epoch = balanced_batch_generator(
    X, y, sampler=RandomUnderSampler(), batch_size=10, random_state=42)
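
With the function form, the returned steps_per_epoch must be passed to fit_generator explicitly (a sketch, reusing the model defined above):

callback_history = model.fit_generator(generator=training_generator,
                                       steps_per_epoch=steps_per_epoch,
                                       epochs=10, verbose=0)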

# alternatively, use imblearn.keras.BalancedBatchGenerator, which knows its own length
from imblearn.keras import BalancedBatchGenerator
training_generator = BalancedBatchGenerator(
    X, y, sampler=RandomUnderSampler(), batch_size=10, random_state=42)
callback_history = model.fit_generator(generator=training_generator,
                                       epochs=10, verbose=0)

4. Metrics

Currently, the only metric in sklearn aimed at imbalanced data is sklearn.metrics.balanced_accuracy_score.
imblearn.metrics provides two further groups of metrics for assessing classifier quality.

4.1 Sensitivity and specificity metrics

  • Sensitivity: the true positive rate, i.e. recall.
  • Specificity: the true negative rate.

Accordingly, three metrics are added (usage is sketched after this list):

  • sensitivity_specificity_support: returns the sensitivity, the specificity, and the support
  • sensitivity_score
  • specificity_score
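
A minimal usage sketch (y_test and y_pred as in the bagging example above; the macro average is one choice among several):

from imblearn.metrics import (sensitivity_specificity_support,
                              sensitivity_score, specificity_score)

sensitivity_specificity_support(y_test, y_pred, average='macro')
sensitivity_score(y_test, y_pred, average='macro')
specificity_score(y_test, y_pred, average='macro')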

4.2 Additional metrics specific to imbalanced datasets

Metrics added specifically for imbalanced data:

  • geometric_mean_score: computes the geometric mean (G-mean, the root of the product of the class-wise sensitivities), described as follows:

The geometric mean (G-mean) is the root of the product of class-wise sensitivity. This measure tries to maximize the accuracy on each of the classes while keeping these accuracies balanced. For binary classification G-mean is the square root of the product of the sensitivity and specificity. For multi-class problems it is a higher root of the product of sensitivity for each class.

  • make_index_balanced_accuracy: balances any scoring function according to the balanced accuracy (see the sketch below).
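
A usage sketch for both (alpha=0.1 and squared=True are the library defaults; make_index_balanced_accuracy returns a decorator, so it wraps an existing scorer):

from imblearn.metrics import geometric_mean_score, make_index_balanced_accuracy

geometric_mean_score(y_test, y_pred)

# wrap G-mean with the index balanced accuracy (IBA)
gmean_iba = make_index_balanced_accuracy(alpha=0.1, squared=True)(geometric_mean_score)
gmean_iba(y_test, y_pred)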

Reposted from blog.csdn.net/mathlxj/article/details/89677701