A First Taste of Machine Learning: Organizing sklearn's Ensemble Algorithms

This post records my first experience using Python's sklearn library to run a machine-learning train/test workflow. The algorithms covered include random forest, bagging (bootstrap aggregating), and boosting.
Because training and testing look almost identical across these methods, I grouped them into classes, which makes them easier to understand and to look back on later. The code follows.

The main entry script:

'''
Try out the different methods.
'''

from RandomForest import RandomForest
from Bagging import Bagging
from Boosting import Boosting
from ExtraTrees import ExtraTrees

class Main(object):
    @staticmethod
    def ChooseMethod():
        method = int(input('Choose an analysis method:\n'
                           '1. Random forest\n'
                           '2. Bagging\n'
                           '3. Boosting (AdaBoost)\n'
                           '4. Extra Trees\n'
                           'Your choice: '))
        if method == 1:
            x = RandomForest()
        elif method == 2:
            x = Bagging()
        elif method == 3:
            x = Boosting()
        elif method == 4:
            x = ExtraTrees()
        else:
            raise ValueError('Unknown method: %d' % method)
        x.processData()

if __name__ == '__main__':
    Main.ChooseMethod()

The base class: its only job is to read and preprocess the data, while the parameter-tuning and training code lives in the subclasses.


import pandas as pd
from sklearn.model_selection import train_test_split
import abc

class Readdata(abc.ABC):
    # Note: in Python 3, inherit from abc.ABC; the Python 2 style
    # `__metaclass__ = abc.ABCMeta` has no effect here.

    def __init__(self):
        self.fname = 'train.csv'

    def read_dataset(self, fname):
        data = pd.read_csv(fname, index_col=0)
        # Drop columns that are hard to use directly as features
        data.drop(['Name', 'Ticket', 'Cabin'], axis=1, inplace=True)
        # Encode categorical columns as integer indices
        labels = data['Sex'].unique().tolist()
        data['Sex'] = data['Sex'].apply(lambda s: labels.index(s))
        labels = data['Embarked'].unique().tolist()
        data['Embarked'] = data['Embarked'].apply(lambda n: labels.index(n))
        data = data.fillna(0)
        return data

    def processData(self):
        train = self.read_dataset(self.fname)
        y = train['Survived'].values
        X = train.drop(['Survived'], axis=1).values
        X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
        self.trainAndtest(X_train, y_train, X_test, y_test)
        print("\n wait...\n")
        self.optimal_para(X, y)

    @abc.abstractmethod
    def trainAndtest(self, X_train, y_train, X_test, y_test):
        pass

    @abc.abstractmethod
    def optimal_para(self, X, y):
        pass
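The base class is a textbook template-method pattern: `processData` drives the whole workflow and defers `trainAndtest` / `optimal_para` to whichever subclass is instantiated. A stripped-down sketch of the mechanism (the `DummyModel` subclass is mine, for illustration only):

```python
import abc

class Readdata(abc.ABC):
    """Simplified version of the base class above."""

    def processData(self, X_train, y_train, X_test, y_test):
        # The workflow lives here; the concrete step is deferred.
        self.trainAndtest(X_train, y_train, X_test, y_test)

    @abc.abstractmethod
    def trainAndtest(self, X_train, y_train, X_test, y_test):
        pass

class DummyModel(Readdata):
    def trainAndtest(self, X_train, y_train, X_test, y_test):
        self.called = True  # record that the hook ran

m = DummyModel()
m.processData([[0]], [0], [[1]], [1])
print(m.called)  # True
```

Because `trainAndtest` is abstract, instantiating `Readdata` directly raises a `TypeError`, which is exactly what keeps the subclasses honest.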

(One subclass) The random forest code:

import time
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier
from ReadData import Readdata
from sklearn.metrics import precision_score

class RandomForest(Readdata):

    def __init__(self):
        super(RandomForest, self).__init__()

    def trainAndtest(self, X_train, y_train, X_test, y_test):
        clf = RandomForestClassifier(min_samples_split=12)
        clf.fit(X_train, y_train)
        y_pred = clf.predict(X_test)

        print("Random forest classifier")
        print("Train score:", clf.score(X_train, y_train))
        print("Test score:", clf.score(X_test, y_test))
        print("Precision:", precision_score(y_test, y_pred))

    def optimal_para(self, X, y):
        # time.clock() was removed in Python 3.8; use perf_counter() instead
        start = time.perf_counter()
        entropy_thresholds = np.linspace(0, 1, 50)
        gini_thresholds = np.linspace(0, 0.1, 50)

        # Parameter grid (each dict is searched separately):
        param_grid = [{'criterion': ['entropy'], 'min_impurity_decrease': entropy_thresholds},
                      {'criterion': ['gini'], 'min_impurity_decrease': gini_thresholds},
                      {'max_depth': np.arange(2, 10)},
                      {'min_samples_split': np.arange(2, 20)},
                      {'n_estimators': np.arange(2, 20)}
                      ]
        clf = GridSearchCV(RandomForestClassifier(), param_grid, cv=5)
        clf.fit(X, y)

        print("Elapsed:", time.perf_counter() - start)
        print("best param:{0}\nbest score:{1}".format(clf.best_params_, clf.best_score_))

(Another subclass) The boosting (AdaBoost) code:

from ReadData import Readdata
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import precision_score


class Boosting(Readdata):
    def __init__(self):
        super(Boosting, self).__init__()

    def trainAndtest(self, X_train, y_train, X_test, y_test):
        clf = AdaBoostClassifier()
        clf.fit(X_train, y_train)
        y_pred = clf.predict(X_test)

        print("Boosting (AdaBoost)")
        print("Train score:", clf.score(X_train, y_train))
        print("Test score:", clf.score(X_test, y_test))
        print("Precision:", precision_score(y_test, y_pred))
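The `Bagging` and `ExtraTrees` subclasses imported by the main script are not reproduced in the post, but they follow the exact same template. A self-contained sketch of what the Bagging one might look like (my reconstruction, not the original file; the synthetic-data demo at the bottom replaces the base class's Titanic CSV loading):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.metrics import precision_score
from sklearn.model_selection import train_test_split

class Bagging:
    """In the original code this would inherit from Readdata."""

    def trainAndtest(self, X_train, y_train, X_test, y_test):
        clf = BaggingClassifier(n_estimators=10)  # bags decision trees by default
        clf.fit(X_train, y_train)
        y_pred = clf.predict(X_test)
        print("Bagging")
        print("Train score:", clf.score(X_train, y_train))
        print("Test score:", clf.score(X_test, y_test))
        print("Precision:", precision_score(y_test, y_pred))
        return clf

# Demo on synthetic data
X, y = make_classification(n_samples=200, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
clf = Bagging().trainAndtest(X_train, y_train, X_test, y_test)
```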

Looking back, if you set aside how the algorithms work internally and only care about calling the Python library, none of this is very hard; switching between algorithms changes only a few lines.


First, the data needs preprocessing: the features and the label must be separated, and the full dataset must be split into a training part and a test part.

y = train['Survived'].values
X = train.drop(['Survived'], axis=1).values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
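A quick self-contained check (toy data, not the Titanic set) of what `train_test_split` returns with `test_size=0.2`:

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(20).reshape(10, 2)  # 10 samples, 2 features
y = np.arange(10) % 2             # 10 labels
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
print(X_train.shape, X_test.shape)  # (8, 2) (2, 2)
```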


Next, feed the data into a classifier (or another algorithm) and train it:

clf = RandomForestClassifier(min_samples_split = 12)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)

The classifier's parameters can be tuned for each dataset, and there are many of them to adjust; how to tune them well is an important part of machine learning.
Once the classifier has been given its parameters and fitted to the training data, it can be considered a trained model.
Running the test set through it then yields the predictions, y_pred.


Finally, display the results:

print("Train score:", clf.score(X_train, y_train))
print("Test score:", clf.score(X_test, y_test))
print("Precision:", precision_score(y_test, y_pred))
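Precision is TP / (TP + FP): of the samples predicted positive, how many really are positive. A tiny worked example (the numbers are mine):

```python
from sklearn.metrics import precision_score

y_test = [1, 0, 1, 1]
y_pred = [1, 1, 1, 0]
# Predicted positive: indices 0, 1, 2. Of those, 0 and 2 are truly positive.
# TP = 2, FP = 1  ->  precision = 2 / 3
print(precision_score(y_test, y_pred))  # 0.666...
```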

All of these are existing interface functions on the classifier; just call them directly.


Parameter optimization also has ready-made library support, for example the GridSearchCV() used here; its full usage is worth exploring further.

In short: split the data into several folds, feed them to the classifier, step through candidate values for each parameter, and keep the best-scoring combination; GridSearchCV() wraps this whole process. The relevant code:

entropy_thresholds = np.linspace(0, 1, 50)
gini_thresholds = np.linspace(0, 0.1, 50)

# Parameter grid:
param_grid = [{'criterion': ['entropy'], 'min_impurity_decrease': entropy_thresholds},
              {'criterion': ['gini'], 'min_impurity_decrease': gini_thresholds},
              {'max_depth': np.arange(2, 10)},
              {'min_samples_split': np.arange(2, 20)},
              {'n_estimators': np.arange(2, 20)}
              ]
clf = GridSearchCV(RandomForestClassifier(), param_grid, cv=5)
clf.fit(X, y)
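One detail worth knowing: when `param_grid` is a list of dicts, GridSearchCV searches each dict separately rather than taking the cross product, so in the grid above `criterion` and `min_impurity_decrease` vary together while `max_depth`, `min_samples_split`, and `n_estimators` are each swept in isolation. A small self-contained run (the toy grid and synthetic data are mine) makes the candidate count visible:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=100, random_state=0)
param_grid = [{'max_depth': [2, 4, 6]},       # swept on its own...
              {'min_samples_split': [2, 8]}]  # ...separately from this dict
clf = GridSearchCV(RandomForestClassifier(n_estimators=10, random_state=0),
                   param_grid, cv=3)
clf.fit(X, y)
print(len(clf.cv_results_['params']))  # 5 candidates (3 + 2), not 6 (3 x 2)
```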


Reposted from blog.csdn.net/legalhighhigh/article/details/80601961