Training several classification models on the iris dataset

Train several common classification algorithms on the iris data and evaluate each of them with K-fold cross-validation.

K-fold cross-validation: sklearn.model_selection.KFold(n_splits=k, shuffle=False, random_state=None)

Idea: split the dataset into n_splits mutually exclusive subsets (folds). In each round, one fold serves as the validation set and the remaining n_splits-1 folds as the training set; after n_splits rounds of training and testing you obtain n_splits scores.

Parameters:
n_splits: how many folds to split the data into
shuffle: whether to shuffle the samples before splitting
① If False, the samples are split in their original order and every call produces the same folds.
② If True, the samples are shuffled first, so the folds differ between runs unless random_state is fixed to an integer (see the sketch below).
random_state: random seed; it only takes effect when shuffle=True
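
A minimal sketch of how shuffle and random_state behave, using a toy array of 10 samples (the name X_demo and the fold count are just for illustration):

import numpy as np
from sklearn.model_selection import KFold

X_demo = np.arange(10)  # toy data: 10 samples

# shuffle=False: folds are the same consecutive blocks on every call
for train_index, test_index in KFold(n_splits=5, shuffle=False).split(X_demo):
    print("test fold:", test_index)

# shuffle=True with a fixed random_state: folds are shuffled but reproducible
for train_index, test_index in KFold(n_splits=5, shuffle=True, random_state=0).split(X_demo):
    print("test fold:", test_index)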

Dataset: iris (in this example it is read from a local file)
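
If the local iris.csv is not at hand, an equivalent DataFrame can be built from scikit-learn's bundled copy of the dataset. This is only a sketch of an alternative; the code below still assumes the local file, and the column names here simply match the ones used in that code:

import pandas as pd
from sklearn.datasets import load_iris

iris = load_iris()
# four feature columns plus a category column, mirroring the local iris.csv
data = pd.DataFrame(iris.data, columns=["sepal_length", "sepal_width", "petal_length", "petal_width"])
data["category"] = iris.target  # note: integer-encoded labels (0/1/2) rather than species names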

Code


import pandas as pd
import numpy as np

from sklearn.metrics import accuracy_score
from sklearn.model_selection import KFold


from sklearn import tree
from sklearn import naive_bayes
from sklearn import svm
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier


# Read the data
data = pd.read_csv('iris.csv', header=None)
data.columns = ["sepal_length", "sepal_width", "petal_length", "petal_width", "category"]

X = data.iloc[:, 0:4]  # features
Y = data.iloc[:, 4]    # labels


# 10-fold cross-validation; shuffle the samples before splitting
k = 10
kf = KFold(n_splits=k, shuffle=True)

def eval_model(model_name, model):
    accuracies = []
    i = 0
    for train_index, test_index in kf.split(X):  # split into train/test indices
        x_train, x_test = X.iloc[train_index], X.iloc[test_index]
        y_train, y_test = Y.iloc[train_index], Y.iloc[test_index]

        model.fit(x_train, y_train)  # train
        y_predict = model.predict(x_test)  # predict

        accuracy = accuracy_score(y_pred=y_predict, y_true=y_test)  # accuracy on this fold
        accuracies.append(accuracy)
        i += 1
        print('Fold {}: {}'.format(i, accuracy))

    print(model_name + " mean accuracy: ", np.mean(accuracies))
    
    
models = {
    'decision tree': lambda: tree.DecisionTreeClassifier(),
    'random forest': lambda: RandomForestClassifier(n_estimators=100),
    'naive bayes': lambda: naive_bayes.GaussianNB(),
    'svm': lambda: svm.SVC(gamma='scale'),
    'GBDT': lambda: GradientBoostingClassifier(),
    'MLP': lambda: MLPClassifier(max_iter=1000),
}


for name,m in models.items():
    eval_model(name,m())
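
For comparison, the same k-fold evaluation can be written more compactly with scikit-learn's cross_val_score. This sketch reuses the X, Y, kf and models objects defined above and is not part of the original code:

from sklearn.model_selection import cross_val_score

for name, m in models.items():
    scores = cross_val_score(m(), X, Y, cv=kf, scoring='accuracy')
    print(name + " mean accuracy: ", scores.mean())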


Reposted from blog.csdn.net/d1240673769/article/details/88817833