sklearn Study Notes: SVM


Support Vector Machine:

# -*- coding: utf-8 -*-
import sklearn
from sklearn.svm import SVC
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn import datasets
import pandas as pd
import numpy


def getData_1():

    iris = datasets.load_iris()
    X = iris.data   # feature matrix: 150 x 4, one sample per row, 4 features per sample
    y = iris.target # target vector: 150 entries, each one sample's class label


    df1 = pd.DataFrame(X, columns=['SepalLengthCm', 'SepalWidthCm', 'PetalLengthCm', 'PetalWidthCm'])
    df1['target'] = y

    return df1

df=getData_1()


# note: iloc[:, 0:3] keeps only the first three feature columns (PetalWidthCm is left out)
X_train, X_test, y_train, y_test = train_test_split(df.iloc[:, 0:3], df['target'], test_size=0.3, random_state=42)
print(X_train, X_test, y_train, y_test)


model = SVC(C=1.0, kernel='rbf', gamma='auto')
"""参数
---
    C:误差项的惩罚参数C
    gamma: 核相关系数。浮点数,If gamma is ‘auto’ then 1/n_features will be used instead.
"""


model.fit(X_train, y_train)
predict = model.predict(X_test)   # predicted class labels for the held-out test set
print(predict)
print(y_test.values)

print('SVC classification score: {:.3f}'.format(model.score(X_test, y_test)))   # mean accuracy on the test set

Result:

[1 0 2 1 1 0 1 2 1 1 2 0 0 0 0 1 2 1 1 2 0 2 0 2 2 2 2 2 0 0 0 0 1 0 0 2 1
 0 0 0 2 1 1 0 0]
[1 0 2 1 1 0 1 2 1 1 2 0 0 0 0 1 2 1 1 2 0 2 0 2 2 2 2 2 0 0 0 0 1 0 0 2 1
 0 0 0 2 1 1 0 0]

SVC classification score: 1.000

The accuracy is an astonishing 100%, much higher than the linear regression and naive Bayes classifiers...
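The perfect score above comes from one particular 70/30 split, so it can flatter the model. The sketch below is a minimal check that reuses the same three feature columns and the same SVC settings, and estimates accuracy with 5-fold cross-validation via cross_val_score; the exact numbers will differ from the single-split result above.

# -*- coding: utf-8 -*-
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn import datasets

iris = datasets.load_iris()
X = iris.data[:, 0:3]   # same three feature columns as in the script above
y = iris.target

model = SVC(C=1.0, kernel='rbf', gamma='auto')
scores = cross_val_score(model, X, y, cv=5)   # accuracy on each of the 5 folds
print(scores)
print('mean cross-validated accuracy: {:.3f}'.format(scores.mean()))

cross_val_score clones the estimator for every fold, so the model fitted earlier in the post is not modified.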
