SVM → 8. SVM in Practice → 4. Custom Kernels and Multi-class Classification


  1. Custom Kernels
  • Using a Python function as the kernel

import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm, datasets

# import some data to play with
iris = datasets.load_iris()
X = iris.data[:, :2]  # we only take the first two features. We could
                      # avoid this ugly slicing by using a two-dim dataset
Y = iris.target


def my_kernel(X, Y):
    """
    We create a custom kernel:

                 
    k(X, Y) = X•Y.T
                 
    """
    return np.dot(X, Y.T)


h = .02  # step size in the mesh

# we create an instance of SVM and fit our data.
clf = svm.SVC(kernel=my_kernel)
clf.fit(X, Y)

# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
Z = clf.predict(np.vstack([xx.ravel(), yy.ravel()]).T)

# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.pcolormesh(xx, yy, Z, cmap=plt.cm.Paired)

# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=Y, cmap=plt.cm.Paired, edgecolors='k')
plt.title('3-Class classification using Support Vector Machine with custom'
          ' kernel')
plt.show()
Result: np.unique(Z) returns array([0, 1, 2]), i.e. the mesh is partitioned into regions for all three classes.
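A Python kernel function is not limited to the dot product: any callable that returns the (n_samples_A, n_samples_B) kernel matrix between two sample sets can be passed as kernel. A minimal sketch of an RBF kernel written by hand (the name my_rbf_kernel and gamma=0.5 are arbitrary choices for illustration):

import numpy as np
from scipy.spatial.distance import cdist
from sklearn import svm, datasets

iris = datasets.load_iris()
X = iris.data[:, :2]
Y = iris.target

def my_rbf_kernel(A, B, gamma=0.5):
    """k(a, b) = exp(-gamma * ||a - b||^2) as a (len(A), len(B)) matrix."""
    return np.exp(-gamma * cdist(A, B, 'sqeuclidean'))

clf = svm.SVC(kernel=my_rbf_kernel)
clf.fit(X, Y)
print(clf.score(X, Y))  # training accuracy with the hand-written RBF kernel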
  • Using a precomputed Gram matrix
    1. Set the parameter kernel='precomputed'.
    2. When calling fit, pass the Gram matrix of the training data (np.dot(X, X.T)) in place of X.
    3. When calling predict, the Gram matrix is that between test and training data, Xtest · Xtrain.T, i.e. np.dot(Xtest, Xtrain.T).
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm, datasets

# import some data to play with
iris = datasets.load_iris()
X = iris.data[:, :2]  # we only take the first two features. We could
                      # avoid this ugly slicing by using a two-dim dataset
Y = iris.target

gram = np.dot(X, X.T)  # Gram matrix of the training data

h = .02  # step size in the mesh

# we create an instance of SVM and fit our data.
clf = svm.SVC(kernel='precomputed')
clf.fit(gram, Y)

# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
Xtest=np.vstack([xx.ravel(), yy.ravel()]).T
Z = clf.predict(np.dot(Xtest, X.T))

# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.pcolormesh(xx, yy, Z, cmap=plt.cm.Paired)

# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=Y, cmap=plt.cm.Paired, edgecolors='k')
plt.title('3-Class classification using Support Vector Machine with precomputed'
          ' kernel')
plt.show()
  • Using the linear kernel, to compare against the two variants above
import numpy as np
import matplotlib.pyplot as plt
from sklearn import svm, datasets

# import some data to play with
iris = datasets.load_iris()
X = iris.data[:, :2]  # we only take the first two features. We could
                      # avoid this ugly slicing by using a two-dim dataset
Y = iris.target

h = .02  # step size in the mesh

# we create an instance of SVM and fit our data.
clf = svm.SVC(kernel='linear')
clf.fit(X, Y)

# Plot the decision boundary. For that, we will assign a color to each
# point in the mesh [x_min, x_max]x[y_min, y_max].
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))
Xtest = np.vstack([xx.ravel(), yy.ravel()]).T
Z = clf.predict(Xtest)

# Put the result into a color plot
Z = Z.reshape(xx.shape)
plt.pcolormesh(xx, yy, Z, cmap=plt.cm.Paired)

# Plot also the training points
plt.scatter(X[:, 0], X[:, 1], c=Y, cmap=plt.cm.Paired, edgecolors='k')
plt.title('3-Class classification using Support Vector Machine with linear'
          ' kernel')
plt.show()
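Since k(X, Y) = X · Y.T is exactly the linear kernel, all three snippets above should produce the same decision regions. A minimal self-contained check (the names clf_linear, clf_custom and clf_gram are introduced here for illustration only):

import numpy as np
from sklearn import svm, datasets

iris = datasets.load_iris()
X = iris.data[:, :2]
Y = iris.target

clf_linear = svm.SVC(kernel='linear').fit(X, Y)
clf_custom = svm.SVC(kernel=lambda A, B: np.dot(A, B.T)).fit(X, Y)
clf_gram = svm.SVC(kernel='precomputed').fit(np.dot(X, X.T), Y)

# all three should agree on the training points (up to numerical ties)
print(np.array_equal(clf_linear.predict(X), clf_custom.predict(X)))
print(np.array_equal(clf_linear.predict(X), clf_gram.predict(np.dot(X, X.T))))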
  2. Multi-class Classification
  • one-vs-one
    1. Train one classifier for every pair of classes, C(n,2) = n_classes * (n_classes - 1) / 2 classifiers in total. To classify an unknown sample, each classifier predicts a class and "casts a vote" for it; the class that collects the most votes becomes the label of the sample.
    2. Several classes may tie on the number of votes, in which case the unknown sample appears to belong to more than one class, which hurts classification accuracy.
clf = svm.SVC(decision_function_shape='ovo')
The decision_function_shape parameter is passed when constructing the SVC object. Note that SVC always trains the one-vs-one classifiers internally; decision_function_shape only controls the shape of the decision_function output (see the shape check sketched below).
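The C(n,2) pairwise classifiers show up in the shape of the decision_function output. A minimal sketch on hypothetical 4-class toy data (the make_blobs parameters are arbitrary):

from sklearn import svm
from sklearn.datasets import make_blobs

# hypothetical 4-class toy data
X4, y4 = make_blobs(n_samples=200, centers=4, random_state=0)

clf_ovo = svm.SVC(kernel='linear', decision_function_shape='ovo').fit(X4, y4)
# C(4,2) = 4*3/2 = 6 pairwise classifiers -> 6 columns
print(clf_ovo.decision_function(X4[:1]).shape)  # (1, 6)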
  • one-vs-rest
    1. Train n_classes classifiers; the i-th classifier takes the points of class i in the training set as the positive class and all remaining points as the negative class. To classify an unknown sample, compute the n_classes outputs f_i = sgn(g_i(x)); if exactly one +1 appears, the sample gets the positive class of that classifier.
    2. If several +1 values appear, compare the g_i(x) values and assign the positive class of the classifier whose g_i(x) is largest (a sketch follows the code below).

clf = svm.SVC(decision_function_shape='ovr')
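Under 'ovr' the decision function exposes one score g_i(x) per class instead of the pairwise votes. Since SVC only reshapes its internal one-vs-one scores, a genuinely one-vs-rest ensemble of n_classes binary SVMs, as described above, can be obtained with sklearn.multiclass.OneVsRestClassifier. A minimal sketch on the same hypothetical 4-class toy data:

from sklearn import svm
from sklearn.datasets import make_blobs
from sklearn.multiclass import OneVsRestClassifier

X4, y4 = make_blobs(n_samples=200, centers=4, random_state=0)

clf_ovr = svm.SVC(kernel='linear', decision_function_shape='ovr').fit(X4, y4)
print(clf_ovr.decision_function(X4[:1]).shape)  # (1, 4): one score per class

# a true one-vs-rest ensemble: four independent binary SVMs
ovr = OneVsRestClassifier(svm.SVC(kernel='linear')).fit(X4, y4)
print(len(ovr.estimators_))  # 4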

Reposted from www.cnblogs.com/LeisureZhao/p/9752747.html