Notes on the SVM Algorithm

1. Motivation

Classifying data is a common task in machine learning. Suppose we are given data points that each belong to one of two classes, and the goal is to decide which class a new data point will belong to. A support vector machine (SVM) is a supervised learning model used to classify data in classification and regression analysis. In the SVM view, a data point is regarded as a p-dimensional vector, and we want to know whether such points can be separated by a (p-1)-dimensional hyperplane.

Given a set of training examples, each marked as belonging to one of the two classes, an SVM training algorithm builds a model that assigns new examples to one class or the other, making it a non-probabilistic binary linear classifier.


2. Algorithm Principles

1. We are given a training set of n points of the form:

$(\vec{x}_1, y_1), \ldots, (\vec{x}_n, y_n),$

where each $y_i$ is either 1 or −1, indicating the class to which the point $\vec{x}_i$ belongs; each $\vec{x}_i$ is a p-dimensional vector. We want to find the "maximum-margin hyperplane" that separates the points with $y_i = 1$ from the points with $y_i = -1$, so that the distance between the hyperplane and the nearest point $\vec{x}_i$ is maximized.

Any hyperplane can be written as the set of points $\vec{x}$ satisfying

$\vec{w} \cdot \vec{x} - b = 0,$

where $\vec{w}$ is the (not necessarily normalized) normal vector to the hyperplane. The parameter $\tfrac{b}{\|\vec{w}\|}$ determines the offset of the hyperplane from the origin along the normal vector $\vec{w}$.

2. Hard margin

If the training data are linearly separable, we can select two parallel hyperplanes that separate the two classes, such that the distance between them is as large as possible. The region bounded by these two hyperplanes is called the "margin", and the maximum-margin hyperplane is the hyperplane that lies halfway between them. These hyperplanes can be described by the equations

$\vec{w} \cdot \vec{x} - b = 1$

or

$\vec{w} \cdot \vec{x} - b = -1.$

Geometrically, the distance between these two hyperplanes is $\tfrac{2}{\|\vec{w}\|}$ (a short derivation follows the constraints below), so to maximize the distance between the planes we want to minimize $\|\vec{w}\|$. At the same time, to keep every sample point outside the margin region, we require that for each $i$ one of the following holds:

$\vec{w} \cdot \vec{x}_i - b \geq 1$, if $y_i = 1$,

or  $\vec{w} \cdot \vec{x}_i - b \leq -1$, if $y_i = -1$.

These constraints state that each data point must lie on the correct side of the margin.
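To see where the $\tfrac{2}{\|\vec{w}\|}$ comes from, take any point $\vec{x}_0$ on the hyperplane $\vec{w} \cdot \vec{x} - b = -1$ and move a distance $t$ along the unit normal $\vec{w}/\|\vec{w}\|$ until the other hyperplane is reached:

$\vec{w} \cdot \left(\vec{x}_0 + t\,\frac{\vec{w}}{\|\vec{w}\|}\right) - b = 1 \;\Longrightarrow\; -1 + t\,\|\vec{w}\| = 1 \;\Longrightarrow\; t = \frac{2}{\|\vec{w}\|}.$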

These two conditions can be combined into the single requirement:

$y_i(\vec{w} \cdot \vec{x}_i - b) \geq 1, \quad \text{for all } 1 \leq i \leq n. \qquad (1)$

Putting this together gives the optimization problem:

"Minimize $\|\vec{w}\|$ subject to $y_i(\vec{w} \cdot \vec{x}_i - b) \geq 1$, for $i = 1, \ldots, n$."

The $\vec{w}$ and $b$ that solve this problem determine our classifier, $\vec{x} \mapsto \operatorname{sgn}(\vec{w} \cdot \vec{x} - b)$.

An obvious but important consequence of this geometric description is that the maximum-margin hyperplane is completely determined by those $\vec{x}_i$ that lie nearest to it. These $\vec{x}_i$ are called support vectors.
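As a minimal sketch of the resulting classifier (the w and b below are made-up values, not learned from data), classifying a point is just evaluating the sign of an affine function:

import numpy as np

w = np.array([2.0, -1.0])   # hypothetical normal vector of the learned hyperplane
b = 0.5                     # hypothetical offset

def classify(x):
    """Hard-margin SVM decision rule: sign(w . x - b)."""
    return np.sign(np.dot(w, x) - b)

print(classify(np.array([1.0, 0.0])))   #  1.0 -> class +1
print(classify(np.array([0.0, 1.0])))   # -1.0 -> class -1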


3. Soft margin

To extend SVM to cases where the data are not linearly separable, we introduce the hinge loss function,

$\max\left(0, 1 - y_i(\vec{w} \cdot \vec{x}_i - b)\right).$

This function is zero when constraint (1) is satisfied, that is, when $\vec{x}_i$ lies on the correct side of the margin. For data on the wrong side of the margin, the function's value is proportional to the distance from the margin. We then want to minimize

$\left[\frac{1}{n}\sum_{i=1}^{n} \max\left(0, 1 - y_i(\vec{w} \cdot \vec{x}_i - b)\right)\right] + \lambda \|\vec{w}\|^2,$

where the parameter $\lambda$ trades off increasing the margin size against ensuring that the $\vec{x}_i$ lie on the correct side of the margin. Thus, for sufficiently small values of $\lambda$, the soft-margin SVM behaves identically to the hard-margin SVM if the input data are linearly separable, but it still learns a viable classification rule when they are not.
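As a quick numeric sketch (the points, labels, w, b, and lambda below are all made-up values for illustration), the soft-margin objective can be evaluated directly:

import numpy as np

X = np.array([[1.0, 2.0], [2.0, -1.0], [-1.0, -1.0]])   # made-up points
y = np.array([1.0, 1.0, -1.0])                           # their labels
w = np.array([0.5, 0.5]); b = 0.0; lam = 0.01            # made-up parameters

margins = y * (X.dot(w) - b)                 # y_i (w . x_i - b)
hinge = np.maximum(0.0, 1.0 - margins)       # per-point hinge loss
objective = hinge.mean() + lam * w.dot(w)    # (1/n) sum hinge + lambda ||w||^2
print(objective)                             # ~0.1717 for these values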


3. Computing the SVM Classifier

Computing the (soft-margin) SVM classifier amounts to minimizing the expression

$\left[\frac{1}{n}\sum_{i=1}^{n} \max\left(0, 1 - y_i(\vec{w} \cdot \vec{x}_i - b)\right)\right] + \lambda \|\vec{w}\|^2. \qquad (2)$

As above, we focus on the soft-margin classifier because choosing a sufficiently small value of $\lambda$ recovers the hard-margin classifier for linearly separable input data. The classical approach of reducing (2) to a quadratic programming problem is detailed below.


Primal

Minimizing (2) can be rewritten as a constrained optimization problem with a differentiable objective function, as follows.

For each $i \in \{1, \ldots, n\}$ we introduce a variable $\zeta_i = \max\left(0, 1 - y_i(\vec{w} \cdot \vec{x}_i - b)\right)$. Note that $\zeta_i$ is the smallest non-negative number satisfying $y_i(\vec{w} \cdot \vec{x}_i - b) \geq 1 - \zeta_i$.

We can therefore restate the optimization problem as follows:

minimize  $\frac{1}{n}\sum_{i=1}^{n} \zeta_i + \lambda \|\vec{w}\|^2$

subject to  $y_i(\vec{w} \cdot \vec{x}_i - b) \geq 1 - \zeta_i$ and $\zeta_i \geq 0$, for all $i$.

This is called the primal problem.
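Before turning to the dual, here is a minimal sketch that minimizes (2) directly by sub-gradient descent rather than the QP route described in this section; the step size, epoch count, lambda, and the tiny dataset are all assumptions chosen for illustration:

import numpy as np

def svm_subgradient(X, y, lam=0.01, lr=0.1, epochs=200):
    """Minimize (1/n) sum_i max(0, 1 - y_i(w.x_i - b)) + lam*||w||^2 by sub-gradient descent."""
    n, p = X.shape
    w = np.zeros(p); b = 0.0
    for _ in range(epochs):
        margins = y * (X.dot(w) - b)
        active = margins < 1                  # points with non-zero hinge loss
        grad_w = -(y[active][:, None] * X[active]).sum(axis=0) / n + 2 * lam * w
        grad_b = y[active].sum() / n          # d/db of 1 - y_i(w.x_i - b) is +y_i
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# tiny made-up dataset: two separable clusters
X = np.array([[2.0, 2.0], [1.5, 2.5], [-2.0, -1.0], [-1.0, -2.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w, b = svm_subgradient(X, y)
print(w, b)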


Dual

By solving for the Lagrangian dual of the above problem, one obtains the simplified problem

maximize  $f(c_1 \ldots c_n) = \sum_{i=1}^{n} c_i - \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} y_i c_i (\vec{x}_i \cdot \vec{x}_j) y_j c_j,$

subject to  $\sum_{i=1}^{n} c_i y_i = 0$, and $0 \leq c_i \leq \frac{1}{2n\lambda}$ for all $i$.

This is called the dual problem. Since the dual maximization problem is a quadratic function of the $c_i$ subject to linear constraints, it can be solved efficiently by quadratic programming algorithms. Here the variables $c_i$ are defined such that

$\vec{w} = \sum_{i=1}^{n} c_i y_i \vec{x}_i.$

Moreover, $c_i = 0$ exactly when $\vec{x}_i$ lies on the correct side of the margin, and $0 < c_i < (2n\lambda)^{-1}$ when $\vec{x}_i$ lies on the margin's boundary. It follows that $\vec{w}$ can be written as a linear combination of the support vectors. The offset $b$ can be recovered by finding an $\vec{x}_i$ on the margin's boundary and solving

$y_i(\vec{w} \cdot \vec{x}_i - b) = 1 \iff b = \vec{w} \cdot \vec{x}_i - y_i.$

(Note that $y_i^{-1} = y_i$ since $y_i = \pm 1$.)
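Given dual coefficients c from any QP solver, recovering w and b takes only a few lines. A sketch (c is assumed to come from a solver, not computed here; the 1e-8 tolerance is an arbitrary numerical cushion, and at least one margin-boundary point is assumed to exist):

import numpy as np

def recover_w_b(X, y, c, lam):
    """Recover w = sum_i c_i y_i x_i and b = w.x_i - y_i from dual coefficients c."""
    n = len(y)
    w = (c * y) @ X                               # linear combination of the support vectors
    upper = 1.0 / (2 * n * lam)
    on_margin = (c > 1e-8) & (c < upper - 1e-8)   # 0 < c_i < 1/(2 n lam)
    i = np.flatnonzero(on_margin)[0]              # any margin-boundary point works
    b = X[i] @ w - y[i]                           # from y_i (w . x_i - b) = 1
    return w, b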

Kernel trick

Suppose now that we want to learn a nonlinear classification rule that corresponds to a linear classification rule for the transformed data points $\varphi(\vec{x}_i)$. Moreover, we are given a kernel function $k$ satisfying $k(\vec{x}_i, \vec{x}_j) = \varphi(\vec{x}_i) \cdot \varphi(\vec{x}_j)$.

We know that the classification vector $\vec{w}$ in the transformed space satisfies

$\vec{w} = \sum_{i=1}^{n} c_i y_i \varphi(\vec{x}_i),$

where the $c_i$ can be obtained by solving the optimization problem

maximize  $f(c_1 \ldots c_n) = \sum_{i=1}^{n} c_i - \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} y_i c_i (\varphi(\vec{x}_i) \cdot \varphi(\vec{x}_j)) y_j c_j$
          $= \sum_{i=1}^{n} c_i - \frac{1}{2}\sum_{i=1}^{n}\sum_{j=1}^{n} y_i c_i \, k(\vec{x}_i, \vec{x}_j) \, y_j c_j$

subject to  $\sum_{i=1}^{n} c_i y_i = 0$, and $0 \leq c_i \leq \frac{1}{2n\lambda}$ for all $i$.

As before, the coefficients $c_i$ can be found by quadratic programming. Again, we can find an index $i$ with $0 < c_i < (2n\lambda)^{-1}$, so that $\varphi(\vec{x}_i)$ lies on the boundary of the margin in the transformed space, and then solve

$b = \vec{w} \cdot \varphi(\vec{x}_i) - y_i = \left[\sum_{k=1}^{n} c_k y_k \, k(\vec{x}_k, \vec{x}_i)\right] - y_i.$

Finally, new points can be classified by computing

$\vec{z} \mapsto \operatorname{sgn}(\vec{w} \cdot \varphi(\vec{z}) - b) = \operatorname{sgn}\left(\left[\sum_{i=1}^{n} c_i y_i \, k(\vec{x}_i, \vec{z})\right] - b\right).$
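In code, the kernelized prediction looks roughly like this (a sketch assuming an RBF kernel exp(-||x - z||^2 / sigma^2), the same convention as kernelTrans in the listing below; note that the listing carries b with the opposite sign, writing "+ b"):

import numpy as np

def rbf_kernel(x, z, sigma):
    d = x - z
    return np.exp(-d.dot(d) / sigma**2)   # matches kernelTrans below

def predict(z, X_sv, y_sv, c_sv, b, sigma):
    """Classify z as sgn(sum_i c_i y_i k(x_i, z) - b), summing over support vectors only."""
    s = sum(c * y * rbf_kernel(x, z, sigma) for x, y, c in zip(X_sv, y_sv, c_sv))
    return np.sign(s - b)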


4. Problem Description and Data Source

We apply the SVM algorithm to handwritten digit recognition.

There are 402 txt files of training data; each file is a binary image made up of 1s and 0s (read as 32×32 characters by img2vector below), and each image is one handwritten digit. There are 186 txt files of test data, in the same format as the training data. Note that the code below builds a binary classifier: loadImages labels the digit 9 as −1 and every other digit as +1.

5. Code

from numpy import *

#6-1: helpers -- load a 2-D dataset, pick a random second alpha, clip alpha to [L, H]
def loadDataSet(fileName):
    dataMat = []; labelMat = []
    fr = open(fileName)
    for line in fr.readlines():
        lineArr = line.strip().split('\t')
        dataMat.append([float(lineArr[0]), float(lineArr[1])])
        labelMat.append(float(lineArr[2]))
    return dataMat,labelMat

def selectJrand(i,m):
    j=i #we want to select any J not equal to i
    while (j==i):
        j = int(random.uniform(0,m))
    return j

def clipAlpha(aj,H,L):
    if aj > H: 
        aj = H
    if L > aj:
        aj = L
    return aj

#6-2: simplified SMO (second alpha of each pair chosen at random)
def smoSimple(dataMatIn, classLabels, C, toler, maxIter):
    dataMatrix = mat(dataMatIn); labelMat = mat(classLabels).transpose()
    b = 0; m,n = shape(dataMatrix)
    alphas = mat(zeros((m,1)))
    iter = 0
    while (iter < maxIter):
        alphaPairsChanged = 0
        for i in range(m):
            fXi = float(multiply(alphas,labelMat).T*(dataMatrix*dataMatrix[i,:].T)) + b
            Ei = fXi - float(labelMat[i])#if checks if an example violates KKT conditions
            if ((labelMat[i]*Ei < -toler) and (alphas[i] < C)) or ((labelMat[i]*Ei > toler) and (alphas[i] > 0)):
                j = selectJrand(i,m)
                fXj = float(multiply(alphas,labelMat).T*(dataMatrix*dataMatrix[j,:].T)) + b
                Ej = fXj - float(labelMat[j])
                alphaIold = alphas[i].copy(); alphaJold = alphas[j].copy();
                if (labelMat[i] != labelMat[j]):
                    L = max(0, alphas[j] - alphas[i])
                    H = min(C, C + alphas[j] - alphas[i])
                else:
                    L = max(0, alphas[j] + alphas[i] - C)
                    H = min(C, alphas[j] + alphas[i])
                if L==H: print( "L==H"); continue
                eta = 2.0 * dataMatrix[i,:]*dataMatrix[j,:].T - dataMatrix[i,:]*dataMatrix[i,:].T - dataMatrix[j,:]*dataMatrix[j,:].T
                if eta >= 0: print( "eta>=0"); continue
                alphas[j] -= labelMat[j]*(Ei - Ej)/eta
                alphas[j] = clipAlpha(alphas[j],H,L)
                if (abs(alphas[j] - alphaJold) < 0.00001): print("j not moving enough") ; continue
                alphas[i] += labelMat[j]*labelMat[i]*(alphaJold - alphas[j])#update i by the same amount as j
                                                                        #the update is in the opposite direction
                b1 = b - Ei- labelMat[i]*(alphas[i]-alphaIold)*dataMatrix[i,:]*dataMatrix[i,:].T - labelMat[j]*(alphas[j]-alphaJold)*dataMatrix[i,:]*dataMatrix[j,:].T
                b2 = b - Ej- labelMat[i]*(alphas[i]-alphaIold)*dataMatrix[i,:]*dataMatrix[j,:].T - labelMat[j]*(alphas[j]-alphaJold)*dataMatrix[j,:]*dataMatrix[j,:].T
                if (0 < alphas[i]) and (C > alphas[i]): b = b1
                elif (0 < alphas[j]) and (C > alphas[j]): b = b2
                else: b = (b1 + b2)/2.0
                alphaPairsChanged += 1
                print( "iter: %d i:%d, pairs changed %d" % (iter,i,alphaPairsChanged))
        if (alphaPairsChanged == 0): iter += 1
        else: iter = 0
        print( "iteration number: %d" % iter)
    return b,alphas



#6-3: support structures and helpers for the full Platt SMO
class optStruct:
    def __init__(self,dataMatIn, classLabels, C, toler, kTup):  # Initialize the structure with the parameters 
        self.X = dataMatIn
        self.labelMat = classLabels
        self.C = C
        self.tol = toler
        self.m = shape(dataMatIn)[0]
        self.alphas = mat(zeros((self.m,1)))
        self.b = 0
        self.eCache = mat(zeros((self.m,2))) #first column is valid flag
        self.K = mat(zeros((self.m,self.m)))
        for i in range(self.m):
            self.K[:,i] = kernelTrans(self.X, self.X[i,:], kTup)
        
def calcEk(oS, k):
    fXk = float(multiply(oS.alphas,oS.labelMat).T*oS.K[:,k] + oS.b)
    Ek = fXk - float(oS.labelMat[k])
    return Ek
        
def selectJ(i, oS, Ei):         #this is the second-choice heuristic, and calcs Ej
    maxK = -1; maxDeltaE = 0; Ej = 0
    oS.eCache[i] = [1,Ei]  #set valid #choose the alpha that gives the maximum delta E
    validEcacheList = nonzero(oS.eCache[:,0].A)[0]
    if (len(validEcacheList)) > 1:
        for k in validEcacheList:   #loop through valid Ecache values and find the one that maximizes delta E
            if k == i: continue #don't calc for i, waste of time
            Ek = calcEk(oS, k)
            deltaE = abs(Ei - Ek)
            if (deltaE > maxDeltaE):
                maxK = k; maxDeltaE = deltaE; Ej = Ek
        return maxK, Ej
    else:   #in this case (first time around) we don't have any valid eCache values
        j = selectJrand(i, oS.m)
        Ej = calcEk(oS, j)
    return j, Ej

def updateEk(oS, k):#after any alpha has changed update the new value in the cache
    Ek = calcEk(oS, k)
    oS.eCache[k] = [1,Ek]
     
#6-4: optimization step for one alpha pair (inner loop of the full Platt SMO)
def innerL(i, oS):
    Ei = calcEk(oS, i)
    if ((oS.labelMat[i]*Ei < -oS.tol) and (oS.alphas[i] < oS.C)) or ((oS.labelMat[i]*Ei > oS.tol) and (oS.alphas[i] > 0)):
        j,Ej = selectJ(i, oS, Ei) #this has been changed from selectJrand
        alphaIold = oS.alphas[i].copy(); alphaJold = oS.alphas[j].copy();
        if (oS.labelMat[i] != oS.labelMat[j]):
            L = max(0, oS.alphas[j] - oS.alphas[i])
            H = min(oS.C, oS.C + oS.alphas[j] - oS.alphas[i])
        else:
            L = max(0, oS.alphas[j] + oS.alphas[i] - oS.C)
            H = min(oS.C, oS.alphas[j] + oS.alphas[i])
        if L==H: print( "L==H"); return 0
        eta = 2.0 * oS.K[i,j] - oS.K[i,i] - oS.K[j,j] #changed for kernel
        if eta >= 0: print( "eta>=0"); return 0
        oS.alphas[j] -= oS.labelMat[j]*(Ei - Ej)/eta
        oS.alphas[j] = clipAlpha(oS.alphas[j],H,L)
        updateEk(oS, j) #added this for the Ecache
        if (abs(oS.alphas[j] - alphaJold) < 0.00001): print( "j not moving enough"); return 0
        oS.alphas[i] += oS.labelMat[j]*oS.labelMat[i]*(alphaJold - oS.alphas[j])#update i by the same amount as j
        updateEk(oS, i) #added this for the Ecache                    #the update is in the opposite direction
        b1 = oS.b - Ei- oS.labelMat[i]*(oS.alphas[i]-alphaIold)*oS.K[i,i] - oS.labelMat[j]*(oS.alphas[j]-alphaJold)*oS.K[i,j]
        b2 = oS.b - Ej- oS.labelMat[i]*(oS.alphas[i]-alphaIold)*oS.K[i,j]- oS.labelMat[j]*(oS.alphas[j]-alphaJold)*oS.K[j,j]
        if (0 < oS.alphas[i]) and (oS.C > oS.alphas[i]): oS.b = b1
        elif (0 < oS.alphas[j]) and (oS.C > oS.alphas[j]): oS.b = b2
        else: oS.b = (b1 + b2)/2.0
        return 1
    else: return 0

#6-5: outer loop of the full Platt SMO
def smoP(dataMatIn, classLabels, C, toler, maxIter,kTup=('lin', 0)):    #full Platt SMO
    oS = optStruct(mat(dataMatIn),mat(classLabels).transpose(),C,toler, kTup)
    iter = 0
    entireSet = True; alphaPairsChanged = 0
    while (iter < maxIter) and ((alphaPairsChanged > 0) or (entireSet)):
        alphaPairsChanged = 0
        if entireSet:   #go over all
            for i in range(oS.m):        
                alphaPairsChanged += innerL(i,oS)
                print( "fullSet, iter: %d i:%d, pairs changed %d" % (iter,i,alphaPairsChanged))
            iter += 1
        else:#go over non-bound (railed) alphas
            nonBoundIs = nonzero((oS.alphas.A > 0) * (oS.alphas.A < C))[0]
            for i in nonBoundIs:
                alphaPairsChanged += innerL(i,oS)
                print( "non-bound, iter: %d i:%d, pairs changed %d" % (iter,i,alphaPairsChanged))
            iter += 1
        if entireSet: entireSet = False #toggle entire set loop
        elif (alphaPairsChanged == 0): entireSet = True  
        print( "iteration number: %d" % iter)
    return oS.b,oS.alphas

#6-6: kernel transformations (linear and RBF)
def kernelTrans(X, A, kTup): #calc the kernel or transform data to a higher dimensional space
    m,n = shape(X)
    K = mat(zeros((m,1)))
    if kTup[0]=='lin': K = X * A.T   #linear kernel
    elif kTup[0]=='rbf':
        for j in range(m):
            deltaRow = X[j,:] - A
            K[j] = deltaRow*deltaRow.T
        K = exp(K/(-1*kTup[1]**2)) #divide in NumPy is element-wise not matrix like Matlab
    else: raise NameError('Houston We Have a Problem -- \
    That Kernel is not recognized')
    return K

def calcWs(alphas,dataArr,classLabels):
    X = mat(dataArr); labelMat = mat(classLabels).transpose()
    m,n = shape(X)
    w = zeros((n,1))
    for i in range(m):
        w += multiply(alphas[i]*labelMat[i],X[i,:].T)
    return w

#6-8: test the RBF kernel on a 2-D dataset
def testRbf(k1=1.3):
    dataArr,labelArr = loadDataSet('testSetRBF.txt')
    b,alphas = smoP(dataArr, labelArr, 200, 0.0001, 10000, ('rbf', k1)) #C=200 important
    datMat=mat(dataArr); labelMat = mat(labelArr).transpose()
    svInd=nonzero(alphas.A>0)[0]
    sVs=datMat[svInd] #get matrix of only support vectors
    labelSV = labelMat[svInd];
    print( "there are %d Support Vectors" % shape(sVs)[0])
    m,n = shape(datMat)
    errorCount = 0
    for i in range(m):
        kernelEval = kernelTrans(sVs,datMat[i,:],('rbf', k1))
        predict=kernelEval.T * multiply(labelSV,alphas[svInd]) + b
        if sign(predict)!=sign(labelArr[i]): errorCount += 1
    print( "the training error rate is: %f" % (float(errorCount)/m))
    dataArr,labelArr = loadDataSet('testSetRBF2.txt')
    errorCount = 0
    datMat=mat(dataArr); labelMat = mat(labelArr).transpose()
    m,n = shape(datMat)
    for i in range(m):
        kernelEval = kernelTrans(sVs,datMat[i,:],('rbf', k1))
        predict=kernelEval.T * multiply(labelSV,alphas[svInd]) + b
        if sign(predict)!=sign(labelArr[i]): errorCount += 1    
    print( "the test error rate is: %f" % (float(errorCount)/m))    
    
def img2vector(filename):
    returnVect = zeros((1,1024))
    fr = open(filename)
    for i in range(32):
        lineStr = fr.readline()
        for j in range(32):
            returnVect[0,32*i+j] = int(lineStr[j])
    return returnVect

#6-9: handwritten-digit classification with the RBF kernel
def loadImages(dirName):
    from os import listdir
    hwLabels = []
    trainingFileList = listdir(dirName)           #load the training set
    m = len(trainingFileList)
    trainingMat = zeros((m,1024))
    for i in range(m):
        fileNameStr = trainingFileList[i]
        fileStr = fileNameStr.split('.')[0]     #take off .txt
        classNumStr = int(fileStr.split('_')[0])
        if classNumStr == 9: hwLabels.append(-1)
        else: hwLabels.append(1)
        trainingMat[i,:] = img2vector('%s/%s' % (dirName, fileNameStr))
    return trainingMat, hwLabels    

def testDigits(kTup=('rbf', 10)):
    dataArr,labelArr = loadImages('trainingDigits')
    b,alphas = smoP(dataArr, labelArr, 200, 0.0001, 10000, kTup)
    datMat=mat(dataArr); labelMat = mat(labelArr).transpose()
    svInd=nonzero(alphas.A>0)[0]
    sVs=datMat[svInd] 
    labelSV = labelMat[svInd];
    print( "there are %d Support Vectors" % shape(sVs)[0])
    m,n = shape(datMat)
    errorCount = 0
    for i in range(m):
        kernelEval = kernelTrans(sVs,datMat[i,:],kTup)
        predict=kernelEval.T * multiply(labelSV,alphas[svInd]) + b
        if sign(predict)!=sign(labelArr[i]): errorCount += 1
    print( "the training error rate is: %f" % (float(errorCount)/m))
    dataArr,labelArr = loadImages('testDigits')
    errorCount = 0
    datMat=mat(dataArr); labelMat = mat(labelArr).transpose()
    m,n = shape(datMat)
    for i in range(m):
        kernelEval = kernelTrans(sVs,datMat[i,:],kTup)
        predict=kernelEval.T * multiply(labelSV,alphas[svInd]) + b
        if sign(predict)!=sign(labelArr[i]): errorCount += 1    
    print( "the test error rate is: %f" % (float(errorCount)/m)) 

6. Results


The error rate on the test data is 1.6%, i.e. the accuracy is quite high.

7. Summary

Compared with the KNN approach used earlier for handwritten-digit recognition, the SVM needs to retain far fewer samples (only the support vectors), which greatly reduces memory usage.





Reposted from blog.csdn.net/dumorgan/article/details/80385424