Machine Learning: the Principle, Steps, and Code Implementation of the Perceptron Algorithm

1. Principle of the Perceptron Algorithm

Given two linearly separable pattern classes ω1 and ω2, take a linear discriminant function

                  d(X) = W^T X

where X is the pattern in augmented vector form and W is the weight vector, with

                  d(X) > 0 if X ∈ ω1,    d(X) < 0 if X ∈ ω2.

Normalize the samples, i.e., multiply all samples of class ω2 by (-1); then a correct weight vector satisfies, for every training sample,

                  W^T X > 0.

The perceptron learning algorithm starts from a training set with known class labels and finds a weight vector W that satisfies this condition.
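As a concrete illustration, here is a minimal sketch of the augmentation and normalization step in NumPy. The sample values are made up for the example, and the bias component is appended as +1 at the end of each vector; the code later in this post instead prepends -1, which is an equivalent convention up to the ordering and sign of W.

```python
import numpy as np

# Toy samples, one row per pattern (hypothetical values)
omega1 = np.array([[1.0, 0.0], [0.0, 1.0]])     # class omega_1
omega2 = np.array([[-1.0, -1.0], [0.0, -2.0]])  # class omega_2

def augment(samples):
    # Append a constant bias component to obtain augmented vectors
    bias = np.ones((samples.shape[0], 1))
    return np.hstack((samples, bias))

# Normalization: multiply the omega_2 samples by -1, so that a correct
# weight vector W satisfies W^T x > 0 for EVERY training sample
X = np.vstack((augment(omega1), -augment(omega2)))
print(X)
```

After this step, a single inequality (W^T x > 0) characterizes correct classification for all rows of X, which is what makes the simple update rule below possible.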

2. Algorithm Steps

(1) Select N training samples belonging to classes ω1 and ω2 to form the training set {X1, ..., XN}, write them in augmented vector form, and normalize them. Choose an arbitrary initial weight vector W(1) and start iterating with k = 1.

(2) In one iteration, run through all the training samples. For each sample X_i, compute W^T(k) X_i and correct the weight vector according to two cases:

    If W^T(k) X_i ≤ 0, the classifier misclassifies the i-th pattern, and the weight vector is corrected to W(k+1) = W(k) + C X_i, where C is a positive correction increment.

    If W^T(k) X_i > 0, the classification is correct and the weight vector stays unchanged: W(k+1) = W(k).

The two cases can be written uniformly as:

      W(k+1) = W(k) + C X_i    if W^T(k) X_i ≤ 0,
      W(k+1) = W(k)            if W^T(k) X_i > 0.

(3) Check the result: as long as any sample is still misclassified, return to step (2); the algorithm stops once all samples are classified correctly.

  The perceptron algorithm is a reward-and-punishment process:

    When a sample is classified correctly, the weight vector is "rewarded", here meaning "no punishment": it is left unchanged;

    When a sample is misclassified, the weight vector is "punished": it is modified so that it turns toward the correct direction.
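The reward-and-punishment rule can be sketched as a single update step. The sample values here are made up, and the correction increment C is taken to be 1, as in the code below.

```python
import numpy as np

C = 1.0                         # correction increment
W = np.zeros(3)                 # current weight vector W(k)
x = np.array([1.0, 2.0, -1.0])  # a normalized training sample (toy values)

if W @ x <= 0:
    # "Punishment": the sample is misclassified, so correct the weights
    W = W + C * x               # W(k+1) = W(k) + C * X_i
else:
    # "Reward" (here: no punishment): leave W unchanged
    pass

print(W @ x)  # after the correction, W^T x is pushed toward positive
```

A single correction does not guarantee that every other sample stays correctly classified, which is why the full algorithm keeps sweeping over the training set until no correction is needed.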

  3. Code Example

# Perceptron algorithm
import numpy as np
import matplotlib.pyplot as plt

X0 = np.array([[1, 0],
               [0, 1],
               [2, 0],
               [2, 2]])
X1 = np.array([[-1, -1],
               [-1, 0],
               [-2, -1],
               [0, -2]])

# Turn the sample data into augmented vectors (bias component -1)
ones = -np.ones((X0.shape[0], 1))
X0 = np.hstack((ones, X0))
ones = -np.ones((X1.shape[0], 1))
X1 = np.hstack((ones, X1))

# Normalize the samples: one class is multiplied by -1 so that
# W^T x > 0 should hold for every row of X
X = np.vstack((-X0, X1))
plt.grid()
plt.scatter(X0[:, 1], X0[:, 2], c='r', marker='o', s=500)
plt.scatter(X1[:, 1], X1[:, 2], c='g', marker='*', s=500)
W = np.ones((X.shape[1], 1))

# Keep sweeping over the samples until no correction is needed
flag = True
while flag:
    flag = False
    for i in range(len(X)):
        x = X[i, :].reshape(-1, 1)
        if np.dot(W.T, x) <= 0:   # misclassified: punish
            W = W + x             # correction with C = 1
            flag = True

# Draw the decision boundary -W[0] + W[1]*x1 + W[2]*x2 = 0 over x1 in [-2, 2]
p1 = [-2.0, 2.0]
p2 = [(W[0] + 2 * W[1]) / W[2], (W[0] - 2 * W[1]) / W[2]]
plt.plot(p1, p2)
plt.show()

Output:

(The script shows a scatter plot of the two sample classes with the learned separating line drawn between them.)

Origin www.cnblogs.com/lsm-boke/p/12213023.html