Perceptron and BP neural network applications

Disclaimer: This is an original article by the blogger, distributed under the CC 4.0 BY-SA license. When reproducing it, please attach the original source link and this statement.
Link: https://blog.csdn.net/WWQ0726/article/details/101697953

I. Perceptron
1. newp -- generating a network
Call format: net = newp(pr, s, tf, lf)
Description: generates a perceptron that can solve linearly separable problems
Parameters: pr: an R x 2 matrix giving the minimum and maximum values of the R input variables
s: the number of neurons; tf: the transfer function, either 'hardlim' or 'hardlims' (default 'hardlim'); lf: the learning function, either 'learnp' or 'learnpn' (default 'learnp')
net: the generated perceptron
Note: the generated network's initial weights and thresholds are all 0; to change the initial weights and thresholds, use assignments of the form
net.IW{...} = {...}; net.b{...} = {...}
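The behavior described above can be sketched outside MATLAB as well. The following is a hypothetical minimal Python/NumPy analogue of what newp sets up (zero-initialized weights and thresholds, changeable by assignment); it is for illustration only, not the toolbox itself:

```python
import numpy as np

def newp(r, s=1):
    """Create a minimal perceptron-like structure for r inputs and s neurons.
    Weights IW and biases b start at zero, mirroring newp's defaults.
    (Hypothetical Python analogue for illustration, not the MATLAB toolbox.)"""
    return {"IW": np.zeros((s, r)), "b": np.zeros(s)}

net = newp(2)
print(net["IW"].tolist(), net["b"].tolist())  # [[0.0, 0.0]] [0.0]

# Changing the initial weights and thresholds by assignment,
# analogous to net.IW{...} = {...} and net.b{...} = {...} in MATLAB:
net["IW"] = np.array([[-1.0, 1.0]])
net["b"] = np.array([0.5])
```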
2. init -- network initialization
Call format: net = init(net)
Description: restores the network's weights and thresholds to their original values, or assigns the network random weights and thresholds
Entering the following commands makes the network's weights and thresholds random numbers:
net.inputweights{1,1}.initFcn = 'rands'; net.biases{1}.initFcn = 'rands'; net = init(net);
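The effect of setting the initialization function to 'rands' can be sketched as follows; this is a hypothetical Python analogue (the function name init_rands and the dict-based net structure are assumptions for illustration):

```python
import numpy as np

def init_rands(net, seed=None):
    """Assign random weights and thresholds in [-1, 1], mimicking what
    setting initFcn to 'rands' and then calling init(net) does.
    (Hypothetical minimal analogue, not the MATLAB toolbox.)"""
    rng = np.random.default_rng(seed)
    net["IW"] = rng.uniform(-1.0, 1.0, size=net["IW"].shape)
    net["b"] = rng.uniform(-1.0, 1.0, size=net["b"].shape)
    return net

net = {"IW": np.zeros((1, 2)), "b": np.zeros(1)}
net = init_rands(net, seed=0)
print(net["IW"].shape, net["b"].shape)  # (1, 2) (1,)
```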
3. sim -- neural network simulation
Call format: Y = sim(net, P);
Description: simulates the neural network; used to verify the effect of network training
Parameters: net: the generated neural network; P: network input; Y: network output
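For a single-layer perceptron, simulation amounts to computing hardlim(IW*P + b) for each input column. A minimal Python sketch (the dict-based net structure is an assumption, not the toolbox's representation):

```python
import numpy as np

def sim(net, P):
    """Simulate a single-layer perceptron: Y = hardlim(IW * P + b).
    (Minimal Python analogue of the toolbox's sim for a perceptron;
    net is a dict with weight matrix 'IW' and bias vector 'b'.)"""
    n = net["IW"] @ P + net["b"][:, None]   # weighted input for each column of P
    return (n >= 0).astype(int)             # hard-limit transfer function

# A hand-set perceptron that computes the logical OR of its two inputs:
net = {"IW": np.array([[1.0, 1.0]]), "b": np.array([-0.5])}
P = np.array([[0, 0, 1, 1],
              [0, 1, 0, 1]], dtype=float)
print(sim(net, P).tolist())  # [[0, 1, 1, 1]]
```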
4. learnp -- the learning rule
The perceptron learning rule is as follows: let the input vector be p, the corresponding desired output be t, and the corresponding network output be a. The difference e = t - a between the desired value and the actual output value is called the error. The weight and threshold correction formulas of the perceptron are dw(i,j) = [t(i) - a(i)] * p(j) = e(i) * p(j) and db(i) = e(i), where i = 1, 2, ..., S and j = 1, 2, ..., R. The new weights and thresholds are w(i,j) = w(i,j) + dw(i,j) and b(i) = b(i) + db(i).
Call format: dW = learnp(W, P, Z, N, A, T, E, D, gW, gA, LP, LS)
dW: the weight or threshold delta matrix; W: the weight matrix or threshold vector; P: the input vector; T: the target vector; E: the error vector; the other arguments can be ignored and passed as []
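The update formula dw(i,j) = e(i) * p(j) is the outer product of the error vector and the input vector. A minimal Python sketch of the rule (the toolbox's extra arguments are omitted; this is an illustration, not the MATLAB implementation):

```python
import numpy as np

def learnp(P, E):
    """Perceptron learning rule: dW(i,j) = e(i) * p(j), i.e. the outer
    product of the error vector E = t - a with the input vector P.
    (Simplified sketch of the learnp rule; toolbox arguments omitted.)"""
    return np.outer(E, P)

# One update step for a single neuron (S = 1) with two inputs (R = 2):
W = np.zeros((1, 2))
P = np.array([1.0, 0.0])
t, a = 1, 0                  # desired vs. actual output
E = np.array([t - a])        # error e = t - a
W = W + learnp(P, E)         # new weights: w + dW
print(W.tolist())            # [[1.0, 0.0]]
```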
5. train -- training the network
Repeatedly using sim() and learnp() to adjust the perceptron's weights and thresholds until the network's output error meets the requirement, and thereby finding the optimal weights and thresholds, is a process called network training. The function that performs network training is train().
Call format: [net, tr, Y, E] = train(net, p, t)
Description: trains the network to obtain the optimal weights and thresholds
Parameters:
Input: net: the generated neural network; p: the input vectors of the neural network; t: the desired outputs of the neural network
Output: net: the neural network after training; tr: the training record; Y: the network output; E: the error
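The simulate-then-correct loop described above can be sketched in Python as follows. This is a minimal illustration of the training process for a perceptron under the learnp rule, not the MATLAB train() function itself (the epoch limit and early stop are assumptions):

```python
import numpy as np

def hardlim(n):
    """Hard-limit transfer function: 1 where n >= 0, else 0."""
    return (n >= 0).astype(int)

def train(w, b, P, T, max_epochs=10):
    """Repeatedly simulate the perceptron and apply the learnp rule until
    the output error is zero -- a minimal sketch of perceptron training.
    P: (R, Q) input vectors as columns, T: (Q,) targets."""
    for _ in range(max_epochs):
        total_error = 0
        for q in range(P.shape[1]):
            p = P[:, q]
            a = 1 if w @ p + b >= 0 else 0   # simulate: a = hardlim(w.p + b)
            e = T[q] - a                     # error e = t - a
            w = w + e * p                    # dw = e * p
            b = b + e                        # db = e
            total_error += abs(e)
        if total_error == 0:                 # stop once every sample is correct
            break
    return w, b

# Train on logical AND: four 2-D input vectors and their targets
P = np.array([[0, 0, 1, 1],
              [0, 1, 0, 1]], dtype=float)
T = np.array([0, 0, 0, 1])
w, b = train(np.zeros(2), 0.0, P, T)
print(hardlim(w @ P + b).tolist())  # [0, 0, 0, 1]
```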
View and verify the training results:
>> net.IW{1,1}   % view the weights of the trained neural network
ans = -2 -3
>> net.b{1}   % view the threshold of the trained neural network
ans = 1
>> a = sim(net, p)   % view the training results
a = 0 1 0 1
>> plotpv(p, t)   % plot the training data in a coordinate system
>> plotpc(net.IW{1,1}, net.b{1})   % draw the separating line
6. adapt -- adaptive training
Call format: [net, Y, E] = adapt(net, P, T)
net.adaptParam.passes = 3;   % sets the number of passes over the training data
sse(E)   % sum of squared errors, used to judge the error E
[net, Y, E] = adapt(net, P, T);   % adjust the neural network using the input samples
linehandle = plotpc(net.IW{1}, net.b{1}, linehandle);   % redraw the classification line after each adjustment
drawnow;   % flush the graphics update
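Adaptive training adjusts the network sample by sample for a fixed number of passes rather than looping until the error vanishes. A rough Python sketch of this behavior with passes = 3 (an illustration, not the MATLAB adapt() function; the plotting calls are omitted):

```python
import numpy as np

def adapt(w, b, P, T, passes=3):
    """Adjust the perceptron sample by sample for a fixed number of passes,
    a rough sketch of adaptive training with adaptParam.passes = 3.
    (Illustration only, not the MATLAB toolbox.)"""
    for _ in range(passes):
        for q in range(P.shape[1]):
            p = P[:, q]
            a = 1 if w @ p + b >= 0 else 0   # current network output
            e = T[q] - a                     # error e = t - a
            w, b = w + e * p, b + e          # learnp-style update
    return w, b

# Three passes over the logical-OR training set
P = np.array([[0, 0, 1, 1],
              [0, 1, 0, 1]], dtype=float)
T = np.array([0, 1, 1, 1])
w, b = adapt(np.zeros(2), 0.0, P, T, passes=3)
Y = [1 if w @ P[:, q] + b >= 0 else 0 for q in range(P.shape[1])]
print(Y)  # [0, 1, 1, 1]
```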
II. BP neural network
![BP neural network](https://img-blog.csdnimg.cn/20190929165918377.jpg)

III. Results
(result figures not reproduced here)
