MATLAB neural network learning notes

Keywords: single-layer perceptron, error surface, BP neural network, curve fitting

  1. Design a single-layer perceptron network for the sample input vectors P = [-0.5 -0.2 0.1 0.2; 0.7 0.6 0.4 0.8] and target vector T = [1 1 0 0], then use it to classify the new input set P_test = [-0.7 0.3 -0.6 0.1; 0.5 0.4 -0.3 0.6].

P = [-0.5 -0.2 0.1 0.2; 0.7 0.6 0.4 0.8];
T = [1 1 0 0];
P_test = [-0.7 0.3 -0.6 0.1; 0.5 0.4 -0.3 0.6];
plotpv(P,T);                      % scatter plot of the samples
net = newp(minmax(P),1);
Y = sim(net,P);
net.trainParam.epochs = 20;
net = train(net,P,T);
plotpc(net.iw{1},net.b{1})        % decision boundary
Y = sim(net,P_test)               % classify the test set

Y =

 1     0     1     0

[Figure: scatter plot of the samples]
[Figure: training record; the perceptron converges after 6 epochs]
[Figure: decision boundary separating the two classes]
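To make the training step concrete, here is a minimal NumPy sketch of the classical perceptron learning rule that a `newp` network applies: compute y = hardlim(w·p + b) for each sample, and on an error e = t − y update w by e·p and b by e. (This is a plain sequential re-implementation for illustration, not MATLAB's actual batch `train` code, so the epoch count may differ; the data are the P, T, and P_test from the exercise.)

```python
import numpy as np

P = np.array([[-0.5, -0.2, 0.1, 0.2],
              [ 0.7,  0.6, 0.4, 0.8]])
T = np.array([1, 1, 0, 0])
w = np.zeros(2)
b = 0.0
for epoch in range(20):                        # matches trainParam.epochs = 20
    errors = 0
    for i in range(P.shape[1]):
        y = 1 if w @ P[:, i] + b >= 0 else 0   # hardlim activation
        e = T[i] - y
        if e != 0:
            w += e * P[:, i]                   # perceptron weight update
            b += e
            errors += 1
    if errors == 0:                            # all samples correct: converged
        break

P_test = np.array([[-0.7, 0.3, -0.6, 0.1],
                   [ 0.5, 0.4, -0.3, 0.6]])
Y = [1 if w @ P_test[:, i] + b >= 0 else 0 for i in range(4)]
print(Y)  # → [1, 0, 1, 0]
```

The test-set labels agree with the `sim(net,P_test)` output above, even though the learned weights need not match MATLAB's exactly.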
  2. A linear neural network has input P = [1.1 -1.3] and target T = [-0.6 1]. Train it for 500 epochs with a learning rate of 0.01, implement this in MATLAB, and plot the error surface as shown in the figure.

clear all;
P=[1.1 -1.3];
T=[-0.6 1];
net = newlin(minmax(P),1,0,0.01);
net=init(net);
net.trainParam.epochs= 500;
net = train(net,P,T);
A=sim(net,P);
A

A =

-0.6000 1.0000

E=T-A

E =

1.0E-06 *

-0.6827 0.0415

SSE = sumsqr(E)

SSE =

4.6781e-13


wrange= -1:0.1:1;
brange = -1 :0.1:1;
ES = errsurf(P,T,wrange,brange,'purelin');
plotes(wrange,brange,ES);
plotep(net.iw{1,1},net.b{1},SSE);
net.iw{1,1}

[Figure: error surface over the (w, b) grid]
[Figure: trained weight and bias plotted on the error surface]
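What `errsurf` computes can be sketched in a few lines of NumPy: the sum-squared error of the single linear neuron a = w·p + b over a grid of (w, b) values. With two samples and two free parameters the minimum of this surface is the exact solution of w·1.1 + b = −0.6 and w·(−1.3) + b = 1, which is the point the trained network converges to (SSE ≈ 0 above). This is an illustrative re-derivation, not MATLAB's own code:

```python
import numpy as np

P = np.array([1.1, -1.3])
T = np.array([-0.6, 1.0])

# SSE at every (w, b) grid point; purelin means the output is just w*p + b
wrange = np.arange(-1, 1.05, 0.1)
brange = np.arange(-1, 1.05, 0.1)
W, B = np.meshgrid(wrange, brange)
ES = sum((t - (W * p + B)) ** 2 for p, t in zip(P, T))

# Exact minimum: solve the 2x2 linear system [p 1][w; b] = t
A = np.column_stack([P, np.ones(2)])
w_opt, b_opt = np.linalg.solve(A, T)
print(w_opt, b_opt)  # → approximately -0.6667 0.1333
```

At (w_opt, b_opt) the error is exactly zero, which matches the trained network's SSE of about 4.7e-13.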
  3. Design a BP neural network for curve fitting. The input vector is P = -1:0.1:0.9; the output vector is T = [-0.832 -0.423 0.344 -0.024 1.282 3.456 4.02 3.232 2.102 0.248 1.242 2.344 3.262 2.052 1.504 1.684 1.022 2.224 3.022 1.984]. Compare how many training epochs different transfer functions and training functions need to reach the same error goal.

P = [-1 -0.9 -0.8 -0.7 -0.6 -0.5 -0.4 -0.3 -0.2 -0.1 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9];
T = [-0.832 -0.423 0.344 -0.024 1.282 3.456 4.02 3.232 2.102 0.248 1.242 2.344 3.262 2.052 1.504 1.684 1.022 2.224 3.022 1.984];
net = newff(minmax(P),[10,1],{'tansig' 'purelin'},'trainlm');
net.trainParam.epochs = 200;
net.trainParam.goal = 1e-8;
net.trainParam.min_grad = 1e-20;
net.trainParam.show = 200;
net.trainParam.time = inf;
net = train(net,P,T);
P_test = -1:0.1:1;
X = sim(net,P_test);
plot(P_test,X,'bo');
hold on
plot(P,T,'r+');
title('+ actual values, o predicted values')

[Figure: training record (tansig)]
[Figure: fitted curve vs. sample points (tansig)]
With the logsig transfer function instead, about 60 iterations are required for convergence:

net = newff(minmax(P),[10,1],{'logsig' 'purelin'},'trainlm');
net = train(net,P,T);
[Figure: training record (logsig)]
[Figure: fitted curve vs. sample points (logsig)]
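The two hidden-layer transfer functions being compared are closely related, which is why either can drive the 10-neuron fitting network: tansig(x) = tanh(x) maps to (−1, 1), logsig(x) = 1/(1+e^(−x)) maps to (0, 1), and logsig(x) = (tansig(x/2) + 1)/2. A small NumPy sketch checks this identity and shows the shape of the 1-10-1 forward pass (random weights purely for illustration; training would tune them):

```python
import numpy as np

def tansig(x):
    return np.tanh(x)            # MATLAB's tansig is tanh

def logsig(x):
    return 1.0 / (1.0 + np.exp(-x))

# Identity relating the two transfer functions
x = np.linspace(-5, 5, 11)
assert np.allclose(logsig(x), (tansig(x / 2) + 1) / 2)

# Forward pass of a 1-10-1 network like the newff one above
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(10, 1)), rng.normal(size=(10, 1))
W2, b2 = rng.normal(size=(1, 10)), rng.normal(size=(1, 1))
p = np.linspace(-1, 0.9, 20).reshape(1, -1)   # the 20 sample inputs
a1 = tansig(W1 @ p + b1)       # hidden layer: 10 tansig neurons
y = W2 @ a1 + b2               # output layer: purelin
print(y.shape)                 # → (1, 20): one output per sample
```

Because the two functions differ only by this affine rescaling, the network can represent the same curves with either one; what changes in practice is the number of iterations the optimizer needs, as the training records above show.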

Source: blog.csdn.net/yao09605/article/details/84638483