How to use a neural network in MATLAB, and how to build a neural network in MATLAB

1. How to use MATLAB to build a BP neural network

Change the line net=train(net, p, t); to net=train(net, p', t'); and try again: MATLAB's train function expects the data arranged with one sample per column, so row-oriented data usually needs to be transposed. Alternatively, use the graphical interface provided by MATLAB by entering nnstart at the command line.
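A minimal sketch of this point, assuming the data is stored with one sample per row (the sizes and the feedforwardnet call are illustrative; feedforwardnet is the newer replacement for newff in the toolbox):

p = rand(100,3);                % 100 samples stored one per ROW: 3 inputs each
t = sum(p,2);                   % 100x1 column of made-up targets
net = feedforwardnet(10);       % 10 hidden neurons
net = train(net,p',t');         % train wants one sample per COLUMN, hence the transposes
y = sim(net,p');                % simulate with the same column-wise layout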


2. (Urgent) How to use MATLAB to build an ANN (artificial neural network) model?

Description of the problem:
There are two independent variables, one dependent variable, and 10 samples (a small number is used here just for illustration). In practical terms, suppose a stock has an opening price x1, a closing price x2, and a next-day price y. The goal of using a neural network here is to predict future stock prices from the 10 days of opening and closing prices. Clearly y depends on x1 and x2, so we need to train a network (net) that predicts y as accurately as possible.
MATLAB program
clc
clear
load data input output
% input is a 2x10 matrix containing the 10 days of x1 and x2 data (20 numbers in total); output is a 1x10 row vector of y values
% you need to prepare some data and assign it to input and output before running (see the sketch after this program)
P=input;
T=output;
% P and T must be arranged as row vectors: x1 is a row vector, x2 is a row vector, P=[x1;x2]; T=y, where y is a row vector
Epochs=5000;
NodeNum=12; TypeNum=1;
TF1='logsig'; TF2='purelin';
% Set some initial parameters: Epochs is the upper limit on training iterations, NodeNum is the number of hidden-layer neurons, and TypeNum is the number of output-layer neurons. TF1 and TF2 are the transfer functions of the hidden layer and the output layer.
net=newff(minmax(P),[NodeNum TypeNum],{TF1 TF2},'trainlm');
% Build the neural network: newff takes the input ranges, the sizes of the hidden and output layers, the transfer functions, and the training function (trainlm, Levenberg-Marquardt).
net.trainParam.epochs=Epochs;
net.trainParam.goal=1e-4;
net.trainParam.min_grad=1e-4;
net.trainParam.show=200;
net.trainParam.time=inf;
% Set the training parameters: the maximum number of iterations, the error goal, the minimum gradient, the display interval (in iterations), and the maximum training time.
net=train(net,P,T);
% start network training
P_test=P;
B_test=T;
% use the original data for testing
X=sim(net,P_test);
% test
Err=abs(B_test-X);
sigma=std(Err);
% Compute the absolute error between the predicted and actual values and its standard deviation; the standard deviation can later be used to judge how much the predictions fluctuate and to adjust the model.
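The program above expects a file data.mat containing the variables input and output. A minimal sketch for creating synthetic data of the right shape (the numbers are entirely made up, just so the program can be run end to end):

x1 = 10 + rand(1,10);                      % made-up opening prices, 1x10 row vector
x2 = 10 + rand(1,10);                      % made-up closing prices, 1x10 row vector
y  = 0.5*x1 + 0.5*x2 + 0.1*randn(1,10);    % made-up next-day prices, 1x10 row vector
input  = [x1; x2];                         % 2x10 matrix, as the program expects
output = y;                                % 1x10 row vector
save data input output                     % writes data.mat for "load data input output"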

3. How to build an Elman neural network in MATLAB?

t=1:20;
p1=sin(t);
p2=sin(t)*2;
plot(t,p1,'r');
hold on
plot(t,p2,'b--');
hold on
t1=ones(1,20);t2=ones(1,20)*2;% Generate two constant vectors, the amplitudes of the two waveforms, to be used as the target outputs
p=[p1 p2 p1 p2];
t=[t1 t2 t1 t2];
Pseq=con2seq(p);% Convert the matrix-form training samples into sequence (cell array) form
Tseq=con2seq(t);
R=1;% The number of input elements is 1
S2=1;% The number of output-layer neurons is 1
S1=10;% The hidden layer has 10 neurons
net=newelm([-2,2],[S1,S2],{'tansig','purelin'});
net.trainParam.epochs=100;% set the maximum number of training epochs
net=train(net,Pseq,Tseq);
y=sim(net,Pseq);
%prediction
% A second example: compare Elman networks with different hidden-layer sizes
P=randn(12,2); T=randn(4,2); % two training samples: 12 inputs, 4 outputs (made-up data)
p_test=randn(12,1); t_test=randn(4,1); % one test sample
threshold=[0 1;0 1;0 1;0 1;0 1;0 1;0 1;0 1;0 1;0 1;0 1;0 1]; % input ranges, one row per input element
a=[11 17 23]; % hidden-layer sizes to try
for i=1:3
net=newelm(threshold,[a(i),4],{'tansig','purelin'});
net.trainParam.epochs=1000;
net=init(net);
net=train(net,P,T);
y=sim(net,p_test);
err(i,:)=(y-t_test)'; % prediction error for each of the 4 outputs
end
hold off;
plot(1:4,err(1,:));
hold on;
plot(1:4,err(2,:),'-.');
hold on;
plot(1:4,err(3,:),'--');
hold off;
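Note that newelm has been removed from recent releases of the toolbox; elmannet is the rough modern equivalent. A minimal sketch following the pattern of the toolbox documentation (simpleseries_dataset is a sample data set shipped with the toolbox; the layer delays 1:2 and 10 hidden neurons are illustrative choices, not from the original answer):

[X,T] = simpleseries_dataset;          % example time-series data shipped with the toolbox
net = elmannet(1:2,10);                % layer delays 1:2, 10 hidden neurons
[Xs,Xi,Ai,Ts] = preparets(net,X,T);    % shift the data to account for the delays
net = train(net,Xs,Ts,Xi,Ai);
Y = net(Xs,Xi,Ai);                     % simulate the trained network
perf = perform(net,Ts,Y)               % mean squared error on the training data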

4. How to use MATLAB to build a three-layer BP neural network model for predicting temperature

Section 0. Introduction
This article uses Fisher's Iris data set as the test data for the neural network program. The Iris data set can be found at . Here is a brief introduction to the Iris data set:
there is a batch of Iris flowers known to belong to 3 varieties, and they now need to be classified. The sepal length, sepal width, petal length and petal width differ between varieties, and we currently have measurements of these four attributes for a batch of Iris flowers of known variety.
One solution is to use the existing data to train a neural network to serve as a classifier.
If you just want to use C# or Matlab to quickly implement a neural network to solve the problem at hand, or you already understand the basic principles of neural networks, skip directly to the second section, the neural network implementation.
Section 1. Basic principles of neural networks
1. The artificial neuron model
The artificial neuron is the basic element of a neural network. Neuron i receives input signals x1, x2, ..., xn from other neurons; wij denotes the connection weight from neuron j to neuron i, and θ denotes a threshold, or bias. The relationship between the output and the inputs of neuron i can then be written as
net_i = w_i1*x1 + w_i2*x2 + ... + w_in*xn - θ_i,   y_i = f(net_i)
where y_i is the output of neuron i, the function f is called the activation function (or transfer function), and net_i is called the net activation. If the threshold is regarded as the weight w_i0 of an extra input x0 (fixed at -1) of neuron i, the formula simplifies to
net_i = w_i0*x0 + w_i1*x1 + ... + w_in*xn
If X denotes the input vector and W the weight vector, that is X = [x0, x1, x2, ..., xn] and W = [w_i0, w_i1, w_i2, ..., w_in],
then the output of the neuron can be expressed as a vector product: y_i = f(net_i) = f(W*X').
If the net activation net is positive, the neuron is said to be in the activated or excited (firing) state; if the net activation is negative, the neuron is said to be in the inhibited state.
This "threshold weighted sum" neuron model is called the MP model (McCulloch-Pitts Model), and is also known as a processing element (PE) of the neural network.
2. Commonly used activation functions
The choice of activation function is an important step in building a neural network. Commonly used activation functions are briefly introduced below.
(1) Linear function: f(x) = k*x + c, where k and c are constants.
(2) Ramp function: linear within an interval and saturating at constant values outside it.
(3) Threshold function: a step function that outputs one of two constant values depending on whether the input exceeds a threshold.
The above three activation functions are all linear or piecewise linear. Two commonly used nonlinear activation functions are introduced below.
(4) Sigmoid function: f(x) = 1 / (1 + e^(-x)); its derivative is f'(x) = f(x) * (1 - f(x)).
(5) Bipolar sigmoid function: f(x) = (1 - e^(-x)) / (1 + e^(-x)); its derivative is f'(x) = (1 - f(x)^2) / 2.
The curves of the sigmoid function and the bipolar sigmoid function are shown in Figure 3.
Figure 3. The sigmoid function and the bipolar sigmoid function
The main difference between the bipolar sigmoid function and the sigmoid function is the value range: the bipolar sigmoid takes values in (-1,1), while the sigmoid takes values in (0,1).
Since both the sigmoid function and the bipolar sigmoid function are differentiable (their derivatives are continuous functions), they are suitable for use in BP neural networks. (The BP algorithm requires the activation function to be differentiable.)
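The two nonlinear activation functions and their derivatives can be checked numerically with a short sketch (the sigmoid here matches the toolbox's logsig; the bipolar sigmoid is written out directly, since it differs from tansig by a scaling of the argument):

sig   = @(x) 1./(1+exp(-x));             % sigmoid, range (0,1)
dsig  = @(x) sig(x).*(1-sig(x));         % derivative of the sigmoid
bsig  = @(x) (1-exp(-x))./(1+exp(-x));   % bipolar sigmoid, range (-1,1)
dbsig = @(x) (1-bsig(x).^2)/2;           % derivative of the bipolar sigmoid
x = linspace(-5,5,200);
plot(x, sig(x), x, bsig(x));             % compare the two curves
legend('sigmoid','bipolar sigmoid');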

5. Matlab implements neural network 5

tup1 = ('physics', 'chemistry', 1997, 2000);
tup2 = (1, 2, 3, 4, 5 );
tup3 = "a", "b", "c", "d";

6. Use MATLAB to build a BP neural network model (looking for an expert, waiting online)

The MATLAB Neural Network Toolbox provides a series of functions for building and training BP neural network models, and it is hard to cover them all at once. The following example shows the usage of a few of these functions. For more functions and usage, refer to the Neural Network Toolbox help documentation.
Example: Use the bp neural network model to establish a model of z=sin(x+y) and test the effect
% Step 1. Randomly generate 200 sampling points for training
x=unifrnd(-5,5,1,200);
y=unifrnd(-5,5,1,200);
z=sin(x+y);
%Step 2. Build a neural network model. The first parameter is the range of input data, the second parameter is the number of neurons in each layer, and the third parameter is the transfer function type of each layer.
N=newff([-5 5;-5 5],[5,5,1],{'tansig','tansig','purelin'});
% Step 3. Train the network. The batch training function train is used here; the adapt function can also be used for incremental training.
N=train(N,[x;y],z);
%Step 4. Check the training results.
[X,Y]=meshgrid(linspace(-5,5));
Z=sim(N,[X(:),Y(:)]');
figure
mesh(X,Y,reshape(Z,100,100));
hold on;
plot3(x,y,z,'.')
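In newer toolbox releases the input-range form of newff used above is no longer available; roughly the same model can be built with fitnet (a sketch, not part of the original answer; the two hidden layers of 5 neurons mirror the call above):

x = unifrnd(-5,5,1,200);
y = unifrnd(-5,5,1,200);
z = sin(x+y);
net = fitnet([5 5]);           % two hidden layers of 5 neurons; the output layer is added automatically
net = train(net,[x;y],z);      % samples as columns: 2x200 inputs, 1x200 targets
zhat = net([x;y]);             % evaluate the trained network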

7. matlab neural network

Since you ask this question, I assume you already have some understanding of neural networks, so the brief answer is as follows:
The function newff builds a trainable feedforward network.
The function newrb builds a radial basis network.
The function newlvq builds a vector quantized neural network.
If you don’t understand what this neural network is, you’d better find a book on intelligent algorithms.
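For reference, a minimal sketch of what these three constructors look like in the old toolbox syntax (the toy data, layer sizes, error goal, spread and class percentages below are arbitrary illustrative choices):

P = rand(2,50); T = rand(1,50);                        % toy data: 2 inputs, 1 target, 50 samples
net1 = newff(minmax(P),[10 1],{'tansig','purelin'});   % trainable feedforward (BP) network
net2 = newrb(P,T,0.01,1.0);                            % radial basis network: error goal 0.01, spread 1.0
net3 = newlvq(minmax(P),4,[0.6 0.4]);                  % LVQ network: 4 competitive neurons, class percentages 60%/40%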

8. About the BP neural network of matlab:

Newer versions of MATLAB (2010 and later) generally ship with the Neural Network Toolbox included, so there is no separate installation step.
Steps to build the network:
1. Data normalization: the input data is usually called P and the output data T, with one column per sample. The common normalization function is mapminmax:
[pn,ps]=mapminmax(p); [tn,ts]=mapminmax(t)
pn and tn are the normalized data; ps and ts are the normalization settings structures, which are needed later to denormalize the predicted values.
2. Create the network and set its parameters:
net=newff(pn,tn,[ ])
The brackets hold the hidden-layer sizes (the input and output dimensions are taken from pn and tn); the node transfer functions and other options can also be specified here.
net.trainParam.epochs=1000 (maximum number of training iterations)
net.trainParam.goal=0.0001 (training error goal)
net.trainParam.lr=0.1 (learning rate, usually between 0 and 1; too large or too small is not good)
3. Training and analysis:
net=train(net,pn,tn)
an=sim(net,pn)
output=mapminmax('reverse',an,ts) denormalizes the predictions according to the earlier normalization settings.
error=output-t gives the prediction error; you can also use error=sum(abs(output-t)).
Of course you can also plot the results, for example:
plot(p,t,'-o')
hold on
plot(p,output,'-*')
to see whether the predicted values and the actual values agree.
You can also view the MSE and the R-squared value in the dialog box after the neural network training is completed.
There are many ways to improve the accuracy of a neural network. The code above has not been run through MATLAB, but the general process is as described; a consolidated sketch follows below.
Written entirely by hand; I hope you will accept this answer!
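A consolidated sketch of the workflow described in this answer, with made-up data so it can be run end to end (the 10 hidden neurons and the traingdx training function are assumptions; traingdx is chosen because, unlike the default trainlm, it actually uses the learning-rate parameter lr):

p = rand(3,50);                          % made-up inputs: 3 features, 50 samples (one column per sample)
t = sum(p,1);                            % made-up targets: 1x50
[pn,ps] = mapminmax(p);                  % normalize inputs to [-1,1]
[tn,ts] = mapminmax(t);                  % normalize targets
net = newff(pn,tn,10,{'tansig','purelin'},'traingdx');  % 10 hidden neurons (assumed)
net.trainParam.epochs = 1000;
net.trainParam.goal = 0.0001;
net.trainParam.lr = 0.1;
net = train(net,pn,tn);
an = sim(net,pn);
output = mapminmax('reverse',an,ts);     % denormalize predictions with the earlier settings
err = output - t;                        % prediction error in the original units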


Origin blog.csdn.net/wenangou/article/details/127404384