How do you evaluate a neural network's training results, and how does a neural network make predictions?

How to use a trained neural network to make predictions


How can a neural network be used for regression prediction of continuous variables?

The neural network is a long-established machine learning model, but for many years its training time compared unfavorably with other models and its results were unsatisfactory, so it was not widely used.

However, with deeper mathematical research and improvements in computer hardware, especially the emergence of the GPU, the foundations were laid for the wide application of deep learning.

The GPU was originally designed to give gamers a high-quality visual experience, but because it excels at matrix operations it is also used for model training in deep learning. A model that once took dozens of days to train can now be trained on a GPU in about a day, which greatly reduces training time, so the applications of deep learning keep growing.

The neural network is the most important model in deep learning. The artificial neural network (ANN) is the most basic neural network structure, and its working principle is loosely analogous to the neurons in the human brain.

The neuron is the working unit of an ANN. Each neuron holds weights and a bias: it takes the values passed from the previous layer's neurons, computes a new result through the weight and bias calculation, and passes that result on to the next layer's neurons. Through this layer-by-layer transmission, the final output is obtained.
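The weight-and-bias calculation of a single neuron can be sketched in plain Python; the weights, bias, and sigmoid activation below are illustrative choices, not values from the text:

```python
import math

def neuron(inputs, weights, bias):
    """One neuron: weighted sum of the inputs plus a bias, then a sigmoid activation."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid squashes the result into (0, 1)

# Values from a previous layer flow through two neurons of the next layer.
prev_layer = [0.5, -1.2, 3.0]
layer_out = [
    neuron(prev_layer, [0.4, 0.1, -0.2], bias=0.1),
    neuron(prev_layer, [-0.3, 0.8, 0.05], bias=-0.2),
]
```

`layer_out` would then be fed to the following layer in the same way, which is the "continuous transmission" described above.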

To use a neural network for regression prediction of continuous variables, the N-dimensional variable data is used as the input, and the hidden layers and the number of neurons in each layer are set in between. The number of hidden layers usually has to be found through repeated training runs.

However, there will be an error between the values of the final output layer and the actual variable values. The neural network trains iteratively, adjusting its weights and biases to make this error as small as possible. When the error is sufficiently small, the network's regression prediction can be considered successful.
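This error-shrinking loop can be illustrated with a minimal gradient-descent sketch for a single linear neuron; the toy data, learning rate, and epoch count are made up for illustration:

```python
# Toy data that follows y = 2x + 1; training should recover w close to 2 and b close to 1.
data = [(x, 2 * x + 1) for x in [0.0, 1.0, 2.0, 3.0, 4.0]]

w, b = 0.0, 0.0  # initial weight and bias
lr = 0.02        # learning rate

for epoch in range(2000):
    # Gradients of the mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w  # step against the gradient so the error shrinks
    b -= lr * grad_b

mse = sum((w * x + b - y) ** 2 for x, y in data) / len(data)
```

Each pass nudges the weight and bias in the direction that reduces the mean squared error, which is exactly the "continuous training" the paragraph describes.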

Neural networks are usually built in Python, which has ready-made deep learning libraries. For regression prediction, we only need to call a function and set a few parameters, such as the number of hidden layers and the number of neurons; the rest is simply waiting for the model to train itself and complete the regression prediction, which is very convenient.

How to use a neural network in MATLAB to make predictions

How Artificial Neural Networks Can Predict the Next Value

The newff function builds a BP neural network, with historical data used as samples: for example, the first n data points are used as the input, so there are n input nodes; the current data point is used as the target p, so there is 1 output node. The number of hidden-layer nodes is found by trial and error.

MATLAB's train function then yields the trained BP neural network. To predict, the n data points preceding the current prediction point are used as the input, and the output is the predicted value.
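The "first n points as input, next point as target" sample construction is independent of MATLAB and can be sketched in plain Python (the series values and window size here are invented):

```python
def make_samples(series, n):
    """Turn a time series into (input window, target) training pairs:
    each sample uses n consecutive values to predict the value that follows."""
    inputs, targets = [], []
    for i in range(len(series) - n):
        inputs.append(series[i:i + n])  # n historical values -> n input nodes
        targets.append(series[i + n])   # the next value -> 1 output node
    return inputs, targets

history = [10, 12, 13, 15, 18, 21, 25]
X, y = make_samples(history, n=3)
# X[0] == [10, 12, 13] with target y[0] == 15;
# the last window [15, 18, 21] has target 25.
```

At prediction time, the most recent n values form the input window, and the network's output is the forecast for the next point.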

Using an RBF neural network to make predictions

Type nntool at the command line and follow the prompts to submit the samples. A simpler alternative is the generalized RBF network (GRNN), which can be built directly with the grnn function; its basic form is y = grnn(P, T, spread). Use help grnn to see the details.
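The idea behind a GRNN is kernel-weighted averaging of the training targets. A pure-Python sketch mimicking the y = grnn(P, T, spread) interface for one-dimensional data (this is an illustration of the principle, not MATLAB's actual implementation):

```python
import math

def grnn_predict(P, T, spread, x):
    """Predict for input x: weight each training target T[i] by a Gaussian
    kernel of the distance between x and training input P[i], then average."""
    weights = [math.exp(-((x - p) ** 2) / (2 * spread ** 2)) for p in P]
    return sum(w * t for w, t in zip(weights, T)) / sum(weights)

P = [0.0, 1.0, 2.0, 3.0]  # training inputs
T = [0.0, 2.0, 4.0, 6.0]  # training targets (here following y = 2x)
y = grnn_predict(P, T, spread=0.3, x=1.5)  # prediction between two samples
```

Nearby training points dominate the weighted average, so the prediction for x = 1.5 lands between the targets of its two neighbors; the spread parameter controls how quickly the influence of distant samples decays.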

The prediction accuracy of a GRNN is good. In a generalized RBF network, the mapping from the input layer to the hidden layer is equivalent to mapping the data from a low-dimensional space into a high-dimensional space. The number of input-layer units equals the dimension of the samples, so the number of hidden-layer units must be greater than the number of input-layer units.

The mapping from the hidden layer to the output layer is a linear classification of the data in the high-dimensional space, so the learning rules commonly used for single-layer perceptrons can be applied; see Neural Network Basics and Perceptrons.

Note that the generalized RBF network only requires the number of hidden-layer neurons to be greater than the number of input-layer neurons; it does not require it to equal the number of input samples. In practice it is usually far smaller than the number of samples.

This is because in a standard RBF network, when the number of samples is large, many basis functions are needed; the weight matrix becomes large, the computation becomes complex, and ill-conditioning problems easily arise.

In addition, the generalized RBF network differs from the traditional RBF network in the following ways: 1. The centers of the radial basis functions are no longer restricted to the input data points but are determined by the training algorithm. 2. The spread (expansion) constants of the radial basis functions are no longer uniform but are determined by the training algorithm.

3. The linear transformation at the output includes a threshold parameter, which compensates for the difference between the mean of the basis functions over the sample set and the mean of the targets.

Therefore, the design of a generalized RBF network involves: 1. Structural design: choosing a suitable number of hidden-layer nodes. 2. Parameter design: the data center and expansion constant of each basis function, and the weights of the output nodes.
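These design elements can be sketched directly: each hidden node computes a Gaussian radial basis activation from its data center and expansion constant, and the output is a weighted sum of those activations plus the threshold. The centers, spreads, and weights below are arbitrary illustrative values:

```python
import math

def rbf_activation(x, center, spread):
    """Gaussian radial basis: the response decays with distance from the center."""
    dist_sq = sum((xi - ci) ** 2 for xi, ci in zip(x, center))
    return math.exp(-dist_sq / (2 * spread ** 2))

def rbf_output(x, centers, spreads, out_weights, threshold):
    """Output node: linear combination of hidden activations plus a threshold."""
    hidden = [rbf_activation(x, c, s) for c, s in zip(centers, spreads)]
    return sum(w * h for w, h in zip(out_weights, hidden)) + threshold

centers = [[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]]  # data centers, one per hidden node
spreads = [0.5, 0.8, 0.5]                       # per-node expansion constants
y = rbf_output([1.0, 1.0], centers, spreads,
               out_weights=[1.0, 2.0, -1.0], threshold=0.1)
```

An input sitting exactly on a center produces an activation of 1 for that node, and the contribution of each node fades with distance, which is what lets the hidden layer lift the data into a higher-dimensional space.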

Completing a prediction with a BP neural network

The following are several simulation experiments using different training functions:

1. Create a BP network with the learning function, training function, and performance function all at their default values (learngdm, trainlm, and mse respectively). After 200 training epochs, although the network's performance is not exactly 0, the output mean square error is already very small (MSE = 6.72804e-06), and the displayed results confirm that the fit of the nonlinear mapping between P and T is very accurate.

2. Create a BP network with learning function learngd, training function traingd, and performance function msereg to perform the same fitting task. After 200 training epochs, the network's output error is relatively large, and the error converges very slowly.

This is because traingd is a pure gradient-descent training function: training is relatively slow, and it easily falls into local minima. The results show that the network's accuracy is indeed relatively poor.

3. Change the training function to traingdx. This is also a gradient-descent training function, but its learning rate is variable during training. After 200 epochs, the network performance evaluated by the msereg function is 1.04725, which is not very large; the results show that the fit of the nonlinear relationship between P and T is good, and the network performs well.
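The contrast described above between a fixed and a variable learning rate can be illustrated on a toy one-dimensional problem. The adaptation rule below (grow the rate while the loss falls, shrink it when the loss rises) is a common heuristic chosen for illustration, not MATLAB's exact algorithm:

```python
def minimize(adaptive, steps=60):
    """Gradient descent on f(w) = w**2, starting from w = 10."""
    w, lr = 10.0, 0.02
    prev_loss = w ** 2
    for _ in range(steps):
        w -= lr * 2 * w  # the gradient of w**2 is 2w
        loss = w ** 2
        if adaptive:
            # Variable learning rate: speed up while progress continues,
            # back off whenever the loss increases.
            lr = lr * 1.05 if loss < prev_loss else lr * 0.7
        prev_loss = loss
    return prev_loss

fixed_loss = minimize(adaptive=False)   # small constant rate: slow convergence
adaptive_loss = minimize(adaptive=True) # growing rate: much faster convergence
```

With the same budget of steps, the variable-rate run drives the loss far lower than the fixed-rate run, which mirrors why the variable-rate training function converges faster than pure gradient descent in the experiments above.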

How to do multi-step and rolling forecasts with a MATLAB neural network

 


Origin blog.csdn.net/Supermen333/article/details/127486815