[Reading 1] [2017] MATLAB and Deep Learning: The ReLU Function (1)

The ReLU Function

This section introduces the ReLU function through an example.

The function DeepReLU trains the given deep neural network using the back-propagation algorithm.

It takes the network weights and the training data, and returns the trained weights.

[W1, W2, W3, W4] = DeepReLU(W1, W2, W3, W4, X, D)

where W1, W2, W3, and W4 are the weight matrices between the input layer and hidden layer 1, hidden layers 1 and 2, hidden layers 2 and 3, and hidden layer 3 and the output layer, respectively.

X and D are the input and correct-output matrices of the training data, respectively.
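For orientation, a minimal usage sketch might look like the following; the layer sizes, the five 5x5 input images, and the identity target matrix are assumptions chosen only to be consistent with the reshape to a 25x1 vector and the five samples processed inside DeepReLU, not values given in this excerpt.

% Hypothetical setup: a 25-20-20-20-5 network trained on five 5x5 images.
% All sizes here are illustrative assumptions.
W1 = 2*rand(20, 25) - 1;
W2 = 2*rand(20, 20) - 1;
W3 = 2*rand(20, 20) - 1;
W4 = 2*rand( 5, 20) - 1;

X = rand(5, 5, 5);       % placeholder training inputs: five 5x5 images
D = eye(5);              % placeholder correct outputs: one class per image

for epoch = 1:10000      % train repeatedly over the same data
    [W1, W2, W3, W4] = DeepReLU(W1, W2, W3, W4, X, D);
end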

The following listing shows the DeepReLU.m file, which implements the DeepReLU function.

function [W1, W2, W3, W4] = DeepReLU(W1, W2, W3, W4, X, D)
  alpha = 0.01;                        % learning rate

  N = 5;                               % number of training samples
  for k = 1:N
    x  = reshape(X(:, :, k), 25, 1);   % k-th 5x5 image as a 25x1 column vector

    % Forward pass: three ReLU hidden layers followed by a softmax output layer
    v1 = W1*x;
    y1 = ReLU(v1);
    v2 = W2*y1;
    y2 = ReLU(v2);
    v3 = W3*y2;
    y3 = ReLU(v3);
    v  = W4*y3;
    y  = Softmax(v);

    % Output error and back-propagated deltas; (v > 0) is the derivative of ReLU
    d     = D(k, :)';
    e     = d - y;
    delta = e;

    e3     = W4'*delta;
    delta3 = (v3 > 0).*e3;

    e2     = W3'*delta3;
    delta2 = (v2 > 0).*e2;

    e1     = W2'*delta2;
    delta1 = (v1 > 0).*e1;

    % Weight updates (delta rule)
    dW4 = alpha*delta*y3';
    W4  = W4 + dW4;

    dW3 = alpha*delta3*y2';
    W3  = W3 + dW3;

    dW2 = alpha*delta2*y1';
    W2  = W2 + dW2;

    dW1 = alpha*delta1*x';
    W1  = W1 + dW1;
  end
end

This code takes the training data, calculates the weight updates (dW1, dW2, dW3, and dW4) using the delta rule, and adjusts the weights of the neural network.

Up to this point, the process is identical to that of the previous training codes.

The only difference is that the hidden nodes employ the ReLU function in place of the sigmoid function.

Of course, using a different activation function also changes the derivative that appears in back-propagation.
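To make the change concrete, here is a small self-contained comparison; v3 and e3 are illustrative values standing in for the corresponding variables in DeepReLU, and the sigmoid case reflects the form used in the previous chapters:

% How the hidden-layer delta changes with the activation function.
v3 = [ 0.5; -1.2;  0.3];          % example pre-activation values
e3 = [ 0.1;  0.2; -0.4];          % example back-propagated errors

% Sigmoid hidden layer (previous chapters): derivative is y.*(1 - y)
y3_sig     = 1 ./ (1 + exp(-v3));
delta3_sig = y3_sig.*(1 - y3_sig).*e3;

% ReLU hidden layer (this chapter): derivative is 1 where v3 > 0 and 0 elsewhere
delta3_relu = (v3 > 0).*e3;       % = [0.1; 0; -0.4]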

Now, let's look at the ReLU function that DeepReLU calls.

The following listing shows the ReLU function, which is implemented in the ReLU.m file.

As this is just the definition of ReLU, further discussion is omitted.

function y = ReLU(x)
  y = max(0, x);
end
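A quick check shows the elementwise behavior; the input vector below is just an arbitrary example:

y = ReLU([-2; 0; 3.5]);   % max(0, x) is applied elementwise, so y = [0; 0; 3.5]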

Now consider the back-propagation portion of the code, which adjusts the weights using the back-propagation algorithm.

The following listing shows the extract of the delta calculation from the DeepReLU.m file.
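These are the corresponding lines from the DeepReLU.m listing above:

e      = d - y;
delta  = e;

e3     = W4'*delta;
delta3 = (v3 > 0).*e3;

e2     = W3'*delta3;
delta2 = (v2 > 0).*e2;

e1     = W2'*delta2;
delta1 = (v1 > 0).*e1;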

This article is translated from MATLAB Deep Learning by Phil Kim.



Reposted from blog.csdn.net/weixin_42825609/article/details/83986585