Why do we need activation functions?

Copyright notice: this is the blogger's original article; reproduction without permission is prohibited. https://blog.csdn.net/Solo95/article/details/84450070

Compiled from Andrew Ng's Deep Learning course: https://mooc.study.163.com/learn/2001281002?tid=2001392029#/learn/content?type=detail&id=2001702018&cid=2001694026

Why do we need activation functions?

In deep networks with many hidden layers, it turns out that if you use a linear activation function, or equivalently if you don't have an activation function at all, then no matter how many layers your network has, all it ever computes is a linear function of the input, so you might as well not have any hidden layers.

If you use a linear activation in the hidden layer and a sigmoid activation in the output layer, then the model is no more expressive than standard logistic regression without any hidden layer.
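As a quick numerical check (my own minimal NumPy sketch with made-up weight shapes, not code from the course), a linear hidden layer followed by a sigmoid output can be rewritten as a single logistic-regression layer with merged weights:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
x = rng.normal(size=(3, 1))          # input with 3 features (arbitrary choice)

# Hidden layer with a *linear* activation, then a sigmoid output layer
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=(4, 1))
W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=(1, 1))
y_two_layer = sigmoid(W2 @ (W1 @ x + b1) + b2)

# Equivalent single logistic-regression layer with merged parameters
W, b = W2 @ W1, W2 @ b1 + b2
y_one_layer = sigmoid(W @ x + b)

print(np.allclose(y_two_layer, y_one_layer))  # True: the linear hidden layer added nothing
```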

The take-home message is that a linear hidden layer is more or less useless, because the composition of two linear functions is itself a linear function. So unless you throw a non-linearity in there, you are not computing more interesting functions even as you go deeper in the network.
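To spell out that composition step (standard notation, not copied verbatim from the course): with a linear hidden layer, the two affine maps collapse into one,

$$
a^{[1]} = W^{[1]} x + b^{[1]}, \qquad
\hat{y} = W^{[2]} a^{[1]} + b^{[2]}
        = \underbrace{W^{[2]} W^{[1]}}_{W'}\, x \;+\; \underbrace{W^{[2]} b^{[1]} + b^{[2]}}_{b'},
$$

which is again just a linear (affine) function of $x$, no matter how many such layers you stack.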

I, the blogger of this post, think that if you use linear activation functions, your deep neural network will not learn the higher-level features we expect.

The hidden units should not use the linear activation function; they could use ReLU, tanh, leaky ReLU, or maybe something else. So the one place you might use a linear activation function is usually the output layer, for example when predicting a real-valued quantity in regression. Other than that, using a linear activation function in a hidden layer is extremely rare, except for some very special circumstances relating to compression.
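As a concrete sketch (my own NumPy example with assumed layer sizes, not code from the course), here is a forward pass that keeps the non-linearities in the hidden layers and uses a linear activation only at the output, as you might for a regression target:

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

rng = np.random.default_rng(1)
x = rng.normal(size=(5, 1))                     # 5 input features (arbitrary choice)

# Hidden layers use a non-linear activation (ReLU here; tanh or leaky ReLU also work)
W1, b1 = rng.normal(size=(8, 5)) * 0.1, np.zeros((8, 1))
W2, b2 = rng.normal(size=(8, 8)) * 0.1, np.zeros((8, 1))
a1 = relu(W1 @ x + b1)
a2 = relu(W2 @ a1 + b2)

# Output layer: linear activation, suitable for a real-valued regression target
W3, b3 = rng.normal(size=(1, 8)) * 0.1, np.zeros((1, 1))
y_hat = W3 @ a2 + b3                            # no non-linearity applied here

print(y_hat.shape)  # (1, 1)
```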
