[Deep Learning] Is deep learning based on backpropagation?

Not all of it is based on backpropagation. Generally speaking, there are two ways to obtain the gradients used to train the weights: the first is numerical differentiation (e.g. finite differences fed into a gradient descent update), which is simple to implement but very slow; the second is backpropagation, which is efficient and fast to compute, but more complicated to implement.
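Below is a minimal sketch contrasting the two approaches on a toy least-squares loss; the data, the `numerical_gradient` helper, and the hand-derived analytic gradient are illustrative assumptions, not code from the original post.

```python
import numpy as np

def numerical_gradient(f, w, eps=1e-4):
    """Numerical differentiation: central finite differences, two forward passes per weight."""
    grad = np.zeros_like(w)
    for i in range(w.size):
        orig = w.flat[i]
        w.flat[i] = orig + eps
        f_plus = f(w)
        w.flat[i] = orig - eps
        f_minus = f(w)
        w.flat[i] = orig          # restore the weight
        grad.flat[i] = (f_plus - f_minus) / (2 * eps)
    return grad

# Toy model: loss(w) = mean((x @ w - y)^2) for fixed data x, y
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 3))
y = rng.normal(size=(8,))
loss = lambda w: np.mean((x @ w - y) ** 2)
w = rng.normal(size=3)

# Backpropagation-style analytic gradient of the same loss (derived by hand here)
analytic_grad = 2 * x.T @ (x @ w - y) / len(y)

print(numerical_gradient(loss, w))  # slow: 2 * len(w) forward passes
print(analytic_grad)                # fast: one backward pass, matches to ~1e-8
```

Both give the same gradient, which is why numerical differentiation is mostly used to check a backpropagation implementation rather than to train large networks.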

For example, evaluating a model on the validation set or test set involves no training, so there is no backpropagation at all, only forward passes.
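A minimal sketch of such an evaluation loop, assuming PyTorch and hypothetical `model`, `loader`, and `loss_fn` objects:

```python
import torch

def evaluate(model, loader, loss_fn):
    """Validation/test evaluation: forward passes only, no backpropagation."""
    model.eval()                      # switch dropout/batchnorm to inference behavior
    total_loss, n = 0.0, 0
    with torch.no_grad():             # disable autograd, so no gradients are computed
        for inputs, targets in loader:
            outputs = model(inputs)   # forward pass only
            total_loss += loss_fn(outputs, targets).item() * len(targets)
            n += len(targets)
    return total_loss / n
```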

Origin blog.csdn.net/weixin_40293999/article/details/129750162