Tips for the deep learning framework Lasagne (2): updating some parameters and freezing others

If you have experimented a lot with neural networks, and especially if you design your own architectures rather than copying existing ones, you have probably run into the following requirement when building networks for real applications: some parameters should not be updated.

Typical cases include pre-trained word vectors, pre-trained network modules whose parameters should stay fixed, and lower layers where credit assignment tends to be unreliable.

How can we treat parameters differently in Lasagne?

There are two ways to do this (both are sketched in the code below):

1. Tag parameters when they are registered with the network, and filter on those tags when collecting the parameters to update.
2. Edit the parameter list directly, so that the excluded parameters never take part in the update step.
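As an illustration, here is a minimal sketch (not from the original post) of both approaches, assuming a small classifier in which the hidden layer `l_hid` should stay frozen while the output layer is trained:

```python
import theano
import theano.tensor as T
import lasagne

input_var = T.matrix('inputs')
target_var = T.ivector('targets')

l_in = lasagne.layers.InputLayer(shape=(None, 784), input_var=input_var)
l_hid = lasagne.layers.DenseLayer(l_in, num_units=256)   # layer we want to freeze
l_out = lasagne.layers.DenseLayer(l_hid, num_units=10,
                                  nonlinearity=lasagne.nonlinearities.softmax)

prediction = lasagne.layers.get_output(l_out)
loss = lasagne.objectives.categorical_crossentropy(prediction, target_var).mean()

# Method 1: remove the 'trainable' tag from the frozen layer's parameters,
# then collect only the parameters that are still tagged as trainable.
for param in l_hid.params:
    l_hid.params[param].discard('trainable')
params = lasagne.layers.get_all_params(l_out, trainable=True)

# Method 2 (alternative): collect the full parameter list and filter it by
# hand, excluding the frozen layer's W and b before building the updates.
# params = [p for p in lasagne.layers.get_all_params(l_out, trainable=True)
#           if p not in (l_hid.W, l_hid.b)]

updates = lasagne.updates.nesterov_momentum(loss, params,
                                            learning_rate=0.01, momentum=0.9)
train_fn = theano.function([input_var, target_var], loss, updates=updates)
```

The tag-based route (method 1) scales better, since `get_all_params(..., trainable=True)` walks the whole layer graph for you; editing the list by hand (method 2) gives you explicit, fine-grained control over exactly which shared variables enter the update rule.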
