On building neural networks

There are several pitfalls:
First of all, the loss function is really, really important! As the network architecture changes, the loss function may need to change with it. For example, I took the MNIST example from the Morvan Python (莫烦Python) tutorial, added one extra layer, and moved it over directly, and the accuracy plummeted. I couldn't find the cause; after searching online and changing a small bit of code, I found that just swapping the loss function basically brought the accuracy back, with hardly any extra training. In the end I used cross-entropy loss, the same as Morvan Python.
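To make the two choices concrete, here is a minimal sketch in the TF 1.x style that the tutorial uses, with a single softmax output layer; the variable names and the learning rate are my own placeholders, not the original post's code.

```python
import tensorflow as tf

# MNIST inputs: 784-pixel images and one-hot labels over 10 classes
xs = tf.placeholder(tf.float32, [None, 784])
ys = tf.placeholder(tf.float32, [None, 10])

# A single softmax output layer, as in the tutorial's MNIST example
W = tf.Variable(tf.zeros([784, 10]))
b = tf.Variable(tf.zeros([10]))
prediction = tf.nn.softmax(tf.matmul(xs, W) + b)

# Quadratic (MSE) loss: the kind of choice that can make accuracy stall
# loss = tf.reduce_mean(tf.reduce_sum(tf.square(ys - prediction), axis=1))

# Cross-entropy loss: the swap that made training work again
cross_entropy = tf.reduce_mean(
    -tf.reduce_sum(ys * tf.log(prediction), reduction_indices=[1]))
train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)
```

One common explanation is that the quadratic loss produces tiny gradients when the softmax output saturates, which matches the "as if untrained" symptom above.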
The second thing is that training in batches really matters. My own network is basically copied straight from the Morvan template, but the difference is that I read the data with the standalone mnist package instead of TensorFlow's own loader, so I got an error when loading batches (I don't know why: it clearly returns two values, and I used two variables to receive the result, but it still errored). So I just loaded all the training data at once. I changed the online template code the same way, and sure enough the accuracy plummeted; after training it was almost as if nothing had been trained. Intuitively this makes sense too: if you take in everything in one go, you don't get enough feedback, and you learn almost nothing. Practicing a small amount over and over is how you get the most out of the feedback.
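Continuing the sketch above (it reuses xs, ys, and train_step), this is roughly what the mini-batch loop looks like with TensorFlow's bundled loader, the one the tutorial itself uses; the batch size of 100 and the 1000 steps are assumed placeholder values.

```python
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# TensorFlow's bundled MNIST loader; next_batch really does
# return two values, a batch of images and a batch of labels
mnist = input_data.read_data_sets('MNIST_data', one_hot=True)

sess = tf.Session()
sess.run(tf.global_variables_initializer())

# Mini-batch training: feed 100 examples per step instead of the whole set
for step in range(1000):
    batch_xs, batch_ys = mnist.train.next_batch(100)
    sess.run(train_step, feed_dict={xs: batch_xs, ys: batch_ys})
```

Replacing the next_batch call with the full training set in one feed_dict is the "load everything at once" variant that barely learned anything.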
To be continued. Anyway, the current stage is: even copied code comes with bugs, and then it's days of debugging (shrug).
By the way, Python is really hard......

Source: blog.csdn.net/weixin_44288817/article/details/89044549