
1. What does torch.optim.Adam(parameters, lr) mean?

Here, parameters refers to the learnable parameters of the neural network (the weight matrices and biases, typically obtained from model.parameters()), and lr is the learning rate.
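A minimal usage sketch (the model and the lr value here are placeholder assumptions, not from the post):

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)  # any nn.Module works here

# model.parameters() yields the learnable tensors (weights and biases);
# lr=1e-3 is a common starting choice, adjust for your task.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Typical training step:
#   optimizer.zero_grad(); loss.backward(); optimizer.step()
```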

2. Commonly used PyTorch loss functions

nn.L1Loss()

nn.MSELoss()

nn.BCELoss()

nn.CrossEntropyLoss()

nn.CrossEntropyLoss() is used for multi-class classification. It applies log-softmax internally, so there is no need to add a softmax layer before it.

nn.BCELoss() is the cross-entropy loss for binary classification. It expects probabilities, so you need to apply a Sigmoid to the output of the preceding layer before using it.
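A minimal sketch of both cases (the tensor shapes and values are made up for illustration):

```python
import torch
import torch.nn as nn

# Multi-class: CrossEntropyLoss takes raw logits (no softmax) and
# integer class indices as targets.
logits = torch.randn(4, 3)            # batch of 4, 3 classes
targets = torch.tensor([0, 2, 1, 2])  # class labels
ce = nn.CrossEntropyLoss()
print(ce(logits, targets))

# Binary: BCELoss expects probabilities, so apply sigmoid first
# (nn.BCEWithLogitsLoss fuses the sigmoid in and is more numerically stable).
raw = torch.randn(4, 1)
probs = torch.sigmoid(raw)
labels = torch.tensor([[1.], [0.], [1.], [0.]])
bce = nn.BCELoss()
print(bce(probs, labels))
```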

Reference on commonly used PyTorch loss functions: https://blog.csdn.net/f156207495/article/details/88658009

3. A very frustrating mistake: the samples and labels were in their original sorted order and never shuffled. As a result, the training set and the validation set ended up with no overlapping labels, so none of the validation labels could be predicted and the accuracy was 0. I have only myself to blame. So when you split a dataset, don't forget to shuffle.
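A minimal sketch of a shuffled split using torch.utils.data.random_split (the dataset here is synthetic, constructed so the labels are sorted, which is exactly the failure case above):

```python
import torch
from torch.utils.data import TensorDataset, random_split, DataLoader

# Hypothetical dataset: 100 samples whose labels happen to be sorted.
X = torch.randn(100, 10)
y = torch.arange(100) // 50  # first half label 0, second half label 1

dataset = TensorDataset(X, y)

# random_split draws a random permutation, so both splits see both labels;
# an ordered split (first 80 / last 20) would give them disjoint label sets.
train_set, val_set = random_split(dataset, [80, 20])

train_loader = DataLoader(train_set, batch_size=16, shuffle=True)
val_loader = DataLoader(val_set, batch_size=16)
```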

4. Binary cross-entropy as a loss function measures how well the predicted probabilities match the labels. Minimizing the loss pushes the predictions in the right direction:

  • For samples with y=1, the predicted probability p(y) is driven up toward 1
  • For samples with y=0, the predicted probability p(y) is driven down toward 0
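A small sketch making this concrete with nn.BCELoss (the probability values are arbitrary illustrations). Since the loss is -[y*log(p) + (1-y)*log(1-p)], it shrinks as p moves toward the label:

```python
import torch
import torch.nn as nn

bce = nn.BCELoss()

# For a y=1 sample, the loss -log(p) shrinks as the predicted p grows.
target_one = torch.tensor([1.0])
for p in [0.1, 0.5, 0.9, 0.99]:
    print(p, bce(torch.tensor([p]), target_one).item())

# For a y=0 sample, the loss -log(1-p) shrinks as p falls toward 0.
target_zero = torch.tensor([0.0])
for p in [0.9, 0.5, 0.1, 0.01]:
    print(p, bce(torch.tensor([p]), target_zero).item())
```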


Origin: blog.csdn.net/qq_39696563/article/details/124358541