
Nichimoesha

Artificial Intelligence (AI): practical deep learning with Keras, PyTorch, MXNet, TensorFlow, and PaddlePaddle (updated irregularly)


1 Loss function

The total loss is defined as:

J(w) = (h(x₁) − y₁)² + (h(x₂) − y₂)² + … + (h(xₘ) − yₘ)² = Σᵢ₌₁ᵐ (h(xᵢ) − yᵢ)²

  • yᵢ is the true value of the i-th training sample
  • h(xᵢ) is the predicted value computed from the features of the i-th training sample
  • This loss is also known as the least squares criterion
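The total loss above is just a sum of squared residuals. A minimal sketch with NumPy, using hypothetical toy values for yᵢ and h(xᵢ):

```python
import numpy as np

# Hypothetical tiny dataset: y holds the true values y_i,
# h_x holds the model's predictions h(x_i)
y = np.array([3.0, 5.0, 7.0])
h_x = np.array([2.5, 5.5, 6.0])

# Total least-squares loss: sum of squared residuals (note: no division by m)
total_loss = np.sum((h_x - y) ** 2)
print(total_loss)  # 0.25 + 0.25 + 1.0 = 1.5
```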

How do we reduce this loss so that predictions become more accurate? It is precisely because this loss exists that linear regression illustrates what we mean when we say machine learning "automatically learns" a function. We can apply optimization methods (mathematically, by taking derivatives of the loss function) to minimize the total loss.

2 Optimization algorithms

How do we find the model weights W that minimize the loss? (The goal is to find the value of W at which the loss is minimal.)

  • Two optimization algorithms are commonly used for linear regression:
    • The normal equation
    • Gradient descent
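Both approaches can be sketched with NumPy. The synthetic data, learning rate, and iteration count below are illustrative assumptions, not prescribed values:

```python
import numpy as np

# Synthetic data: y = 3x + 2 plus a little noise (illustrative values)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1))
y = 3.0 * X[:, 0] + 2.0 + rng.normal(scale=0.1, size=100)

# Design matrix with a bias column of ones
Xb = np.hstack([np.ones((100, 1)), X])

# 1) Normal equation: solve (X^T X) W = X^T y in closed form
w_normal = np.linalg.solve(Xb.T @ Xb, Xb.T @ y)

# 2) Gradient descent: repeatedly step against the gradient of the MSE loss
w = np.zeros(2)
lr = 0.1
for _ in range(1000):
    grad = (2.0 / len(y)) * Xb.T @ (Xb @ w - y)
    w -= lr * grad

print(w_normal)  # approximately [2, 3]: the bias and the slope
print(w)         # gradient descent converges to nearly the same W
```

The normal equation gives an exact answer in one step but requires solving a linear system; gradient descent only needs the gradient, which is why it scales to models where no closed form exists.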


3 Regression performance evaluation

Mean squared error (MSE) evaluation:

MSE = (1/m) Σᵢ₌₁ᵐ (yᵢ − ŷᵢ)²

Reflection: what is the difference between MSE and the least squares method?

  • sklearn.metrics.mean_squared_error(y_true, y_pred)
    • Returns the mean squared error loss
    • y_true: the true values
    • y_pred: the predicted values
    • return: a floating-point result
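A short usage example of this sklearn function, with hypothetical toy values:

```python
from sklearn.metrics import mean_squared_error

# Hypothetical true and predicted values for illustration
y_true = [3.0, 5.0, 7.0]
y_pred = [2.5, 5.5, 6.0]

mse = mean_squared_error(y_true, y_pred)
print(mse)  # (0.25 + 0.25 + 1.0) / 3 = 0.5
```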

The only difference between least squares as a loss function and mean squared error (MSE):

Least squares as a loss function: does not divide by the number of samples m
Mean squared error (MSE): divides by the number of samples m
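In code, the relationship is simply a division by m (the values below are assumed toy numbers):

```python
import numpy as np

# Hypothetical true and predicted values
y_true = np.array([3.0, 5.0, 7.0])
y_pred = np.array([2.5, 5.5, 6.0])
m = len(y_true)

least_squares_loss = np.sum((y_pred - y_true) ** 2)  # no division by m
mse = least_squares_loss / m                          # divides by m

print(least_squares_loss, mse)  # 1.5 0.5
```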



Origin blog.csdn.net/zimiao552147572/article/details/104454450