[Math learning and code] The least squares method - a linear equation solver

     Recently I have been reading a linear algebra book, like the one in the picture. What annoys me most is the publisher's typesetting: basic definitions and theorems are not bolded, and everything is crammed together, so at one point I wanted to give up on the book.

 

  But the focus is not here, hahaha.

After a few days of reading those chapters on linear algebra, I have a rough understanding of it. I'm still pretty bad at it, but I want to tinker with it anyway. I had learned the least squares method before, but only second-order least squares, and I had already written code for that. Now that I've learned some linear algebra, I can no longer hold back my prehistoric power. . . (off to the toilet)

  The n-th order polynomial is as follows:

        f(x) = a_0 + a_1·x + a_2·x^2 + ... + a_n·x^n

         Then we compute the coefficients according to least squares. First write the loss function: with a sample size of k and an n-th order function (n + 1 coefficients in total) to fit, the loss for a single sample is L(x_i):

        L(x_i) = (y_i - f(x_i))^2
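A minimal C sketch of these two definitions (the names eval_poly and sample_loss are my own, not taken from the code in the download link):

```c
/* f(x) = a[0] + a[1]*x + a[2]*x^2 + ... + a[n]*x^n, evaluated with
 * Horner's scheme. `a` holds the n + 1 coefficients.
 * Names are illustrative, not from the original download. */
double eval_poly(const double *a, int n, double x)
{
    double y = a[n];
    for (int j = n - 1; j >= 0; j--)
        y = y * x + a[j];
    return y;
}

/* Loss of a single sample: L(x_i) = (y_i - f(x_i))^2 */
double sample_loss(const double *a, int n, double xi, double yi)
{
    double d = yi - eval_poly(a, n, xi);
    return d * d;
}
```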

  Explanation: take the sample value y_i, subtract the theoretical value f(x_i), and square the difference. This gives the squared gap between each sample and the theoretical model; our aim is to make every one of these gaps as small as possible, so that the theoretical function stays as close to the samples as possible.

  Now carry out this process for all samples and sum all of the L(x_i) to obtain the total loss function L:

        L = Σ_{i=1..k} (y_i - f(x_i))^2
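A corresponding sketch of the summed loss, reusing eval_poly from the previous snippet (total_loss is again my own name):

```c
/* Total loss over k samples: L = sum over i of (y_i - f(x_i))^2.
 * Reuses eval_poly() from the sketch above; the name is illustrative. */
double total_loss(const double *a, int n,
                  const double *x, const double *y, int k)
{
    double L = 0.0;
    for (int i = 0; i < k; i++) {
        double d = y[i] - eval_poly(a, n, x[i]);
        L += d * d;
    }
    return L;
}
```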

  Next, the objective: we want L to be as small as possible. At a minimum of a function the derivative should be 0, but L is a function of several variables, so we need to take partial derivatives.

Here we first separate the unknown quantities from the known ones: y_i and x_i are known quantities, while a_0, a_1, a_2, ..., a_n are the unknowns.

  So let's start taking the partial derivatives (mind the calculus basics: the partial derivative of a composite function):

        ∂L/∂a_0 = -2 · Σ_{i=1..k} (y_i - f(x_i)) = 0
        ∂L/∂a_1 = -2 · Σ_{i=1..k} (y_i - f(x_i)) · x_i = 0
        ∂L/∂a_2 = -2 · Σ_{i=1..k} (y_i - f(x_i)) · x_i^2 = 0
        ...

         Yes, I believe you can already see the pattern.

  Then the general formula is:

        ∂L/∂a_j = -2 · Σ_{i=1..k} (y_i - f(x_i)) · x_i^j = 0,    j = 0, 1, ..., n
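To make the general formula concrete, here is a sketch of one gradient component in C (reusing eval_poly; the name dL_daj is mine). Setting this expression to 0 for each j yields exactly the equations discussed next:

```c
/* Partial derivative of the total loss with respect to coefficient a_j:
 * dL/da_j = -2 * sum over i of (y_i - f(x_i)) * x_i^j
 * Reuses eval_poly(); the name dL_daj is illustrative. */
double dL_daj(const double *a, int n,
              const double *x, const double *y, int k, int j)
{
    double g = 0.0;
    for (int i = 0; i < k; i++) {
        double xij = 1.0;               /* x_i^j */
        for (int p = 0; p < j; p++)
            xij *= x[i];
        g += (y[i] - eval_poly(a, n, x[i])) * xij;
    }
    return -2.0 * g;
}
```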

  This gives a total of n + 1 equations (note that j starts from 0). We rearrange them to comply with the form AX = B, where A, X and B are matrices:

        a_0 · Σ_i x_i^j + a_1 · Σ_i x_i^(j+1) + ... + a_n · Σ_i x_i^(j+n) = Σ_i x_i^j · y_i

  We may define:

        S_j = Σ_{i=1..k} x_i^j        T_j = Σ_{i=1..k} x_i^j · y_i

  Then the corresponding matrix equation is:

        [ S_0    S_1    ...  S_n    ]   [ a_0 ]   [ T_0 ]
        [ S_1    S_2    ...  S_n+1  ] · [ a_1 ] = [ T_1 ]
        [ ...    ...    ...  ...    ]   [ ... ]   [ ... ]
        [ S_n    S_n+1  ...  S_2n   ]   [ a_n ]   [ T_n ]
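One possible way to fill this matrix and right-hand side from the samples, sketched in C (the function name and the row-major layout are my own choices, not necessarily how the posted code does it):

```c
/* Build the normal equations A * coef = B from k samples.
 * A is (n+1) x (n+1), stored row-major: A[r][c] = S_(r+c) = sum_i x_i^(r+c),
 * and B[r] = T_r = sum_i x_i^r * y_i. Illustrative sketch, not the posted code. */
void build_normal_equations(const double *x, const double *y, int k, int n,
                            double *A, double *B)
{
    int dim = n + 1;
    for (int r = 0; r < dim; r++) {
        B[r] = 0.0;
        for (int c = 0; c < dim; c++)
            A[r * dim + c] = 0.0;
    }
    for (int i = 0; i < k; i++) {
        double pr = 1.0;                     /* x_i^r */
        for (int r = 0; r < dim; r++) {
            double prc = pr;                 /* x_i^(r+c) */
            for (int c = 0; c < dim; c++) {
                A[r * dim + c] += prc;
                prc *= x[i];
            }
            B[r] += pr * y[i];
            pr *= x[i];
        }
    }
}
```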

  So the next step is to solve this system of linear equations!

         To solve AX = B, first transform the coefficient matrix into a lower triangular or an upper triangular matrix. If we make it upper triangular, for instance, the last row contains only one unknown, A'_(n+1, n+1) · a_n = B'_(n+1), which can be solved directly; the unknowns before it are then recovered one by one by back substitution.

  Here the processing is done with Gaussian elimination, but note that when Gaussian elimination uses one row to eliminate the elements of another row, the diagonal (pivot) element of that row must not be 0. Of course, you can also use other methods such as LU decomposition or the Jacobi method.
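For reference, a generic sketch of Gaussian elimination with partial pivoting (the largest available pivot is swapped onto the diagonal so that we never divide by 0) followed by back substitution; this is my own version, not the code from the download link:

```c
#include <math.h>

/* Solve A * sol = B by Gaussian elimination with partial pivoting and
 * back substitution. A is dim x dim, row-major; A and B are modified in
 * place. Returns 0 on success, -1 if the matrix is (nearly) singular.
 * Generic sketch, not the posted code. */
int gauss_solve(double *A, double *B, double *sol, int dim)
{
    for (int col = 0; col < dim; col++) {
        /* Choose the row with the largest pivot in this column. */
        int piv = col;
        for (int r = col + 1; r < dim; r++)
            if (fabs(A[r * dim + col]) > fabs(A[piv * dim + col]))
                piv = r;
        if (fabs(A[piv * dim + col]) < 1e-12)
            return -1;
        if (piv != col) {                    /* swap rows piv and col */
            for (int c = 0; c < dim; c++) {
                double t = A[col * dim + c];
                A[col * dim + c] = A[piv * dim + c];
                A[piv * dim + c] = t;
            }
            double t = B[col]; B[col] = B[piv]; B[piv] = t;
        }
        /* Eliminate the entries below the pivot. */
        for (int r = col + 1; r < dim; r++) {
            double f = A[r * dim + col] / A[col * dim + col];
            for (int c = col; c < dim; c++)
                A[r * dim + c] -= f * A[col * dim + c];
            B[r] -= f * B[col];
        }
    }
    /* Back substitution: start from the last row and work upward. */
    for (int r = dim - 1; r >= 0; r--) {
        double s = B[r];
        for (int c = r + 1; c < dim; c++)
            s -= A[r * dim + c] * sol[c];
        sol[r] = s / A[r * dim + r];
    }
    return 0;
}
```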

  Finally, I wrote the code myself in C and have put it at the end. We take a set of data to verify the program:
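The original test data and output are only in the download link, so here is a made-up end-to-end check built on the sketches above, with samples taken exactly from y = 1 + 2x + 3x^2:

```c
#include <stdio.h>

/* Assumes build_normal_equations() and gauss_solve() from the sketches
 * above are in the same file. The data below is illustrative, not the
 * data set used in the original post. */
int main(void)
{
    double x[] = {0.0, 1.0, 2.0, 3.0, 4.0};
    double y[] = {1.0, 6.0, 17.0, 34.0, 57.0};   /* y = 1 + 2x + 3x^2 */
    int k = 5, n = 2, dim = n + 1;

    double A[3 * 3], B[3], a[3];
    build_normal_equations(x, y, k, n, A, B);
    if (gauss_solve(A, B, a, dim) != 0) {
        printf("singular system\n");
        return 1;
    }
    printf("fitted coefficients: a0=%g a1=%g a2=%g\n", a[0], a[1], a[2]);
    /* Expected, up to rounding: a0=1 a1=2 a2=3 */
    return 0;
}
```

Compiling all the sketches together in one file, the program should recover coefficients close to 1, 2 and 3.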

 Download link: https://files.cnblogs.com/files/inkhearts/multi_least_square_method.rar

