Hand-coding a batch linear regression algorithm: model training with gradient descent in Python3

Author: Tarun Gupta
Translation: Meng Xiangjie (deephub translation group)

In this article, we will write a Python3 program that uses NumPy as its data-processing library to learn how to implement (batch) linear regression with the gradient descent method.

I will explain step by step how the algorithm works and what each part of the code does.

We will use this formula to calculate the gradient.

Here, x(i) is the vector of a single data point and N is the size of the data set. η (eta) is our learning rate. y(i) is the target output. f(x) is the linear regression function, defined as f(x) = Σ(w * x), where Σ is the sum over the features. Further, we treat the bias term w0 like any other weight by fixing x0 = 1. All weights are initialized to 0.
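The formula image from the original post is not reproduced here. Based on the description above and the calculateGradient function shown later, the batch update rule it describes can be written as:

w := w + η * Σ_{i=1}^{N} (y(i) - f(x(i))) * x(i)

That is, each weight is moved by the learning rate times the prediction error summed over every point in the batch.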

In this method, we use the sum of squared errors (SSE) as the loss function.
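The SSE formula image is likewise not reproduced here; consistent with the calculateSSE function shown later, it is simply:

SSE = Σ_{i=1}^{N} (f(x(i)) - y(i))^2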

Apart from computing the initial SSE, we record the SSE in every iteration and compare its change against a threshold value supplied before the program runs. If the change in SSE falls below the threshold, the program exits.

In this program, we provide three inputs from the command line. They are:

threshold - the stopping criterion; the change in the loss must fall below this value before the algorithm terminates.

data - the location of the data set file.

learningRate - the learning rate of the gradient descent method.

Therefore, the start of the program should look like this:

python3 linearregr.py --data random.csv --learningRate 0.0001 --threshold 0.0001
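The data set itself is not included with the article. The sample run shown later learns three weights (a bias plus two features), so a matching random.csv would have two feature columns and the target value in the last column. The rows below are made up purely for illustration:

1.5,2.3,0.4
0.2,4.1,1.7
3.3,0.8,2.6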

Before diving into the code, the last thing to note is the program's output, which will be in the following format:

iteration_number,weight0,weight1,weight2,...,weightN,sum_of_squared_errors

The program consists of six parts, which we will go through one by one.

Import module

import argparse  # to read inputs from command line
import csv       # to read the input data set file
import numpy as np  # to work with the data set

Initialization part

# initialise argument parser and read arguments from command line with the
# respective flags and then call the main() function
if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument("-d", "--data", help="Data File")
    parser.add_argument("-l", "--learningRate", help="Learning Rate")    
    parser.add_argument("-t", "--threshold", help="Threshold")    
    main()

main function part

def main():
    args = parser.parse_args()
    # save respective command line inputs into variables
    file, learningRate, threshold = args.data, float(
        args.learningRate), float(args.threshold)

    # read csv file; the last column is the target output and is separated from the input (X) as Y
    with open(file) as csvFile:
        reader = csv.reader(csvFile, delimiter=',')
        X = []
        Y = []
        for row in reader:
            X.append([1.0] + row[:-1])
            Y.append([row[-1]])

    # Convert data points into float and initialise weight vector with 0s.
    n = len(X)
    X = np.array(X).astype(float)
    Y = np.array(Y).astype(float)
    W = np.zeros(X.shape[1]).astype(float)
    # this matrix is transposed to match the necessary matrix dimensions for calculating dot product
    W = W.reshape(X.shape[1], 1).round(4)

    # Calculate the predicted output value
    f_x = calculatePredicatedValue(X, W)

    # Calculate the initial SSE
    sse_old = calculateSSE(Y, f_x)

    outputFile = 'solution_' + \
                 'learningRate_' + str(learningRate) + '_threshold_' \
                 + str(threshold) + '.csv'

    '''
        Output file is opened in writing mode and the data is written in the format mentioned in the post. After the
        first values are written, the gradient and updated weights are calculated using the calculateGradient function.
        An iteration variable is maintained to keep track on the number of times the batch linear regression is executed
        before it falls below the threshold value. In the infinite while loop, the predicted output value is calculated 
        again and new SSE value is calculated. If the absolute difference between the older(SSE from previous iteration) 
        and newer(SSE from current iteration) SSE is greater than the threshold value, then above process is repeated.
        The iteration is incremented by 1 and the current SSE is stored into previous SSE. If the absolute difference 
        between the older(SSE from previous iteration) and newer(SSE from current iteration) SSE falls below the 
        threshold value, the loop breaks and the last output values are written to the file.
    '''
    with open(outputFile, 'w', newline='') as csvFile:
        writer = csv.writer(csvFile, delimiter=',', quoting=csv.QUOTE_NONE, escapechar='')
        writer.writerow([*[0], *["{0:.4f}".format(val) for val in W.T[0]], *["{0:.4f}".format(sse_old)]])

        gradient, W = calculateGradient(W, X, Y, f_x, learningRate)

        iteration = 1
        while True:
            f_x = calculatePredicatedValue(X, W)
            sse_new = calculateSSE(Y, f_x)

            if abs(sse_new - sse_old) > threshold:
                writer.writerow([*[iteration], *["{0:.4f}".format(val) for val in W.T[0]], *["{0:.4f}".format(sse_new)]])
                gradient, W = calculateGradient(W, X, Y, f_x, learningRate)
                iteration += 1
                sse_old = sse_new
            else:
                break
        writer.writerow([*[iteration], *["{0:.4f}".format(val) for val in W.T[0]], *["{0:.4f}".format(sse_new)]])
    print("Output File Name: " + outputFile

The flow of the main function is as follows:

  1. Save the respective command-line inputs into variables.
  2. Read the CSV file; the last column is the target output, stored as Y, and is separated from the input, which is stored as X.
  3. Convert the data points to float and initialise the weight vector with 0s.
  4. Calculate the predicted output value using the calculatePredicatedValue function.
  5. Calculate the initial SSE using the calculateSSE function.
  6. The output file is opened in write mode and the data is written in the format mentioned above. After the first values are written, the gradient and the updated weights are calculated using the calculateGradient function. An iteration variable keeps track of how many times batch linear regression runs before the change in the loss falls below the threshold. In the infinite while loop, the predicted output value is calculated again and a new SSE value is computed. If the absolute difference between the older SSE (from the previous iteration) and the newer SSE (from the current iteration) is greater than the threshold, the process repeats: the iteration count is incremented by 1 and the current SSE replaces the previous SSE. If the absolute difference falls below the threshold, the loop breaks and the final output values are written to the file.

calculatePredicatedValue function

Here, the predicted output is calculated by taking the dot product of the input matrix X and the weight matrix W.

# dot product of X(input) and W(weights) as numpy matrices and returning
# the result which is the predicted output
def calculatePredicatedValue(X, W):
    f_x = np.dot(X, W)
    return f_x
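As a small illustration with hypothetical values (not from the article's data set): with a bias column plus two features, X has shape (N, 3) and W has shape (3, 1), so np.dot(X, W) yields one prediction per row.

X = np.array([[1.0, 1.5, 2.3],
              [1.0, 0.2, 4.1]])    # leading 1.0 is the bias term x0
W = np.zeros((3, 1))               # weights start at 0
f_x = calculatePredicatedValue(X, W)
print(f_x.shape)                   # (2, 1) -- every prediction is 0.0 initially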

calculateGradient function

The gradient is calculated using the formula mentioned earlier in the article, and the weights are updated with it.

def calculateGradient(W, X, Y, f_x, learningRate):
    gradient = (Y - f_x) * X
    gradient = np.sum(gradient, axis=0)
    temp = np.array(learningRate * gradient).reshape(W.shape)
    W = W + temp
    return gradient, W
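To see one update step in isolation, here is a toy example with made-up numbers (it relies on the calculatePredicatedValue and calculateGradient functions above): since the initial predictions are all zero, the gradient is just the column-wise sum of y(i) * x(i), and the weights move by the learning rate times that sum.

X = np.array([[1.0, 1.5, 2.3],
              [1.0, 0.2, 4.1]])
Y = np.array([[1.0], [2.0]])
W = np.zeros((3, 1))
f_x = calculatePredicatedValue(X, W)                    # all zeros
gradient, W_new = calculateGradient(W, X, Y, f_x, 0.01)
print(gradient)     # [ 3.   1.9 10.5] -> sums of (y - f_x) * x over both rows
print(W_new.T)      # [[0.03  0.019 0.105]]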

calculateSSE function

SSE is calculated using the above equation.

def calculateSSE(Y, f_x):
    sse = np.sum(np.square(f_x - Y))     
    return sse
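A quick sanity check with made-up numbers: if the targets are 1.0 and 2.0 and the predictions are 0.5 and 2.5, each residual is 0.5, so the SSE should be 0.25 + 0.25 = 0.5.

Y = np.array([[1.0], [2.0]])
f_x = np.array([[0.5], [2.5]])
print(calculateSSE(Y, f_x))    # 0.5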

Now that we have read through the complete code, let's look at the result of running the program.

The output looks like this:
0,0.0000,0.0000,0.0000,7475.3149
1,-0.0940,-0.5376,-0.2592,2111.5105
2,-0.1789,-0.7849,-0.3766,880.6980
3,-0.2555,-0.8988,-0.4296,538.8638
4,-0.3245,-0.9514,-0.4533,399.8092
5,-0.3867,-0.9758,-0.4637,316.1682
6,-0.4426,-0.9872,-0.4682,254.5126
7,-0.4930,-0.9926,-0.4699,205.8479
8,-0.5383,-0.9952,-0.4704,166.6932
9,-0.5791,-0.9966,-0.4704,135.0293
10,-0.6158,-0.9973,-0.4702,109.3892
11,-0.6489,-0.9978,-0.4700,88.6197
12,-0.6786,-0.9981,-0.4697,71.7941
13,-0.7054,-0.9983,-0.4694,58.1631
14,-0.7295,-0.9985,-0.4691,47.1201
15,-0.7512,-0.9987,-0.4689,38.1738
16,-0.7708,-0.9988,-0.4687,30.9261
17,-0.7883,-0.9989,-0.4685,25.0544
18,-0.8042,-0.9990,-0.4683,20.2975
19,-0.8184,-0.9991,-0.4681,16.4438
20,-0.8312,-0.9992,-0.4680,13.3218
21,-0.8427,-0.9993,-0.4678,10.7925
22,-0.8531,-0.9994,-0.4677,8.7434
23,-0.8625,-0.9994,-0.4676,7.0833
24,-0.8709,-0.9995,-0.4675,5.7385
25,-0.8785,-0.9995,-0.4674,4.6490
26,-0.8853,-0.9996,-0.4674,3.7663
27,-0.8914,-0.9996,-0.4673,3.0512
28,-0.8969,-0.9997,-0.4672,2.4719
29,-0.9019,-0.9997,-0.4672,2.0026
30,-0.9064,-0.9997,-0.4671,1.6224
31,-0.9104,-0.9998,-0.4671,1.3144
32,-0.9140,-0.9998,-0.4670,1.0648
33,-0.9173,-0.9998,-0.4670,0.8626
34,-0.9202,-0.9998,-0.4670,0.6989
35,-0.9229,-0.9998,-0.4669,0.5662
36,-0.9252,-0.9999,-0.4669,0.4587
37,-0.9274,-0.9999,-0.4669,0.3716
38,-0.9293,-0.9999,-0.4669,0.3010
39,-0.9310,-0.9999,-0.4668,0.2439
40,-0.9326,-0.9999,-0.4668,0.1976
41,-0.9340,-0.9999,-0.4668,0.1601
42,-0.9353,-0.9999,-0.4668,0.1297
43,-0.9364,-0.9999,-0.4668,0.1051
44,-0.9374,-0.9999,-0.4668,0.0851
45,-0.9384,-0.9999,-0.4668,0.0690
46,-0.9392,-0.9999,-0.4668,0.0559
47,-0.9399,-1.0000,-0.4667,0.0453
48,-0.9406,-1.0000,-0.4667,0.0367
49,-0.9412,-1.0000,-0.4667,0.0297
50,-0.9418,-1.0000,-0.4667,0.0241
51,-0.9423,-1.0000,-0.4667,0.0195
52,-0.9427,-1.0000,-0.4667,0.0158
53,-0.9431,-1.0000,-0.4667,0.0128
54,-0.9434,-1.0000,-0.4667,0.0104
55,-0.9438,-1.0000,-0.4667,0.0084
56,-0.9441,-1.0000,-0.4667,0.0068
57,-0.9443,-1.0000,-0.4667,0.0055
58,-0.9446,-1.0000,-0.4667,0.0045
59,-0.9448,-1.0000,-0.4667,0.0036
60,-0.9450,-1.0000,-0.4667,0.0029
61,-0.9451,-1.0000,-0.4667,0.0024
62,-0.9453,-1.0000,-0.4667,0.0019
63,-0.9454,-1.0000,-0.4667,0.0016
64,-0.9455,-1.0000,-0.4667,0.0013
65,-0.9457,-1.0000,-0.4667,0.0010
66,-0.9458,-1.0000,-0.4667,0.0008
67,-0.9458,-1.0000,-0.4667,0.0007
68,-0.9459,-1.0000,-0.4667,0.0005
69,-0.9460,-1.0000,-0.4667,0.0004
70,-0.9461,-1.0000,-0.4667,0.0004

The final program

import argparse
import csv
import numpy as np


def main():
    args = parser.parse_args()
    # save respective command line inputs into variables
    file, learningRate, threshold = args.data, float(
        args.learningRate), float(args.threshold)

    # read csv file; the last column is the target output and is separated from the input (X) as Y
    with open(file) as csvFile:
        reader = csv.reader(csvFile, delimiter=',')
        X = []
        Y = []
        for row in reader:
            X.append([1.0] + row[:-1])
            Y.append([row[-1]])

    # Convert data points into float and initialise weight vector with 0s.
    n = len(X)
    X = np.array(X).astype(float)
    Y = np.array(Y).astype(float)
    W = np.zeros(X.shape[1]).astype(float)
    # this matrix is transposed to match the necessary matrix dimensions for calculating dot product
    W = W.reshape(X.shape[1], 1).round(4)

    # Calculate the predicted output value
    f_x = calculatePredicatedValue(X, W)

    # Calculate the initial SSE
    sse_old = calculateSSE(Y, f_x)

    outputFile = 'solution_' + \
                 'learningRate_' + str(learningRate) + '_threshold_' \
                 + str(threshold) + '.csv'

    '''
        Output file is opened in writing mode and the data is written in the format mentioned in the post. After the
        first values are written, the gradient and updated weights are calculated using the calculateGradient function.
        An iteration variable is maintained to keep track on the number of times the batch linear regression is executed
        before it falls below the threshold value. In the infinite while loop, the predicted output value is calculated 
        again and new SSE value is calculated. If the absolute difference between the older(SSE from previous iteration) 
        and newer(SSE from current iteration) SSE is greater than the threshold value, then above process is repeated.
        The iteration is incremented by 1 and the current SSE is stored into previous SSE. If the absolute difference 
        between the older(SSE from previous iteration) and newer(SSE from current iteration) SSE falls below the 
        threshold value, the loop breaks and the last output values are written to the file.
    '''
    with open(outputFile, 'w', newline='') as csvFile:
        writer = csv.writer(csvFile, delimiter=',', quoting=csv.QUOTE_NONE, escapechar='')
        writer.writerow([*[0], *["{0:.4f}".format(val) for val in W.T[0]], *["{0:.4f}".format(sse_old)]])

        gradient, W = calculateGradient(W, X, Y, f_x, learningRate)

        iteration = 1
        while True:
            f_x = calculatePredicatedValue(X, W)
            sse_new = calculateSSE(Y, f_x)

            if abs(sse_new - sse_old) > threshold:
                writer.writerow([*[iteration], *["{0:.4f}".format(val) for val in W.T[0]], *["{0:.4f}".format(sse_new)]])
                gradient, W = calculateGradient(W, X, Y, f_x, learningRate)
                iteration += 1
                sse_old = sse_new
            else:
                break
        writer.writerow([*[iteration], *["{0:.4f}".format(val) for val in W.T[0]], *["{0:.4f}".format(sse_new)]])
    print("Output File Name: " + outputFile


def calculateGradient(W, X, Y, f_x, learningRate):
    gradient = (Y - f_x) * X
    gradient = np.sum(gradient, axis=0)
    # gradient = np.array([float("{0:.4f}".format(val)) for val in gradient])
    temp = np.array(learningRate * gradient).reshape(W.shape)
    W = W + temp
    return gradient, W


def calculateSSE(Y, f_x):
    sse = np.sum(np.square(f_x - Y))

    return sse


def calculatePredicatedValue(X, W):
    f_x = np.dot(X, W)
    return f_x


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument("-d", "--data", help="Data File")
    parser.add_argument("-l", "--learningRate", help="Learning Rate")
    parser.add_argument("-t", "--threshold", help="Threshold")
    main()

This article covered the mathematical concepts behind batch linear regression with the gradient descent method, using the sum of squared errors as the loss function. We did not look at how far the SSE can be driven down or at tuning the learning rate to keep it well behaved; instead, we saw how a threshold on the change in SSE determines when the linear regression has converged.

The program uses numpy to handle the data. It could also be written in basic Python without numpy, but that would require nested loops, increasing the time complexity to O(n*n). In any case, numpy arrays and matrices offer better memory efficiency. Also, if you prefer the pandas module, you are encouraged to use it and try to implement the same program with it.
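As a rough, hypothetical sketch of what that pure-Python version could look like (not code from the original post), the dot product and the gradient sum become explicit nested loops over rows and columns:

def predict(X, W):
    # plain-Python equivalent of np.dot(X, W) with one weight per column
    return [sum(x_ij * w_j for x_ij, w_j in zip(row, W)) for row in X]

def gradient_step(X, Y, W, learning_rate):
    # accumulate sum((y - f_x) * x) column by column, then update each weight
    f_x = predict(X, W)
    new_W = list(W)
    for j in range(len(W)):
        grad_j = sum((Y[i] - f_x[i]) * X[i][j] for i in range(len(X)))
        new_W[j] = W[j] + learning_rate * grad_j
    return new_W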

I hope you enjoyed this article. Thanks for reading.



Origin blog.csdn.net/m0_46510245/article/details/105095992