## 07 -- Logistic regression

### Introduction

We see the word "regression" in the name logistic regression, but note that it is not like the previous regression, which predicts a continuous value; it is actually a classic binary classification algorithm.

On machine-learning algorithm selection: try logistic regression first before moving to more complex models; prefer the simple option whenever it works.

Decision boundary of logistic regression: it may be nonlinear.

### Sigmoid function

Task: map the input value to a probability, and use that probability to complete the classification task.
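The function that does this mapping is the sigmoid, g(z) = 1 / (1 + e^(-z)), which squashes any real number into the interval (0, 1) so the output can be read as a probability. A minimal sketch:

```python
import numpy as np

def sigmoid(z):
    # g(z) = 1 / (1 + e^(-z)): maps (-inf, +inf) into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

print(sigmoid(0.0))   # 0.5: the midpoint, i.e. "could go either way"
```

Large positive inputs give values close to 1, large negative inputs give values close to 0, which is what makes a 0.5 threshold a natural decision rule.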

### Hands-on with Python

Case study: build a logistic regression model to predict whether a student is admitted to a university. Suppose you are the administrator of a university department and want to determine each applicant's chance of admission based on the results of two exams. You have historical data from previous applicants, which you can use as a training set for logistic regression. Each training example contains the applicant's scores on the two exams and the admission decision. The goal is to build a classification model that estimates the probability of admission from the exam scores.

Plotting the two exam scores, with points marked by the admission decision, gives a scatter plot of the training data.

With the scatter plot above in hand, the next step is to build the model.

Goal: build a classifier.

Set a threshold: the threshold determines the admission decision.

Modules to implement:

- `sigmoid`: maps the model output to a probability
- `model`: returns the predicted value
- `cost`: computes the loss for the given parameters
- `gradient`: computes the gradient direction for each parameter
- `descent`: updates the parameters
- `accuracy`: computes classification accuracy
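The six modules above can be sketched as follows. This is one possible vectorized NumPy formulation, not necessarily the original post's exact code; `X` is assumed to already contain a column of ones for the intercept:

```python
import numpy as np

def sigmoid(z):
    # Map any real value to a probability in (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def model(X, theta):
    # Predicted probability h(x) = sigmoid(X . theta).
    return sigmoid(X @ theta)

def cost(X, y, theta):
    # Cross-entropy loss: -mean(y*log(h) + (1-y)*log(1-h)).
    h = model(X, theta)
    eps = 1e-12  # keep log() away from 0
    return -np.mean(y * np.log(h + eps) + (1 - y) * np.log(1 - h + eps))

def gradient(X, y, theta):
    # Gradient of the loss w.r.t. each parameter: X^T (h - y) / m.
    return X.T @ (model(X, theta) - y) / len(y)

def descent(X, y, theta, alpha, n_iters):
    # Plain batch gradient descent for a fixed number of iterations.
    for _ in range(n_iters):
        theta = theta - alpha * gradient(X, y, theta)
    return theta

def accuracy(X, y, theta):
    # Classify with threshold 0.5 and compare against the labels.
    preds = (model(X, theta) >= 0.5).astype(int)
    return np.mean(preds == y)
```

Each function matches one item in the module list: `descent` repeatedly calls `gradient` to update the parameters, and `accuracy` applies the 0.5 threshold mentioned earlier.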

Three common stopping criteria for gradient descent:

1. Iteration count: stop once a preset number of iterations has been reached.
2. Loss value: stop once the change in the loss between iterations is small enough.
3. Gradient: stop once the gradient itself barely changes.
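The three criteria above can be sketched as a single helper. The function name and the string codes are illustrative, not from the original post:

```python
import numpy as np

def should_stop(stop_type, value, thresh):
    """Decide whether to stop, given the chosen criterion (hypothetical helper).

    stop_type: "iterations" (value = current iteration count),
               "loss"       (value = history of loss values),
               "gradient"   (value = current gradient vector).
    thresh:    the cutoff for that criterion.
    """
    if stop_type == "iterations":   # 1) fixed number of iterations
        return value > thresh
    if stop_type == "loss":         # 2) change in loss small enough
        return abs(value[-1] - value[-2]) < thresh
    if stop_type == "gradient":     # 3) gradient norm small enough
        return np.linalg.norm(value) < thresh
    raise ValueError(f"unknown stop_type: {stop_type}")
```

Inside the descent loop, one of these checks runs each iteration; which one you pick trades off simplicity (iteration count) against sensitivity to actual convergence (loss or gradient).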

In addition, before doing the analysis we should first shuffle the data (randomly permute the training examples).

As mentioned above, NumPy provides a `shuffle` method for this; after calling it, the order of the examples has been randomized.
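A minimal sketch of that shuffle step. Features and labels are stacked into one array first so that `np.random.shuffle`, which permutes rows in place, keeps each example paired with its label:

```python
import numpy as np

# Tiny illustrative dataset (values are arbitrary).
X = np.arange(10).reshape(5, 2).astype(float)
y = np.array([0, 0, 1, 1, 1])

# Stack features and labels, shuffle rows in place, then split back apart.
data = np.column_stack([X, y])
np.random.shuffle(data)
X_shuffled, y_shuffled = data[:, :-1], data[:, -1]
```

Shuffling before splitting into batches matters because the raw data is often ordered (e.g. all positives first), which would bias each gradient step.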
When performing gradient descent, we usually prefer to use more samples per update and to turn the learning rate down; this makes the final result more likely to converge, but it also increases the modeling time. Another approach is to preprocess the data first (e.g. standardize the features) and only then fit the model; with this preprocessing, convergence is much faster.
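A minimal sketch of the preprocessing step, assuming plain zero-mean, unit-variance standardization (a common choice for speeding up gradient-descent convergence; the original post does not specify which scaling it used):

```python
import numpy as np

def standardize(X):
    # Scale each feature to zero mean and unit variance.
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    return (X - mu) / sigma

# Example: raw exam-score-like features on very different scales.
X = np.array([[30.0, 80.0],
              [60.0, 90.0],
              [90.0, 100.0]])
X_scaled = standardize(X)
```

With all features on the same scale, the loss surface is better conditioned, so the same learning rate makes progress in every direction and fewer iterations are needed.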

Origin blog.csdn.net/Escid/article/details/90726731