A K-nearest neighbor classification example

Loading data

This example uses the Iris data set.

# Import the Iris data loader from sklearn.datasets.
from sklearn.datasets import load_iris
# Use the loader to read the data and store it in the variable iris.
iris = load_iris()
# Check the size of the data.
iris.data.shape
(150, 4)
# Read the data set description -- a good habit for any machine learning practitioner.
print(iris.DESCR)
Iris Plants Database
====================

Notes
-----
Data Set Characteristics:
    :Number of Instances: 150 (50 in each of three classes)
    :Number of Attributes: 4 numeric, predictive attributes and the class
    :Attribute Information:
        - sepal length in cm
        - sepal width in cm
        - petal length in cm
        - petal width in cm
        - class:
                - Iris-Setosa
                - Iris-Versicolour
                - Iris-Virginica
    :Summary Statistics:

    ============== ==== ==== ======= ===== ====================
                    Min  Max   Mean    SD   Class Correlation
    ============== ==== ==== ======= ===== ====================
    sepal length:   4.3  7.9   5.84   0.83    0.7826
    sepal width:    2.0  4.4   3.05   0.43   -0.4194
    petal length:   1.0  6.9   3.76   1.76    0.9490  (high!)
    petal width:    0.1  2.5   1.20  0.76     0.9565  (high!)
    ============== ==== ==== ======= ===== ====================

    :Missing Attribute Values: None
    :Class Distribution: 33.3% for each of 3 classes.
    :Creator: R.A. Fisher
    :Donor: Michael Marshall (MARSHALL%PLU@io.arc.nasa.gov)
    :Date: July, 1988

This is a copy of UCI ML iris datasets.
http://archive.ics.uci.edu/ml/datasets/Iris

The famous Iris database, first used by Sir R.A Fisher

This is perhaps the best known database to be found in the
pattern recognition literature.  Fisher's paper is a classic in the field and
is referenced frequently to this day.  (See Duda & Hart, for example.)  The
data set contains 3 classes of 50 instances each, where each class refers to a
type of iris plant.  One class is linearly separable from the other 2; the
latter are NOT linearly separable from each other.

References
----------
   - Fisher,R.A. "The use of multiple measurements in taxonomic problems"
     Annual Eugenics, 7, Part II, 179-188 (1936); also in "Contributions to
     Mathematical Statistics" (John Wiley, NY, 1950).
   - Duda,R.O., & Hart,P.E. (1973) Pattern Classification and Scene Analysis.
     (Q327.D83) John Wiley & Sons.  ISBN 0-471-22361-1.  See page 218.
   - Dasarathy, B.V. (1980) "Nosing Around the Neighborhood: A New System
     Structure and Classification Rule for Recognition in Partially Exposed
     Environments".  IEEE Transactions on Pattern Analysis and Machine
     Intelligence, Vol. PAMI-2, No. 1, 67-71.
   - Gates, G.W. (1972) "The Reduced Nearest Neighbor Rule".  IEEE Transactions
     on Information Theory, May 1972, 431-433.
   - See also: 1988 MLC Proceedings, 54-64.  Cheeseman et al"s AUTOCLASS II
     conceptual clustering system finds 3 classes in the data.
   - Many, many more ...

The output above describes the data set: the Iris data contains 150 samples in total, distributed evenly across 3 different subspecies, and each sample is described by 4 features that capture the shape of its petals and sepals.
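As a quick sanity check (not part of the original walkthrough), the even class distribution can be confirmed directly from the loaded data:

import numpy as np
# Count how many samples fall into each of the 3 classes; the expected result is [50 50 50].
print(np.bincount(iris.target))
print(iris.target_names)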

Data processing

Since no separate test set is provided, we follow the usual convention and split the data randomly: 25% of the samples are held out for testing and the remaining 75% are used to train the model.

# Import train_test_split from sklearn.cross_validation for splitting the data.
from sklearn.cross_validation import train_test_split
# Use train_test_split with a fixed random seed (random_state) to hold out 25% of the data as the test set.
x_train, x_test, y_train, y_test = train_test_split(iris.data, iris.target,
                                                    test_size=0.25, random_state=33)
C:\ProgramData\Anaconda3\lib\site-packages\sklearn\cross_validation.py:41: DeprecationWarning: This module was deprecated in version 0.18 in favor of the model_selection module into which all the refactored classes and functions are moved. Also note that the interface of the new CV iterators are different from that of this module. This module will be removed in 0.20.
  "This module will be removed in 0.20.", DeprecationWarning)

Model construction

Using the K-nearest neighbor classifier to predict the Iris species.

# Import the data standardization module from sklearn.preprocessing.
from sklearn.preprocessing import StandardScaler
# Import KNeighborsClassifier, the K-nearest neighbor classifier, from sklearn.neighbors.
from sklearn.neighbors import KNeighborsClassifier
# Standardize the feature data of both the training and the test set.
ss = StandardScaler()
x_train = ss.fit_transform(x_train)
x_test = ss.transform(x_test)
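Note that fit_transform learns the per-feature mean and standard deviation from the training data only, and transform then applies those same statistics to the test data, so no information leaks from the test set. Conceptually this is equivalent to the following sketch (illustrative only; x_train_raw and x_test_raw stand for the unscaled arrays and are not variables from the code above):

import numpy as np

mean = x_train_raw.mean(axis=0)            # per-feature mean of the training set
std = x_train_raw.std(axis=0)              # per-feature standard deviation of the training set
x_train_scaled = (x_train_raw - mean) / std
x_test_scaled = (x_test_raw - mean) / std  # the test set reuses the training statistics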

Model Assessment

The performance of the K-nearest neighbor model on the classic Iris species prediction task is evaluated with four metrics: accuracy, precision, recall, and the F1 score.

# Use the K-nearest neighbor classifier to predict the classes of the test data; store the predictions in y_predict.
knc = KNeighborsClassifier()
knc.fit(x_train, y_train)
y_predict = knc.predict(x_test)
# Evaluate accuracy with the classifier's built-in score method.
print('The accuracy of K- Nearest Neighbor Classifier is', knc.score(x_test,y_test))
The accuracy of K- Nearest Neighbor Classifier is 0.8947368421052632
# Use the classification_report module from sklearn.metrics for a more detailed analysis of the predictions.
from sklearn.metrics import classification_report
print(classification_report(y_test, y_predict, target_names = iris.target_names))

             precision    recall  f1-score   support

     setosa       1.00      1.00      1.00         8
 versicolor       0.73      1.00      0.85        11
  virginica       1.00      0.79      0.88        19

avg / total       0.92      0.89      0.90        38

On the 38 test samples, the K-nearest neighbor classifier reaches an accuracy of about 89.474% for Iris classification; the averaged precision, recall, and F1 score are 0.92, 0.89, and 0.90 respectively.
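The averaged row of the report is the support-weighted mean of the per-class values. Assuming the same y_test and y_predict as above, the same figures can also be computed directly (an illustrative sketch):

from sklearn.metrics import precision_score, recall_score, f1_score
# Support-weighted averages over the three classes, matching the last row of the report.
print(precision_score(y_test, y_predict, average='weighted'))
print(recall_score(y_test, y_predict, average='weighted'))
print(f1_score(y_test, y_predict, average='weighted'))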

K-nearest neighbors is a non-parametric model and is conceptually very simple. However, its decision procedure comes at the cost of high computational complexity and memory consumption: to classify each test sample, the model must keep all training samples in memory, traverse them one by one to compute similarities, sort them, and select the labels of the K nearest training samples before making the classification decision. The resulting algorithmic complexity is quadratic, so once the data grows even moderately in scale, users have to weigh the cost of much longer computation times.
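To make this cost explicit, the prediction step can be sketched as a brute-force loop (an illustration of the idea only, not how scikit-learn implements it internally):

import numpy as np

def knn_predict(x_train, y_train, x_test, k=5):
    predictions = []
    for x in x_test:                                     # every test sample ...
        distances = np.linalg.norm(x_train - x, axis=1)  # ... is compared against every training sample
        nearest = np.argsort(distances)[:k]              # sort the distances and keep the k nearest neighbors
        labels = y_train[nearest]
        predictions.append(np.bincount(labels).argmax()) # majority vote among the k neighbors' labels
    return np.array(predictions)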
