Big Data: The KNN Algorithm

KNN classifies a sample by measuring the distances between feature vectors. The idea: if the majority of the k most similar samples (i.e., the nearest neighbors in feature space) of a sample belong to a certain class, then that sample belongs to the same class, where K is usually an integer no greater than 20. In KNN, the chosen neighbors are all objects that have already been correctly classified, and the method decides the class of the sample to be classified based only on the classes of its one or few nearest neighbors.

The algorithm can be described as follows (a runnable sketch of these steps appears right after the list):

1) Compute the distance between the test sample and each training sample;

2) Sort the distances in ascending order;

3) Take the K points with the smallest distances;

4) Count the frequency of each class among these K points;

5) Return the most frequent class among the K points as the predicted class of the test sample.
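
As a quick illustration, here is a minimal, self-contained sketch of these five steps with NumPy; the toy 2-D points, labels, and k below are invented for demonstration, not part of the original example:

import numpy as np

# hypothetical toy data: four 2-D training points and their class labels
train = np.array([[1.0, 1.1], [1.0, 1.0], [0.0, 0.0], [0.0, 0.1]])
labels = np.array(['A', 'A', 'B', 'B'])
test = np.array([0.1, 0.2])
k = 3

# 1) Euclidean distances from the test point to every training point
dist = np.sqrt(((train - test) ** 2).sum(axis=1))
# 2) + 3) positions of the k smallest distances
nearest = dist.argsort()[:k]
# 4) class frequencies among the k nearest neighbors
classes, counts = np.unique(labels[nearest], return_counts=True)
# 5) the most frequent class wins
print(classes[counts.argmax()])   # -> 'B'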

Application 1: writing the KNN algorithm ourselves:

import pandas as pd
import numpy as np

'''
Description of the dating sample data (datingTestSet.txt, tab-separated):
1. FlyMiles: frequent-flyer miles earned per year;
2. PlayTime: percentage of time spent playing video games;
3. IceCream: kilograms of ice cream consumed per week;
4. LikeDegree: impression of the date (the class label)
'''
# url is the path where the data file is stored
url = 'Data/datingTestSet.txt'

data = pd.read_table(url, sep='\t', header=None, names=['FlyMiles', 'PlayTime', 'IceCream', 'LikeDegree'])
# map the LikeDegree column from string labels to integer codes
like_mapping = {label: idx for idx, label in enumerate(np.unique(data['LikeDegree']))}
data['LikeDegree'] = data['LikeDegree'].map(like_mapping)

# min-max normalization: rescale every feature to the [0, 1] range
def autoNorm(data):
    normal = (data - data.min()) / (data.max() - data.min())
    scope = data.max() - data.min()   # per-feature range, kept for rescaling new samples
    min_val = data.min()              # per-feature minimum (renamed to avoid shadowing the builtin min)
    return normal, scope, min_val

# KNN classifier: Euclidean distance plus majority vote among the k nearest
def knn(inX, normal, label, k):
    data_sub = normal - inX                 # feature-wise differences
    data_square = data_sub ** 2             # squared differences (replaces the deprecated applymap)
    data_sum = data_square.sum(axis=1)      # sum over the feature columns
    data_sqrt = data_sum.map(np.sqrt)       # Euclidean distances
    dis_sort = data_sqrt.values.argsort()   # positions sorted by ascending distance
    # index positionally so the function works no matter where the label
    # Series' index starts (the original hard-coded "+ 200" for numTest here)
    k_label = label.iloc[dis_sort[:k]]
    label_sort = k_label.value_counts()     # class frequencies among the k nearest
    res_label = label_sort.index[0]         # the most frequent class wins
    return res_label

# test harness: hold out the first 20% of rows and classify each against the rest
def datingTest():
    normal, scope, min_val = autoNorm(data[['FlyMiles', 'PlayTime', 'IceCream']])
    label = data.iloc[:, -1]
    m = normal.shape[0]
    numTest = int(m * 0.2)   # number of held-out test rows
    errorCount = 0.0
    for i in range(numTest):
        result = knn(normal.iloc[i, :], normal.iloc[numTest : m, :], label[numTest : m], 3)
        print("the classifier came back with: %d, the real answer is: %d" % (result, label[i]))
        if result != label[i]:
            errorCount += 1.0
    print("the total error rate is: %f" % (errorCount / float(numTest)))

datingTest()
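
Building on the functions above, here is a hedged usage sketch for classifying a single brand-new sample; the helper name classifyNew and the feature values passed to it are invented for illustration. The key point is that a new sample must be rescaled with the same scope and min_val as the training data:

# hypothetical helper: classify one new sample with the functions above
def classifyNew(flyMiles, playTime, iceCream, k=3):
    features = data[['FlyMiles', 'PlayTime', 'IceCream']]
    normal, scope, min_val = autoNorm(features)
    label = data.iloc[:, -1]
    # rescale the new sample exactly as the training data was rescaled
    inX = (pd.Series({'FlyMiles': flyMiles, 'PlayTime': playTime,
                      'IceCream': iceCream}) - min_val) / scope
    return knn(inX, normal, label, k)

# the feature values below are made up for demonstration;
# the result is the integer code produced by like_mapping
print(classifyNew(40000, 8.0, 0.5))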

Application 2: using the sklearn library:

import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

data = pd.read_table('Data/datingTestSet.txt', sep='\t', header=None, names=['FlyMiles', 'PlayTime', 'IceCream', 'LikeDegree'])
like_mapping = {label: idx for idx, label in enumerate(np.unique(data['LikeDegree']))}
data['LikeDegree'] = data['LikeDegree'].map(like_mapping)
X = data.iloc[:, 0:3]
y = data.iloc[:, -1]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=32)
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
right_rate = knn.score(X_test, y_test)
print('the right rate is: %f' % right_rate)
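
One caveat: unlike Application 1, this version passes the raw, unscaled features to the classifier, and since KNN is distance-based, the large-valued FlyMiles column will dominate the Euclidean distance. A minimal sketch of adding the same min-max scaling on the sklearn side, assuming the variables from the block above are still in scope:

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

# scale each feature to [0, 1] before the distance computation, then classify
scaled_knn = make_pipeline(MinMaxScaler(), KNeighborsClassifier(n_neighbors=5))
scaled_knn.fit(X_train, y_train)
print('the right rate with scaling is: %f' % scaled_knn.score(X_test, y_test))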

Reposted from www.cnblogs.com/fredkeke/p/9098372.html