Machine Learning: the K-Nearest Neighbors (KNN) Algorithm

KNN predicts the label of a query point from the labels of its k nearest training samples in feature space. Example 1 below uses scikit-learn's NearestNeighbors to find those neighbours; Example 2 uses KNeighborsClassifier to classify with them.

Example 1:

from sklearn.neighbors import NearestNeighbors
import numpy as np

X = np.array([[-1, -1], [-2, -1], [-3, -2], [1, 1], [2, 1], [3, 2]])
nbrs = NearestNeighbors(n_neighbors=2, algorithm='ball_tree').fit(X)

# n_neighbors=2: find each point's 2 nearest neighbours in X (their distances and indices)
distances, indices = nbrs.kneighbors(X)

print(distances)
print(indices)
# connectivity matrix: row i has a 1 in the columns of point i's 2 nearest neighbours in X (which include the point itself)
print(nbrs.kneighbors_graph(X).toarray())
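
Once fitted, the same model can also be queried with points that are not in X. A minimal sketch (the query point [0, 0] is made up for illustration):

# query the 2 nearest training points for a new point [0, 0]
dist, idx = nbrs.kneighbors([[0, 0]])
# distances from [0, 0] to its 2 nearest neighbours in X
print(dist)
# row indices of those neighbours in X
print(idx)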

Example 2:

from sklearn.neighbors import KNeighborsClassifier

X = [[0], [1], [2], [3]]
Y = [0, 0, 1, 1]
neigh = KNeighborsClassifier(n_neighbors=3)
# n_neighbors=3: train the classifier on X and Y; X holds the inputs, Y the target classes
neigh.fit(X, Y)
# predict the class (0 or 1) for the input 1.1
print(neigh.predict([[1.1]]))
# predict the class probabilities for the input 0.9
print(neigh.predict_proba([[0.9]]))
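
For the input 1.1 with k = 3, the three nearest training samples are 1, 2 and 0, whose labels are 0, 1 and 0, so the majority vote gives class 0. A minimal from-scratch sketch of that vote (plain Python, not sklearn's implementation), for illustration only:

from collections import Counter

# same training data as above, stored as (feature, label) pairs
train = [(0, 0), (1, 0), (2, 1), (3, 1)]
query = 1.1
k = 3

# keep the k training samples closest to the query point
nearest = sorted(train, key=lambda t: abs(t[0] - query))[:k]
votes = Counter(label for _, label in nearest)
# the labels of the 3 nearest samples are 0, 1, 0, so the majority class is 0
print(votes.most_common(1)[0][0])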

For details, see 《Web安全之机器学习入门》 (Machine Learning for Web Security):
https://github.com/duoergun0729/1book/

Reposted from blog.csdn.net/marywang56/article/details/79622752