Artificial Intelligence Introduction Experiment - KNN

1. The purpose of the experiment

1) Understand the basic concepts of KNN;
2) Understand how to use MindSpore to conduct KNN experiments.

2. Experimental tasks
Use MindSpore to run a KNN experiment on the Wine dataset.
Experimental results:
Experimental principle:
The k-nearest neighbor (k-NN) method is a basic classification and regression method and one of the most commonly used supervised learning methods. The algorithm assumes a training data set is given in which the category of each instance is already known. To classify a new instance, it finds the k training instances nearest to it and predicts the class by majority vote among those neighbors. The k-nearest neighbor method has three elements: the distance measure, the choice of k, and the classification decision rule. Commonly used distance measures are the Euclidean distance and the more general Lp (Minkowski) distance. When k is small, the model is more complex and prone to overfitting; when k is large, the model is simpler and prone to underfitting, so the choice of k has a significant impact on the classification results. The choice of k reflects a trade-off between the approximation error and the estimation error, and the optimal k is usually selected by cross-validation. The classification decision rule is usually majority voting: the class of the input instance is the class that appears most often among its k nearest training instances.
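To make the distance measures concrete, here is a minimal plain-NumPy sketch (illustrative only, not part of the MindSpore experiment below; the vectors a and b are made-up attribute values). With p = 2 the Lp (Minkowski) distance is the Euclidean distance, and with p = 1 it is the Manhattan distance.

import numpy as np

def lp_distance(a, b, p=2):
    # Lp (Minkowski) distance between two feature vectors; p = 2 gives the Euclidean distance
    return np.sum(np.abs(a - b) ** p) ** (1.0 / p)

a = np.array([13.2, 1.78, 2.14], dtype=np.float32)  # made-up attribute values
b = np.array([12.4, 3.10, 2.30], dtype=np.float32)
print(lp_distance(a, b, p=2))  # Euclidean distance
print(lp_distance(a, b, p=1))  # Manhattan distance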
Advantages of the KNN algorithm:
Easy to use. Compared with other algorithms, KNN is relatively simple and intuitive, and its principle can be understood without much mathematical background.
Fast model training. KNN is a lazy-learning algorithm: training essentially just stores the data, so no explicit model has to be fitted in advance.
Good predictive performance.
Relatively insensitive to outliers.

import os
# os.environ['DEVICE_ID'] = '4'
import csv
import numpy as np

import mindspore as ms
from mindspore import context
from mindspore import nn
from mindspore.ops import operations as P
from mindspore.ops import functional as F

context.set_context(device_target="CPU")
with open('wine.data') as csv_file:
    data = list(csv.reader(csv_file, delimiter=','))
print(data[56:62] + data[130:133])  # print a few samples from each class
X = np.array([[float(x) for x in s[1:]] for s in data[:178]], np.float32)  # 13 chemical attributes
Y = np.array([s[0] for s in data[:178]], np.int32)  # class labels (1, 2, 3)
from matplotlib import pyplot as plt
%matplotlib inline
attrs = ['Alcohol', 'Malic acid', 'Ash', 'Alcalinity of ash', 'Magnesium', 'Total phenols',
         'Flavanoids', 'Nonflavanoid phenols', 'Proanthocyanins', 'Color intensity', 'Hue',
         'OD280/OD315 of diluted wines', 'Proline']
plt.figure(figsize=(10, 8))
for i in range(0, 4):
    plt.subplot(2, 2, i+1)
    a1, a2 = 2 * i, 2 * i + 1
    plt.scatter(X[:59, a1], X[:59, a2], label='1')
    plt.scatter(X[59:130, a1], X[59:130, a2], label='2')
    plt.scatter(X[130:, a1], X[130:, a2], label='3')
    plt.xlabel(attrs[a1])
    plt.ylabel(attrs[a2])
    plt.legend()
plt.show()
train_idx = np.random.choice(178, 128, replace=False)
test_idx = np.array(list(set(range(178)) - set(train_idx)))
X_train, Y_train = X[train_idx], Y[train_idx]
X_test, Y_test = X[test_idx], Y[test_idx]

class KnnNet(nn.Cell):
    def __init__(self, k):
        super(KnnNet, self).__init__()
        self.tile = P.Tile()
        self.sum = P.ReduceSum()
        self.topk = P.TopK()
        self.k = k

    def construct(self, x, X_train):
        # Tile input x to match the number of samples in X_train
        x_tile = self.tile(x, (128, 1))
        square_diff = F.square(x_tile - X_train)
        square_dist = self.sum(square_diff, 1)
        dist = F.sqrt(square_dist)
        # negate the distance: the larger -dist is, the nearer the sample, so TopK returns the k nearest neighbors
        values, indices = self.topk(-dist, self.k)
        return indices


def knn(knn_net, x, X_train, Y_train):
    x, X_train = ms.Tensor(x), ms.Tensor(X_train)
    indices = knn_net(x, X_train)
    # count votes per class among the k nearest neighbors; class labels are 1-3, so 4 slots suffice for any k
    topk_cls = [0] * 4
    for idx in indices.asnumpy():
        topk_cls[Y_train[idx]] += 1
    cls = np.argmax(topk_cls)
    return cls
acc = 0
knn_net = KnnNet(5)
for x, y in zip(X_test, Y_test):
    pred = knn(knn_net, x, X_train, Y_train)
    acc += (pred == y)
    print('label: %d, prediction: %s' % (y, pred))
print('Validation accuracy is %f' % (acc/len(Y_test)))
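
The principle section notes that the optimal k is usually selected by cross-validation. As a simple hedged sketch (not part of the original experiment, and only a hold-out comparison rather than full cross-validation), the same train/test split can be reused to compare a few candidate values of k with the KnnNet and knn defined above:

# compare several k values on the existing hold-out split
# (assumes X_train still has 128 samples, since KnnNet hard-codes the tile size)
for k in [1, 3, 5, 7, 9]:
    net = KnnNet(k)
    correct = sum(knn(net, x, X_train, Y_train) == y for x, y in zip(X_test, Y_test))
    print('k = %d, validation accuracy = %f' % (k, correct / len(Y_test)))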

Origin blog.csdn.net/Recursions/article/details/128529030