nearest neighbor classifier
Lazy learning
Typical classifiers, such as decision trees and support vector machines, begin learning a mapping model from input attributes to class labels as soon as training data becomes available. Such strategies are called eager learning methods. In contrast, lazy learning algorithms defer any modeling of the training data until a test example actually needs to be classified. An example of lazy learning is the Rote classifier, which memorizes the entire training set and classifies a test example only when it exactly matches some training example. The obvious flaw of this algorithm is that many test examples cannot be classified at all, because no training example matches them exactly.
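The Rote classifier described above can be sketched in a few lines. This is an illustrative implementation, not code from the text; the class and method names are my own. It memorizes the training set verbatim and refuses to classify anything that is not an exact attribute match.

```python
class RoteClassifier:
    """Lazy learner: memorize everything, classify only on exact matches."""

    def fit(self, X, y):
        # No model is built; the training data is simply stored.
        self.memory = {tuple(x): label for x, label in zip(X, y)}
        return self

    def predict(self, x):
        # Return the stored label on an exact attribute match, else None.
        return self.memory.get(tuple(x), None)

clf = RoteClassifier().fit([[1, 0], [0, 1]], ["a", "b"])
print(clf.predict([1, 0]))  # exact match -> "a"
print(clf.predict([1, 1]))  # no match -> None: cannot be classified
```

The second call illustrates the flaw noted above: any test example without an exact counterpart in the training set goes unclassified.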
nearest neighbor classifier
A slight improvement over the Rote classifier, making it more flexible, is to find all training examples whose attributes are close to those of the test example. These training examples are called nearest neighbors, and they can be used to determine the test example's class label. The principle is the same as the proverb "birds of a feather flock together". The nearest neighbor classifier treats each training example as a point in a d-dimensional space, where d is the number of attributes; given a test example, it computes the example's proximity to the training points and uses its k nearest neighbors to determine its class label.
Obviously, the choice of k matters here: if k is too small, the classifier is susceptible to noise in the training data; if k is too large, the neighborhood may include many points belonging to other classes. A common rule is to assign the test example the class held by the majority of its k nearest neighbors. To reduce the influence of k, each neighbor's vote can be weighted by its distance to the test example, for example with the weight w = 1/d², so that closer neighbors have a greater say.
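The scheme above can be sketched as a short distance-weighted k-NN routine. This is illustrative code under my own naming, not an implementation from the text; it uses Euclidean distance for proximity and the w = 1/d² weighting just described.

```python
import math
from collections import defaultdict

def knn_predict(train_X, train_y, x, k=3):
    # Proximity: Euclidean distance from the test point to every training point.
    dists = sorted((math.dist(p, x), label) for p, label in zip(train_X, train_y))
    # Distance-weighted vote among the k nearest neighbors: w = 1 / d**2,
    # so closer neighbors count more; an exact match dominates outright.
    votes = defaultdict(float)
    for d, label in dists[:k]:
        votes[label] += 1.0 / (d * d) if d > 0 else float("inf")
    return max(votes, key=votes.get)

X = [[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]]
y = ["low", "low", "low", "high", "high", "high"]
print(knn_predict(X, y, [0.5, 0.5], k=3))  # -> "low"
```

Note that, unlike the Rote classifier, this always produces a label: even a test example far from all training points still has k nearest neighbors.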
Pros and cons of nearest neighbor classifiers
Advantages of nearest neighbor classifiers
- There is no need to build a model for the training set.
- Nearest neighbor classifiers can generate decision boundaries of any shape.
Disadvantages of nearest neighbor classifiers
- Classification is based on local information only, and is therefore susceptible to noise in the training data.
- Often the training set needs to be preprocessed before it can be used.
- Classifying a test example is slow, since it requires computing the proximity to every training example.