Classifiers for Machine Learning - A Summary of Using Various Classifiers in MATLAB (Random Forest, Support Vector Machine, K-Nearest Neighbor, Naive Bayes, etc.)

      Commonly used classifiers in MATLAB include the random forest classifier, support vector machine (SVM), K-nearest neighbor (KNN) classifier, naive Bayes, ensemble learning methods, and the discriminant analysis classifier. The relevant MATLAB functions for each classifier are used as follows.

First, a unified description of the variables used throughout this introduction:

    train_data - training samples; each row of the matrix is one sample and each column is one feature

    train_label - training sample labels, as a column vector

    test_data - test samples; each row of the matrix is one sample and each column is one feature

    test_label - test sample labels, as a column vector

① Random Forest classifier (RF)

    TB=TreeBagger(nTree,train_data,train_label);

    predict_label=predict(TB,test_data);

② Support Vector Machine (SVM)

    SVMmodel=svmtrain(train_data,train_label);

    predict_label=svmclassify(SVMmodel,test_data);

(Note: svmtrain/svmclassify separate only two classes and were removed in MATLAB R2018a; newer releases use fitcsvm, or fitcecoc for multiclass, together with predict.)

③ K-nearest neighbor classifier (KNN)

    KNNmodel=ClassificationKNN.fit(train_data,train_label,'NumNeighbors',1);

    predict_label=predict(KNNmodel,test_data);

④ Naive Bayes

    Bayesmodel=NaiveBayes.fit(train_data,train_label);

    predict_label=predict(Bayesmodel,test_data);

⑤ Ensembles for Boosting

    Bmodel=fitensemble(train_data,train_label,'AdaBoostM1',100,'tree','type','classification');

    predict_label=predict(Bmodel,test_data);

⑥ Discriminant Analysis Classifier

    DACmodel=ClassificationDiscriminant.fit(train_data,train_label);

    predict_label=predict(DACmodel,test_data);

The specific usage is shown below. The exercise data can be downloaded from http://en.wikipedia.org/wiki/Iris_flower_data_set. Briefly, the data set contains flowers of 3 species; the species differ in sepal length, sepal width, petal length, and petal width, so the species can be classified from these four features.
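One way to obtain the four variables described above (not part of the original post; it assumes the Statistics and Machine Learning Toolbox, whose built-in fisheriris data set contains exactly these measurements) is a random hold-out split:

%% Load the iris data and build train/test variables (assumed setup; adjust as needed)
load fisheriris                         % meas: 150x4 features, species: 150x1 cellstr
labels=grp2idx(species);                % map species names to numeric labels 1..3
cv=cvpartition(labels,'HoldOut',0.3);   % hold out 30% of the samples for testing
train_data=meas(training(cv),:);
train_label=labels(training(cv));
test_data=meas(test(cv),:);
test_label=labels(test(cv));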

%% Random Forest classifier (Random Forest)
nTree=10;                                        % number of trees in the forest
B=TreeBagger(nTree,train_data,train_label,'Method','classification');
predictl=predict(B,test_data);                   % cell array of label strings
predict_label=str2double(predictl);              % convert to numeric labels
Forest_accuracy=length(find(predict_label == test_label))/length(test_label)*100;

%% Support Vector Machines
% Note: svmtrain separates only two classes (it errors on the 3-class iris data)
% and was removed in R2018a; use fitcsvm (binary) or fitcecoc (multiclass) instead.
% SVMStruct = svmtrain(train_data, train_label);
% predict_label=svmclassify(SVMStruct,test_data);  % returns numeric labels directly
% SVM_accuracy=length(find(predict_label == test_label))/length(test_label)*100;


%% K nearest neighbor classifier (KNN)
% mdl = ClassificationKNN.fit(train_data,train_label,'NumNeighbors',1);
% predict_label=predict(mdl, test_data);
% KNN_accuracy=length(find(predict_label == test_label))/length(test_label)*100;


%% Naive Bayes
% nb = NaiveBayes.fit(train_data, train_label);
% predict_label=predict(nb, test_data);
% Bayes_accuracy=length(find(predict_label == test_label))/length(test_label)*100;


%% Ensemble learning methods (Ensembles for Boosting, Bagging, or Random Subspace)
% ens = fitensemble(train_data,train_label,'AdaBoostM1',100,'Tree','Type','classification');
% predict_label=predict(ens,test_data);            % numeric labels, no conversion needed
% EB_accuracy=length(find(predict_label == test_label))/length(test_label)*100;

%% Discriminant analysis classifier
% obj = ClassificationDiscriminant.fit(train_data, train_label);
% predict_label=predict(obj,test_data);            % numeric labels, no conversion needed
% DAC_accuracy=length(find(predict_label == test_label))/length(test_label)*100;

%% Practice: naive Bayes on a small two-class toy set
% meas=[0 0;2 0;2 2;0 2;4 4;6 4;6 6;4 6];      % 8 samples, 2 features
% [N,n]=size(meas);
% species={'1';'1';'1';'1';'-1';'-1';'-1';'-1'};
% ObjBayes=NaiveBayes.fit(meas,species);
% x=[3 3;5 5];                                 % two query points to classify
% result=ObjBayes.predict(x);
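Finally, note that the Method.fit syntax used above (ClassificationKNN.fit, NaiveBayes.fit, ClassificationDiscriminant.fit) is superseded in recent MATLAB releases by the fitc* training functions. A minimal sketch of the equivalent calls, assuming the same train_data/train_label/test_data variables as above:

%% Modern equivalents (Statistics and Machine Learning Toolbox)
mdl_knn=fitcknn(train_data,train_label,'NumNeighbors',1);    % replaces ClassificationKNN.fit
mdl_nb=fitcnb(train_data,train_label);                       % replaces NaiveBayes.fit
mdl_da=fitcdiscr(train_data,train_label);                    % replaces ClassificationDiscriminant.fit
mdl_ens=fitcensemble(train_data,train_label,'Method','AdaBoostM1'); % replaces fitensemble
mdl_svm=fitcecoc(train_data,train_label);                    % multiclass SVM, replaces svmtrain
predict_label=predict(mdl_knn,test_data);                    % predict works the same for every model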
