OpenCV for Java: KNN (KNearest, k-nearest neighbor algorithm) example code

This article gives an example of OpenCV's machine-learning k-nearest neighbor (KNN) algorithm implemented in Java.

OpenCV version: 3.4.0. The official documentation has no Java ML tutorials or examples, only C++ and Python code samples. I found an example on the Internet, verified that it runs, and added explanatory comments.

Original address: https://stackoverflow.com/questions/32612058/java-opencv-find-k-nearest-neighbor-c-to-java-conversion
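Before the OpenCV code, the core idea can be sketched in plain Java with no OpenCV dependency: store labeled feature vectors, then predict the label of the stored vector closest to the query by (squared) Euclidean distance. This is what `KNearest.findNearest` does for k = 1. The class and method names below are illustrative, not part of any library:

```java
// Minimal, dependency-free 1-NN sketch (hypothetical helper, NOT part of OpenCV).
// Each sample is a float[] feature vector; predict() returns the label of the
// training sample with the smallest squared Euclidean distance to the query.
class NearestNeighbor {
    private final float[][] samples;
    private final int[] labels;

    NearestNeighbor(float[][] samples, int[] labels) {
        this.samples = samples;
        this.labels = labels;
    }

    int predict(float[] feature) {
        int best = 0;
        double bestDist = Double.MAX_VALUE;
        for (int i = 0; i < samples.length; i++) {
            double d = 0;
            for (int j = 0; j < feature.length; j++) {
                double diff = samples[i][j] - feature[j];
                d += diff * diff;  // squared Euclidean distance
            }
            if (d < bestDist) {
                bestDist = d;
                best = i;
            }
        }
        return labels[best];
    }

    public static void main(String[] args) {
        float[][] train = {{0f, 0f}, {10f, 10f}};
        int[] labs = {0, 1};
        NearestNeighbor nn = new NearestNeighbor(train, labs);
        System.out.println(nn.predict(new float[]{1f, 1f}));  // nearest to (0,0)   -> 0
        System.out.println(nn.predict(new float[]{9f, 8f}));  // nearest to (10,10) -> 1
    }
}
```

In the OpenCV example below, the feature vectors are 400-dimensional (a flattened 20x20 image) instead of 2-dimensional, but the distance comparison is the same.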

Sample code:

import java.text.DecimalFormat;

import java.util.*;

 

import org.opencv.core.*;

import org.opencv.imgcodecs.Imgcodecs;

import org.opencv.ml.KNearest;

import org.opencv.ml.Ml;

import org.opencv.utils.Converters;

 

public class Main {

     public static void main(String[] args) {

        // load the OpenCV native library

        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

 

        // Path of the test dataset inside the unpacked OpenCV package: samples/data/digits.png

        // Use the digit image dataset bundled with the OpenCV package. It holds 5000 digit samples (0 to 9) in one 2000x1000 image.

        // Each digit sample is 20x20 pixels, so each row holds 2000/20 = 100 samples; each digit 0-9 occupies 5 consecutive rows, 50 rows in total.

        // Mind the image path; an absolute path works.

        Mat digits = Imgcodecs.imread("E:\\...\\images\\digits.png", 0);

       

        // setup train/test data:

        Mat trainData = new Mat();

        Mat testData = new Mat();

        List<Integer> trainLabs = new ArrayList<Integer>();

        List<Integer> testLabs = new ArrayList<Integer>();

 

        // 10 digits x 5 rows each:

        // Read all the data in one pass; half goes to training, half to testing.

        // The dataset has 50 rows in total.

        for (int r=0; r<50; r++) {

            // 100 digit samples per row:

            for (int c=0; c<100; c++) {

            // crop out one digit sample, size 20x20:

            Mat num = digits.submat(new Rect(c*20,r*20,20,20));

            // KNN needs floating-point input data; convert here:

            num.convertTo(num, CvType.CV_32F);

            // 50/50 train/test split:

            if (c % 2 == 0) {

                 // even columns become the training samples

                 // for opencv ml, each feature (training sample) has to be a single row:

                 // num itself is a 20x20 matrix; reshape it to a 1-row, n-column matrix, where n = 20x20 = 400

                 trainData.push_back(num.reshape(1,1));

                 // add a label for that feature (the digit number, i.e. the correct answer):

                 trainLabs.add(r/5);

            } else {

                 // odd columns become the test data

                 testData.push_back(num.reshape(1,1));

                 testLabs.add(r/5);

            }

            }               

        }

 

        // make a Mat of the train labels, and train knn:

        // Create the KNearest object; before OpenCV 3.0 it could apparently be constructed with new directly.

        KNearest knn = KNearest.create();

        // The train method signature is: public boolean train(Mat samples, int layout, Mat responses)

        // The first parameter is the training data, one sample (feature vector) per row; the second specifies the sample layout

        // (one sample per row or per column, no default); the third holds the correct answer (label) for each training sample.

        // Tip: the Converters class provides many practical OpenCV data/type conversion utilities.

        knn.train(trainData, Ml.ROW_SAMPLE, Converters.vector_int_to_Mat(trainLabs));

       

        // Run the test dataset through the model; test data is also one sample per row.

        // now test predictions:

        int err = 0;

        for (int i=0; i<testData.rows(); i++)

        {

            // read one row (one test sample)

            Mat one_feature = testData.row(i);

            // the expected (ground-truth) answer

            int testLabel = testLabs.get(i);

 

            Mat res = new Mat();

            // findNearest: the first parameter is the input sample(s) (several can be passed at once); the second is the number of

            // nearest neighbors K to use (the K in KNearest); the third receives the result (res), one row of labels per input sample.

            // If only one sample is passed in, the return value (p) is the prediction. The parameter 1 here is the key parameter K!

            float p = knn.findNearest(one_feature, 1, res);

            System.out.println(testLabel + " " + p + " " + res.dump());

           

            // count the recognition errors

            int iRes = (int) p;

            if(iRes != testLabel) {

                err++;

            }

        }

 

        // print the recognition accuracy

        float accuracy = (float) ((2500 - (float)err) / 2500.0);

        DecimalFormat df = new DecimalFormat("0.0000");

        System.out.println("error count: " + err + ", accuracy is: " + df.format(accuracy));

 

        // The original author's closing note: to recognize digits in a real application with a model trained on

        // OpenCV's bundled digit samples, the 'real world' test case probably looks more like this.

        // Make sure you follow the very same pre-processing steps used in the train phase:

        //  Mat one_feature = Imgcodecs.imread("one_digit.png", 0);

        //  Mat feature = new Mat();

        //  one_feature.convertTo(feature, CvType.CV_32F);

        //  Imgproc.resize(feature, feature, new Size(20,20));

        //  Mat result = new Mat();

        //  int predicted = (int) knn.findNearest(feature.reshape(1,1), 1, result);

 

    }

 

}
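The grid arithmetic in the loops above can be checked without OpenCV. The helper names below are illustrative; the constants (50 rows, 100 columns, 20x20 tiles, label = row / 5, even columns to train, odd to test) come from the example itself:

```java
// Dependency-free check of the digits.png layout used in the example above:
// 50 rows x 100 columns of 20x20 tiles, label = r / 5, even columns -> train set,
// odd columns -> test set. Helper names here are hypothetical, not OpenCV APIs.
class GridLayout {
    // Label of a tile in grid row r (each digit occupies 5 consecutive rows).
    static int labelOf(int r) { return r / 5; }

    // Top-left pixel coordinates {x, y} of the tile at grid position (r, c),
    // matching new Rect(c*20, r*20, 20, 20) in the example.
    static int[] tileOrigin(int r, int c) { return new int[]{c * 20, r * 20}; }

    public static void main(String[] args) {
        int train = 0, test = 0;
        for (int r = 0; r < 50; r++)
            for (int c = 0; c < 100; c++)
                if (c % 2 == 0) train++; else test++;
        System.out.println(train + " " + test);              // 2500 2500
        System.out.println(labelOf(0) + " " + labelOf(49));  // 0 9
        int[] o = tileOrigin(49, 99);                        // last tile
        System.out.println(o[0] + " " + o[1]);               // 1980 980
    }
}
```

The last tile starts at (1980, 980) and its 20x20 extent ends exactly at the 2000x1000 image boundary, confirming the stated resolution; the 2500-sample test set also explains the 2500 denominator in the accuracy computation.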

 

On my machine the code prints: error count: 154, accuracy is: 0.9384

So the recognition accuracy is only about 93.8%.
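One knob to experiment with is K, the second argument of findNearest: for k > 1 the predicted label is decided by a vote among the k nearest neighbors. A dependency-free sketch of such a majority vote (hypothetical helper, not an OpenCV API; OpenCV's internal tie-breaking may differ):

```java
import java.util.Map;
import java.util.TreeMap;

// Majority vote over the labels of the k nearest neighbors: the most frequent
// label wins; with a TreeMap iterated in ascending key order, ties resolve to
// the smaller label. This is a sketch of the voting idea, not OpenCV's code.
class MajorityVote {
    static int vote(int[] neighborLabels) {
        Map<Integer, Integer> counts = new TreeMap<>();
        for (int lab : neighborLabels)
            counts.merge(lab, 1, Integer::sum);
        int best = neighborLabels[0], bestCount = 0;
        for (Map.Entry<Integer, Integer> e : counts.entrySet())
            if (e.getValue() > bestCount) {
                best = e.getKey();
                bestCount = e.getValue();
            }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(vote(new int[]{7, 7, 1}));  // 7 (two votes beat one)
        System.out.println(vote(new int[]{3}));        // 3 (the k = 1 case above)
    }
}
```

With k = 1 the vote degenerates to the single nearest label, which is exactly the configuration the example runs.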

 

Origin: blog.csdn.net/wangyulj/article/details/79030121