Dynamic face detection and face recognition with the Android Camera based on OpenCV (no OpenCV Manager needed)

       Recently I needed to implement face recognition, and after getting it working I wanted to summarize the pitfalls I stepped into. Going through many blog posts, I found two main approaches: one implements face detection with the Android SDK itself, the other relies on OpenCV Manager. Both achieve dynamic face detection. There are also plenty of articles online about detecting faces in static pictures, so I won't repeat that here. What I implemented is the following: after the Camera starts, whenever a face appears in the preview, a rectangular face frame is drawn on the Camera interface dynamically; once the face frame is obtained, the Camera takes a picture and the picture is sent to the backend to verify the face. Note the difference between face detection and face recognition: face detection only checks whether there is a face in the camera frame and, if so, marks it with a rectangle, whereas face recognition compares two faces and verifies whether they belong to the same person.
        Let me first talk about the first way to realize dynamic face detection, which is based on Google's own FaceDetectionListener: when the Camera captures a face, a face rectangle is drawn. For details you can refer to this blogger's post, http://blog.csdn.net/yanzi1225627/article/details/38098729/
which already explains it in great detail. In my testing, however, Google's built-in algorithm is slow at detecting faces, and some device models do not support this face detection interface at all; the article is still well worth reading. Next I will describe the concrete steps for realizing dynamic face detection with OpenCV.

       First, the OpenCV download address: http://opencv.org/ . I downloaded the latest version, 3.2; I originally used 2.4 but found it noticeably laggy. My company has always used Eclipse, and the projects in the demo are Eclipse projects as well, so I debugged directly in Eclipse; there are also many tutorials online for Android Studio if you prefer that.
After downloading and unpacking the SDK, the samples directory has the following structure:

       If we run example-face-detection.apk directly, we get a prompt to install OpenCV Manager. The sdk/apk directory contains several variants of OpenCV Manager; choose the one matching your device. The package I chose was OpenCV_3.2.0_Manager_3.20_armeabi-v7a.apk. After installing it and opening the application, the camera performs dynamic face detection.
After running the official apk, I started debugging it myself. Import the face-detection project from OpenCV-android-sdk/samples, then create a new project as a Library and import the relevant classes and resource files from OpenCV-android-sdk/java into that library.

       Then associate face-detection with that library. At this point my project still reported an error, shown below:

       No ndk-build.cmd was found. This means the ndk-build path is not configured; NDKROOT has to be set. Right-click the project, Properties -> C/C++ Build, as shown below:

       The NDK I downloaded is android-ndk-r9d. After the download completes, simply unpack it; NDKROOT is the path of the unpacked NDK. The NDK download link:
http://dl.google.com/android/ndk/android-ndk-r9d-windows-x86.zip
       Then I ran into the following error:

       This is because the root path of OpenCV's OpenCV.mk is not configured in jni/Android.mk. In the same way as the NDK configuration, we can define our own ${OPENCV_ANDROID_SDK} build variable pointing at the unpacked OpenCV Android SDK.
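
For reference, the include in jni/Android.mk then looks roughly like this. This is only a sketch; the sample's actual Android.mk wraps the include in a few ifdef checks, and OPENCV_ANDROID_SDK is the build variable configured above:

LOCAL_PATH := $(call my-dir)

include $(CLEAR_VARS)

# OPENCV_ANDROID_SDK points at the unpacked OpenCV-android-sdk directory
include ${OPENCV_ANDROID_SDK}/sdk/native/jni/OpenCV.mk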

       So far we have the official demo running, but it always requires OpenCV Manager to be installed, which is often not acceptable. So next I will describe how to run without installing OpenCV Manager. It is very simple and consists of the following 4 steps:
1. Modify the first few lines of Android.mk to the following form:

include $(CLEAR_VARS)
OPENCV_CAMERA_MODULES:=on
OPENCV_INSTALL_MODULES:=on
OPENCV_LIB_TYPE:=SHARED

2. Open the FdActivity.java file and add a static initialization block to it:

    static {
        Log.i(TAG, "Loading OpenCV library");
        if (!OpenCVLoader.initDebug()) {
            Log.i(TAG, "OpenCV was not loaded successfully");
        } else {
            // OpenCV loaded from the apk itself; now load our native detector library
            System.loadLibrary("detection_based_tracker");
        }
    }

This block loads the OpenCV Java library bundled in the apk; if it is not added, an error similar to "native method not found" will appear.
3. Comment out the initAsync call in onResume() so that the program does not go through OpenCV Manager:

@Override
public void onResume()
    {
        super.onResume();
        if (!OpenCVLoader.initDebug()) {
            Log.d(TAG, "Internal OpenCV library not found. Using OpenCV Manager for initialization");
       //     OpenCVLoader.initAsync(OpenCVLoader.OPENCV_VERSION_3_0_0, this, mLoaderCallback);
        } else {
            Log.d(TAG, "OpenCV library found inside package. Using it!");
            mLoaderCallback.onManagerConnected(LoaderCallbackInterface.SUCCESS);
        }
    }

4. Modify the onCreate() method of FdActivity.java: copy the try-catch block out of the private BaseLoaderCallback mLoaderCallback = new BaseLoaderCallback(this) code block and place it after setContentView() in onCreate():

      try {
            // load cascade file from application resources
            InputStream is = getResources().openRawResource(R.raw.lbpcascade_frontalface);
            File cascadeDir = getDir("cascade", Context.MODE_PRIVATE);
            mCascadeFile = new File(cascadeDir, "lbpcascade_frontalface.xml");
            FileOutputStream os = new FileOutputStream(mCascadeFile);

            byte[] buffer = new byte[4096];
            int bytesRead;
            while ((bytesRead = is.read(buffer)) != -1) {
                os.write(buffer, 0, bytesRead);
            }
            is.close();
            os.close();

            mJavaDetector = new CascadeClassifier(mCascadeFile.getAbsolutePath());
            if (mJavaDetector.empty()) {
                Log.e(TAG, "Failed to load cascade classifier");
                mJavaDetector = null;
            } else
                Log.i(TAG, "Loaded cascade classifier from " + mCascadeFile.getAbsolutePath());

            mNativeDetector = new DetectionBasedTracker(mCascadeFile.getAbsolutePath(), 0);

            cascadeDir.delete();

        } catch (IOException e) {
            e.printStackTrace();
            Log.e(TAG, "Failed to load cascade. Exception thrown: " + e);
        }

After running it, you will find that the program runs directly without relying on OpenCV Manager.
The process described above mainly covers dynamically detecting faces from the camera, and with that it is essentially complete. Now we integrate it into our own project and use Baidu's face recognition service to judge whether two faces match, which gives us face recognition.
First of all, we create our own face recognition application
on the Baidu AI Open Platform, http://ai.baidu.com/

After creating it, the application has an App ID, an API Key and a Secret Key. Using the API Key and Secret Key we can generate an access_token; the specific call is documented at
https://cloud.baidu.com/doc/FACE/Face-API.html . Once the access_token is generated, it is attached to every POST request, and then we can perform face recognition. What I call here are the face registration and face verification interfaces; from the score returned by verification we can judge whether the two images show the same person. For the specific request and response parameters, refer to the Baidu face recognition API.
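
Below is a minimal sketch (not part of the original demo) of fetching the access_token over a plain HttpURLConnection. The endpoint and parameter names follow the Baidu OAuth 2.0 documentation linked above; verify them against the current docs, and run this off the UI thread.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import org.json.JSONException;
import org.json.JSONObject;

public class BaiduTokenHelper {
    // Exchanges the application's API Key and Secret Key for an access_token.
    public static String fetchAccessToken(String apiKey, String secretKey)
            throws IOException, JSONException {
        String tokenUrl = "https://aip.baidubce.com/oauth/2.0/token"
                + "?grant_type=client_credentials"
                + "&client_id=" + apiKey
                + "&client_secret=" + secretKey;
        HttpURLConnection conn = (HttpURLConnection) new URL(tokenUrl).openConnection();
        conn.setRequestMethod("GET");
        BufferedReader reader = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"));
        StringBuilder sb = new StringBuilder();
        String line;
        while ((line = reader.readLine()) != null) {
            sb.append(line);
        }
        reader.close();
        conn.disconnect();
        // The JSON response carries the token in the "access_token" field.
        return new JSONObject(sb.toString()).getString("access_token");
    }
}
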
Alright, now let's create our own application. After creating the project, we import FdActivity and DetectionBasedTracker from face-detection into our own project; FdActivity serves as the camera screen. Note: do not change the package name, otherwise you may get "native method not found" errors. At the same time, we copy the jni and libs/armeabi directories into our project and let Eclipse compile the NDK/JNI code automatically. Here is a brief description of how to set up automatic compilation:

1. Convert the Android project into a C/C++ project, as shown below: New -> Other -> C/C++ -> Convert to a C/C++ Project.

2. Configure the NDK build path under Project -> Properties, as shown below. In the Build Command, ANDROID_NDK is the Android NDK path configured as an environment variable, and the Build Directory is the current project directory.

3. Under Project -> Properties, configure OpenCV's include paths and libraries for GNU C and GNU C++.


After the configuration is done, we need to slightly modify the opencvLibrary project we created. Since we want to take a photo once a face frame appears in the camera, we make a small change to JavaCameraView in the OpenCV library: inside this class we can call the camera's takePicture method, and through a PictureCallback we save the captured photo to a path of our choosing.

mCamera.takePicture(mShutterCallback, null, mJpegPictureCallback);

    ShutterCallback mShutterCallback = new ShutterCallback() {
        public void onShutter() {
        }
    };

    PictureCallback mJpegPictureCallback = new PictureCallback() {
        public void onPictureTaken(byte[] data, Camera camera) {
            Log.d("hr", "picture taken callback");
            Bitmap b = null;
            if (null != data) {
                b = BitmapFactory.decodeByteArray(data, 0, data.length);
                mCamera.stopPreview();
            }
            if (null != b) {
                // rotate if necessary and save the bitmap under the file tag idTag
                Bitmap rotaBitmap = ImageUtil.getRotateBitmap(b, 00.0f);
                boolean saved = FileUtil.saveBitmap(rotaBitmap, idTag);
                if (saved && handler != null) {
                    // notify the activity that the picture has been written
                    handler.sendEmptyMessage(2000);
                }
            }
            mCamera.startPreview();
        }
    };
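
The callbacks above live inside JavaCameraView, so the activity needs some way to trigger a capture. A rough sketch of such a helper is shown below; this method and the way handler/idTag are passed in are additions of mine, not part of the OpenCV sample, and android.os.Handler must be imported in JavaCameraView:

    // Added to JavaCameraView: lets the activity trigger a capture.
    // handler and idTag are the fields used by mJpegPictureCallback above.
    public void takePicture(Handler resultHandler, String fileTag) {
        this.handler = resultHandler;   // receives message 2000 once the JPEG is saved
        this.idTag = fileTag;           // file tag passed to FileUtil.saveBitmap()
        mCamera.takePicture(mShutterCallback, null, mJpegPictureCallback);
    }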

In FdActivity's onCameraFrame(), the length of facesArray is the number of faces currently detected:

Rect[] facesArray = faces.toArray();
        for (int i = 0; i < facesArray.length; i++)
            Imgproc.rectangle(mRgba, facesArray[i].tl(), facesArray[i].br(), FACE_RECT_COLOR, 3);

Once the camera captures a face, we can either take the photo automatically via a Handler or take it manually, depending on the concrete requirements; a rough sketch of an automatic trigger follows below. After the picture is saved, we call Baidu's interface and judge the similarity of the faces from the return value.
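
A minimal sketch of an automatic trigger inside onCameraFrame(), assuming the takePicture() helper sketched above. mPictureTaken is a hypothetical boolean guard field, mSaveDoneHandler is whatever Handler the activity uses to learn that the JPEG was written (it receives the 2000 message from the callback), and "validate" is just a placeholder file tag; the cast is needed because the sample declares mOpenCvCameraView as CameraBridgeViewBase:

    // In FdActivity.onCameraFrame(), right after the face rectangles are drawn:
    if (facesArray.length > 0 && !mPictureTaken) {
        mPictureTaken = true;   // guard so we only capture once
        ((JavaCameraView) mOpenCvCameraView).takePicture(mSaveDoneHandler, "validate");
    }

With the picture saved, the Baidu request below can then be issued.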

            url = new URL(urlPath);
            Bitmap bmp = FileUtil.getValidateBitmap();
            ByteArrayOutputStream baos = new ByteArrayOutputStream();
            bmp.compress(Bitmap.CompressFormat.JPEG, 100, baos);
            try {
                baos.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
            byte[] buffer = baos.toByteArray();
            // encode the image bytes as a Base64 string
            String photo = Base64.encodeToString(buffer, 0, buffer.length, Base64.DEFAULT);
            // request parameters for the Baidu face interface
            HashMap<String, String> map = new HashMap<String, String>();
            map.put("uid", uid);
            map.put("group_id", groupid);
            map.put("image", photo);
            map.put("ext_fields", "faceliveness");
            String str = HttpClientUtil.doPost(urlPath, map);
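
To give a feel for the last step, here is a rough sketch of turning the response into a match decision. The "score" field name and the threshold of 80 are placeholders of mine; check the Baidu Face API documentation for the actual response schema and the recommended threshold.

import org.json.JSONException;
import org.json.JSONObject;

public class FaceMatchChecker {
    // Placeholder threshold; use the value recommended in the Baidu docs.
    private static final double MATCH_THRESHOLD = 80.0;

    // str is the JSON string returned by HttpClientUtil.doPost() above.
    public static boolean isSamePerson(String str) {
        try {
            JSONObject json = new JSONObject(str);
            // "score" is an assumed field name; adapt it to the real response.
            double score = json.optDouble("score", 0.0);
            return score >= MATCH_THRESHOLD;
        } catch (JSONException e) {
            e.printStackTrace();
            return false;
        }
    }
}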

The code for verification and for registration is basically the same; both are ordinary network requests, and from the return value we can tell whether the two faces match. With that, the whole face verification flow is complete. Here I will just show the effect of dynamic face detection:
[Screenshot: dynamic face detection effect]
Of course there is also the verification flow, but since the main difficulty lies in dynamically detecting faces, I won't describe the process of calling Baidu's interface to implement face recognition in more detail. There are still a few small issues left: in portrait orientation the detection is inaccurate, the Camera view is not full-screen, and the front camera still needs to be enabled. If you are interested, you can look into these yourself.
Finally, here is the address of my demo: http://download.csdn.net/download/qq_28931623/10167057
