Using a face recognition SDK to implement the full ID-face verification workflow

ID-face verification (comparing a person's live face against their ID-card photo) can be seen everywhere in today's society: high-speed rail stations, airports, hotel check-in, even residential entrances. Face recognition SDKs have sprung up one after another as well, such as Baidu, SenseTime, Face++ and ArcSoft. After trying several of these SDKs, the one I favor most is ArcSoft's, and the most direct reason is ArcSoft's commitment to keeping it permanently free. I have been using it since version 2.0, and the real-world results are indeed good. Last month I got word that ArcFace 3.0 had been released; as someone who loves free things I naturally would not miss the update. After getting started with 3.0, I found that ArcFace 3.0 has the following new features:

  • Feature comparison now supports model selection: there is a life-photo comparison model and an ID-face comparison model
  • A new 64-bit SDK for the Android platform
  • A new way of passing in image data

This article walks through the ID-face verification scenario with ArcFace 3.0 in the following order:

  • Pros and cons of the ArcFace 3.0 SDK interface changes
  • How to retrofit the IdCardVerify 2.0 demo with the ArcFace 3.0 SDK
  • How to turn the ArcFace 3.0 demo directly into an ID-face verification program

I. Pros and cons of the ArcFace 3.0 SDK interface changes

Advantages of the interface changes:

1. Greater freedom for business logic

  Take IdCardVerify 2.0 as an example: we could only feed data in and get results out, and intermediate products such as face feature data were hard to get at. With ArcFace 3.0 the fixed pipeline is gone; detection, feature extraction and comparison can each be driven by your own code.
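  To make this concrete, here is a minimal sketch of the "free" 3.0 pipeline: an explicit detect, an explicit feature extraction and an explicit comparison. It assumes an already-initialized FaceEngine, a BGR24 image buffer and a previously extracted ID-photo feature; the class, method and variable names are my own illustration rather than code from the demo, and the import paths should be checked against your SDK version.

import com.arcsoft.face.ErrorInfo;
import com.arcsoft.face.FaceEngine;
import com.arcsoft.face.FaceFeature;
import com.arcsoft.face.FaceInfo;
import com.arcsoft.face.FaceSimilar;
import com.arcsoft.face.enums.CompareModel;

import java.util.ArrayList;
import java.util.List;

public class FreePipelineSketch {
    //each step is now an explicit call, and the intermediate products stay in our hands
    public static int detectExtractCompare(FaceEngine engine, byte[] bgrData, int width, int height,
                                           FaceFeature idFeature, FaceSimilar similarOut) {
        //step 1: detection, the FaceInfo list is ours to inspect
        List<FaceInfo> faces = new ArrayList<>();
        int code = engine.detectFaces(bgrData, width, height, FaceEngine.CP_PAF_BGR24, faces);
        if (code != ErrorInfo.MOK || faces.isEmpty()) {
            return code;
        }
        //step 2: feature extraction, a separate call we trigger ourselves
        FaceFeature cameraFeature = new FaceFeature();
        code = engine.extractFaceFeature(bgrData, width, height,
                FaceEngine.CP_PAF_BGR24, faces.get(0), cameraFeature);
        if (code != ErrorInfo.MOK) {
            return code;
        }
        //step 3: comparison, with the comparison model chosen per call
        return engine.compareFaceFeature(idFeature, cameraFeature, CompareModel.ID_CARD, similarOut);
    }
}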

2. ID-face and life-photo comparison can coexist in the same project

  Previously the IdCardVerify SDK and the ArcFace SDK conflicted and could not be used together; if we wanted both ID-face verification and life-photo comparison we had to write two projects, and the two workflows also differed slightly. Now we only need to switch the comparison model in the interface, so ID-face and life-photo comparison can be integrated into a single project.

3. Code reuse

  In ArcFace 3.0 the only difference between ID-face and life-photo comparison is the model chosen in the comparison interface; everything else is identical, so most of the code can be reused, which greatly improves development efficiency.
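  A small sketch of what that reuse looks like in practice: the same helper serves both scenarios, and only the CompareModel argument changes. I am assuming the life-photo constant is named CompareModel.LIFE_PHOTO, so verify the constant names against your SDK version.

    //identical call in both scenarios; only the CompareModel argument differs
    private float compareWithModel(FaceEngine engine, FaceFeature f1, FaceFeature f2, CompareModel model) {
        FaceSimilar similar = new FaceSimilar();
        int code = engine.compareFaceFeature(f1, f2, model, similar);
        return code == ErrorInfo.MOK ? similar.getScore() : -1f;
    }

    //usage:
    //compareWithModel(faceEngine, feature1, feature2, CompareModel.LIFE_PHOTO);   //life-photo comparison
    //compareWithModel(faceEngine, idFeature, cameraFeature, CompareModel.ID_CARD); //ID-face comparison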

Disadvantages of the interface changes:

1. Interface Changes

  You gain some, you lose some: because ArcFace 3.0 no longer wraps the ID-face workflow into its own package, every interface has to be changed during the upgrade, and I believe that is a problem no programmer is happy to see.

2. Implementation becomes harder

  Also because ArcFace 3.0 no longer wraps the ID-face workflow, some of the processing that the original interfaces handled internally, for example the callbacks, now has to be implemented yourself, which is not very friendly to people just getting started.

Summary:

  Although I have listed some shortcomings of ArcFace 3.0, I am still very much in favor of this upgrade. Every product iteration brings some disruption, but the disruption is relative: the unified interface, the freedom in the recognition workflow and the wider applicability to business scenarios have all improved, and I believe that moving on from IdCardVerify 2.0 will prove worthwhile in the long run.

II. Integrating the ArcFace 3.0 SDK into the IdCardVerify 2.0 demo

  We saw above that, because of the interface changes, every interface in an IdCardVerify 2.0 program has to be modified. Below I take the IdCardVerify 2.0 demo as an example and explain how I upgraded it to the ArcFace 3.0 SDK.

1. IdCardVerify 2.0 demo project configuration

  Considering that some readers may not be familiar with the IdCardVerify 2.0 demo, here is a brief explanation of how to configure the official demo.

[Screenshot: project with the IdCardVerify engine imported]
  First, as shown in the figure, import the IdCardVerify engine into the demo, then modify APP_ID and SDK_KEY in Constants; the APP_ID and SDK_KEY are obtained, together with the engine, from the official open platform. Then place a picture named "sample.jpg" on the device's SD card to serve as the simulated ID-photo input (the image path can be changed via the SAMPLE_FACE variable in MainActivity). A sketch of the configuration and a screenshot of the configured demo running follow below.
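  For reference, a minimal sketch of what this configuration might look like. The field names (APP_ID, SDK_KEY, SAMPLE_FACE) mirror the demo's conventions, but the values and the way they are grouped here are my own illustration, not the demo's actual source:

import android.os.Environment;
import java.io.File;

public final class Constants {
    //obtained from the open platform together with the engine
    public static final String APP_ID = "your_app_id";
    public static final String SDK_KEY = "your_sdk_key";
}

//in MainActivity: path of the simulated ID-photo input on the SD card
private static final String SAMPLE_FACE =
        Environment.getExternalStorageDirectory().getAbsolutePath() + File.separator + "sample.jpg";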

[Screenshot: the configured demo running]

2. Replacing the SDK with ArcFace 3.0

  First we need to obtain the ArcFace 3.0 SDK, which can likewise be downloaded from the open platform. Replace the original SDK libraries with the new ones; the project directory after the replacement is shown in the figure below.

[Screenshot: project directory after the SDK replacement]

3. Replacing the interfaces with ArcFace 3.0

  As mentioned above, because of the sweeping changes in 3.0 every interface has changed, so we need to replace all of the original 2.0 interfaces with their 3.0 counterparts.

 3.1 Engine activation:

  The activation interface parameters are unchanged.

IdCardVerify 2.0:

IdCardVerifyManager.getInstance().active(Context context, String appId, String sdkKey);

ArcFace 3.0 :

FaceEngine.active(Context context, String appId, String sdkKey);
 3.2 Engine initialization:

  Starting with initialization, the IdCardVerify 2.0 and ArcFace 3.0 interfaces diverge considerably: IdCardVerify 2.0 has listeners for ID-card data and camera data, while 3.0 drops this listener mechanism. I will not go through the parameters of the interface one by one; the official documentation describes them in great detail and is well worth consulting.

IdCardVerify 2.0:

IdCardVerifyManager.getInstance().init(Context context, IdCardVerifyListener listener) 

ArcFace 3.0 :

FaceEngine.init(Context context, DetectMode detectMode, DetectFaceOrientPriority detectFaceOrientPriority, int detectFaceScaleVal, int detectFaceMaxNum, int combinedMask)
 3.3 Activation & initialization demo:

  Below is the code before and after my replacement of the 2.0 version, for reference:

IdCardVerify 2.0:

  private void initEngine() {
        int result = IdCardVerifyManager.getInstance().init(this, idCardVerifyListener);
        LogUtils.dTag(TAG, "initResult: " + result);
        if (result == IdCardVerifyError.MERR_ASF_NOT_ACTIVATED) {
            Executors.newSingleThreadExecutor().execute(new Runnable() {
                @Override
                public void run() {
                    int activeResult = IdCardVerifyManager.getInstance().active(
                            MainActivity.this, APP_ID, SDK_KEY);
                    runOnUiThread(new Runnable() {
                        @Override
                        public void run() {
                            LogUtils.dTag(TAG, "activeResult: " + activeResult);
                            if (activeResult == IdCardVerifyError.OK) {
                                int initResult = IdCardVerifyManager.getInstance().init(
                                        MainActivity.this, idCardVerifyListener);
                                LogUtils.dTag(TAG, "initResult: " + initResult);
                                if (initResult != IdCardVerifyError.OK) {
                                    toast("人证引擎初始化失败,错误码: " + initResult);
                                }
                            } else {
                                toast("人证引擎激活失败,错误码: " + activeResult);
                            }
                        }
                    });
                }
            });
        } else if (result != IdCardVerifyError.OK) {
            toast("人证引擎初始化失败,错误码: " + result);
        }
    }

ArcFace 3.0 :

 private void initEngine() {
        int result = faceEngine.init(this, DetectMode.ASF_DETECT_MODE_VIDEO, DetectFaceOrientPriority.ASF_OP_ALL_OUT, 16, 1,
                FaceEngine.ASF_FACE_DETECT | FaceEngine.ASF_FACE_RECOGNITION);
        LogUtils.dTag(TAG, "initResult: " + result);
        if (result == ErrorInfo.MERR_ASF_NOT_ACTIVATED) {
            Executors.newSingleThreadExecutor().execute(() -> {
                int activeResult = FaceEngine.active(
                        MainActivity.this, Constants.APP_ID, Constants.SDK_KEY);
                runOnUiThread(() -> {
                    LogUtils.dTag(TAG, "activeResult: " + activeResult);
                    if (activeResult == ErrorInfo.MOK) {
                        int initResult = faceEngine.init(this, DetectMode.ASF_DETECT_MODE_VIDEO, DetectFaceOrientPriority.ASF_OP_ALL_OUT, 16, 1,
                                FaceEngine.ASF_FACE_DETECT | FaceEngine.ASF_FACE_RECOGNITION);
                        LogUtils.dTag(TAG, "initResult: " + initResult);
                        if (initResult != ErrorInfo.MOK) {
                            toast("人证引擎初始化失败,错误码: ", initResult));
                        }
                    } else {
                        toast("人证引擎激活失败,错误码: ", activeResult));
                    }
                });
            });
        } else if (result != ErrorInfo.MOK) {
            toast("人证引擎初始化失败,错误码: " , result));
        }
    }
 3.4 Detection and feature extraction for the ID photo

  For the ID-photo part we need to replace the image-processing methods that came with the 2.0 engine with the ArcSoftImageUtil methods from the 3.0 package. At the same time, because the callback that fired after a successful feature extraction has been removed from the engine, this callback now has to be written ourselves. Here I took a shortcut and reused the FaceListener from faceHelper, which exists in both the IdCardVerify 2.0 demo and the 3.0 demo, as the callback; of course you can also implement the callback yourself.

IdCardVerify 2.0:

    private void inputIdCard() {
        if (bmp == null) {
            return;
        }
        int width = bmp.getWidth();
        int height = bmp.getHeight();

        //crop the image
        boolean needAdjust = false;
        while (width % 4 != 0) {
            width--;
            needAdjust = true;
        }
        if (height % 2 != 0) {
            height--;
            needAdjust = true;
        }
        if (needAdjust) {
            bmp = ImageUtils.imageCrop(bmp, new Rect(0, 0, width, height));
        }
        //convert to NV21 data format
        byte[] nv21Data = ImageUtils.getNV21(width, height, bmp);
        //feed the ID-card image data into the engine
        DetectFaceResult result = IdCardVerifyManager.getInstance().inputIdCardData(
                nv21Data, width, height);
        LogUtils.dTag(TAG, "inputIdCardData result: " + result.getErrCode());
    }

ArcFace 3.0 :

   private void inputIdCard() {
        if (bmp == null) {
            return;
        }
        //crop the image to 4-byte alignment
        bmp = ArcSoftImageUtil.getAlignedBitmap(bmp, true);
        int width = bmp.getWidth();
        int height = bmp.getHeight();
        //convert to BGR format
        byte[] bgrData = ArcSoftImageUtil.createImageData(bmp.getWidth(), bmp.getHeight(), ArcSoftImageFormat.BGR24);
        int translateResult = ArcSoftImageUtil.bitmapToImageData(bmp, bgrData, ArcSoftImageFormat.BGR24);
        //conversion succeeded
        if (translateResult == ArcSoftImageUtilError.CODE_SUCCESS) {
            List<FaceInfo> faceInfoList = new ArrayList<>();
            //VIDEO mode is not suitable for still-image detection, so a separate idFaceEngine is used; except for the detect mode (IMAGE), its parameters are the same as faceEngine
            int detectResult = idFaceEngine.detectFaces(bgrData, width, height, FaceEngine.CP_PAF_BGR24, faceInfoList);
            if (detectResult == ErrorInfo.MOK && faceInfoList.size() > 0) {
                //-2 is used as the trackId here; camera and ID-photo extraction share the same faceHelper, so the trackId tells us which source the data came from
                faceHelper.requestFaceFeature(bgrData, faceInfoList.get(0), width, height, FaceEngine.CP_PAF_BGR24, -2);
            }
        } else {
            LogUtils.dTag(TAG, "translate Error result: " + translateResult);
        }
    }
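  The comment above refers to an idFaceEngine created specifically for the still ID photo. Its initialization is not shown in this excerpt, so here is a sketch of what it might look like, mirroring the initEngine() call from section 3.3 with only the detect mode changed to IMAGE (idFaceEngine is assumed to be a member variable):

        //separate engine for the ID photo: IMAGE detect mode, other parameters unchanged
        idFaceEngine = new FaceEngine();
        int idInitResult = idFaceEngine.init(this, DetectMode.ASF_DETECT_MODE_IMAGE,
                DetectFaceOrientPriority.ASF_OP_ALL_OUT, 16, 1,
                FaceEngine.ASF_FACE_DETECT | FaceEngine.ASF_FACE_RECOGNITION);
        if (idInitResult != ErrorInfo.MOK) {
            LogUtils.dTag(TAG, "idFaceEngine initResult: " + idInitResult);
        }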
 3.5 Detection and feature extraction for the camera preview

  The onPreviewData interface of IdCardVerify 2.0 actually contained a feature-extraction guard: the next extraction could not start before the previous one finished. In 3.0 there is no such external wrapper, so we have to control feature extraction ourselves. The basic strategy is keyed on the trackId: a feature is extracted for a trackId only if it has not been extracted yet or the previous attempt failed.

IdCardVerify 2.0:

   public void onPreview(byte[] nv21, Camera camera) {
                if (faceRectView != null) {
                    faceRectView.clearFaceInfo();
                }
                if (nv21 == null) {
                    return;
                }
                //feed the preview data into the engine
                DetectFaceResult result = IdCardVerifyManager.getInstance().onPreviewData(nv21,
                        previewSize.width, previewSize.height, true);
                Rect rect = result.getFaceRect();

                if (faceRectView != null && drawHelper != null && rect != null) {
                    //draw the real-time face rectangle
                    drawHelper.draw(faceRectView, new DrawInfo(drawHelper.adjustRect(rect), "", Color.YELLOW));
                }
            }

ArcFace 3.0 :

   public void onPreview(byte[] nv21, Camera camera) {
                if (faceRectView != null) {
                    faceRectView.clearFaceInfo();
                }
                if (nv21 == null) {
                    return;
                }
                List<FaceInfo> faceInfoList = new ArrayList<>();
                int ftResult = faceEngine.detectFaces(nv21, previewSize.width, previewSize.height, FaceEngine.CP_PAF_NV21, faceInfoList);
                //in the ID-face comparison scenario only the largest face matters, so we simply take the first face; adjust as needed for other scenarios
                if (ftResult == ErrorInfo.MOK && faceInfoList.size() > 0) {
                    Rect rect = faceInfoList.get(0).getRect();
                    if (faceRectView != null && drawHelper != null && rect != null) {
                        drawHelper.draw(faceRectView, new DrawInfo(drawHelper.adjustRect(rect), "", Color.YELLOW));
                    }
                    //only start extracting features from the camera data once the ID-card data is ready, and use the trackId to avoid duplicate extraction
                    int trackId = faceInfoList.get(0).getFaceId();
                    if (isIdCardReady && requestFeatureStatusMap != null && requestFeatureStatusMap.containsKey(trackId)) {
                        //retry if extraction for this face has failed
                        if (requestFeatureStatusMap.get(trackId) == null || requestFeatureStatusMap.get(trackId) == RequestFeatureStatus.FAILED) {
                            requestFeatureStatusMap.put(trackId, RequestFeatureStatus.SEARCHING);
                            faceHelper.requestFaceFeature(nv21, faceInfoList.get(0), previewSize.width, previewSize.height, FaceEngine.CP_PAF_NV21, faceInfoList.get(0).getFaceId());
                        }
                    }
                }
            }
  3.6 Camera and ID-card data callbacks

  As mentioned above, the IdCardVerify 2.0 engine has two separate interfaces for camera data and ID-card data, plus two callbacks to process them. ArcFace 3.0 not only removes the callbacks, but camera data and ID-card data also share the same detect and extractFaceFeature calls, so we can use the trackId to tell them apart. In addition, because of the engine change the feature values are no longer stored inside the engine, so we have to keep the features obtained from both data sources ourselves.

IdCardVerify 2.0:

 private IdCardVerifyListener idCardVerifyListener = new IdCardVerifyListener() {
        @Override
        public void onPreviewResult(DetectFaceResult detectFaceResult, byte[] bytes, int i, int i1) {
            runOnUiThread(() -> {
                //preview face feature extracted successfully
                if (detectFaceResult.getErrCode() == IdCardVerifyError.OK) {
                    isCurrentReady = true;
                    compare();
                }
            });
        }

        @Override
        public void onIdCardResult(DetectFaceResult detectFaceResult, byte[] bytes, int i, int i1) {
            LogUtils.dTag(TAG, "onIdCardResult: " + detectFaceResult.getErrCode());
            runOnUiThread(() -> {
                //ID-card face feature extracted successfully
                if (detectFaceResult.getErrCode() == IdCardVerifyError.OK) {
                    isIdCardReady = true;
                    restartHandler.removeCallbacks(restartRunnable);
                    readHandler.postDelayed(readRunnable, READ_DELAY);
                    ByteArrayOutputStream baos = new ByteArrayOutputStream();
                    bmp.compress(Bitmap.CompressFormat.PNG, 80, baos);
                    byte[] bmpBytes = baos.toByteArray();
                    Glide.with(MainActivity.this).load(bmpBytes).into(ivIdCard);
                    compare();
                }
            });
        }
    };

ArcFace 3.0 :

 FaceListener faceListener = new FaceListener() {
            @Override
            public void onFail(Exception e) {
            }

            @Override
            public void onFaceFeatureInfoGet(@Nullable FaceFeature faceFeature, Integer requestId, Integer errorCode, long frTime, byte[] nv21) {
                //feature extraction failed: mark the comparison status as FAILED
                if (ErrorInfo.MOK != errorCode) {
                    requestFeatureStatusMap.put(requestId, RequestFeatureStatus.FAILED);
                    return;
                }
                //requestId == -2 means the data came from the ID photo
                if (requestId == -2) {
                    isIdCardReady = true;
                    //because of the interface change the feature is no longer stored inside the engine, so we keep it in a member variable
                    idFaceFeature = faceFeature;
                    restartHandler.removeCallbacks(restartRunnable);
                    readHandler.postDelayed(readRunnable, 5000);
                    ByteArrayOutputStream baos = new ByteArrayOutputStream();
                    bmp.compress(Bitmap.CompressFormat.PNG, 100, baos);
                    runOnUiThread(() -> {
                        Glide.with(MainActivity.this).load(bmp).into(ivIdCard);
                        compare();
                    });
                } else {
                    //because of the interface change the feature is no longer stored inside the engine, so we keep it in a member variable
                    MainActivity.this.faceFeature = faceFeature;
                    isCurrentReady = true;
                    runOnUiThread(() -> {
                        compare();
                    });
                }
            }
        };
  3.7 The compare interface

  Compared with the steps above, the change to the comparison interface is much smaller; just remember to switch the comparison model to ID_CARD.

IdCardVerify 2.0:

 private void compare() {
        //.......
        //ID-face feature comparison interface
        CompareResult compareResult = IdCardVerifyManager.getInstance().compareFeature(THRESHOLD);
        LogUtils.dTag(TAG, "compareResult: result " + compareResult.getResult() + ", isSuccess "
                + compareResult.isSuccess() + ", errCode " + compareResult.getErrCode());
        if (compareResult.isSuccess()) {
            playSound(R.raw.compare_success);
            ivCompareResult.setBackgroundResource(R.mipmap.compare_success);
            tvCompareTip.setText(name);
        } else {
            playSound(R.raw.compare_fail);
            ivCompareResult.setBackgroundResource(R.mipmap.compare_fail);
            tvCompareTip.setText(R.string.tip_retry);
        }
        //.......
    }

ArcFace 3.0 :

   private void compare() {
        //.......
        //ID-face feature comparison interface
        FaceSimilar compareResult = new FaceSimilar();
        faceEngine.compareFaceFeature(idFaceFeature, faceFeature, CompareModel.ID_CARD, compareResult);
        //the threshold for ID-face comparison is 0.82
        if (compareResult.getScore() > 0.82) {
            playSound(R.raw.compare_success);
            ivCompareResult.setBackgroundResource(R.mipmap.compare_success);
            tvCompareTip.setText(name);
        } else {
            playSound(R.raw.compare_fail);
            ivCompareResult.setBackgroundResource(R.mipmap.compare_fail);
            tvCompareTip.setText(R.string.tip_retry);
        }
        //.......
    }
 3.8 Results

  At this point, once the unused code has been deleted from the IdCardVerify 2.0 demo, we have successfully upgraded it from 2.0 to 3.0. Here is a screenshot of a successful run.

[Screenshot: the upgraded demo running successfully]

III. Modifying the ArcFace 3.0 demo into an ID-face verification program

  Compared with upgrading IdCardVerify 2.0 to ArcFace 3.0, modifying the ArcFace 3.0 demo directly is far simpler: we do not have to change every interface, and all we need to add is the ID-photo input, its callback and the comparison logic. So here I strongly recommend starting directly from ArcFace 3.0; unless you have a special reason, modifying 3.0 is much faster than upgrading from IdCardVerify 2.0.

Choosing the Activity to modify

  First we need to choose an Activity in the demo as the template for our changes. Having looked through the demo, I think RegisterAndRecognizeActivity is the most suitable, because its camera comparison workflow is already complete. We only need to do two things:

  • Add the ID-card data input source
    For the ID-card input source we simulate the ID-photo input in the same way as the IdCardVerify demo, so the inputIdCard method can be reused as-is.
 public void onClickIdCard(View view) {
        //simulated ID-card name, modifiable
        FileInputStream fis;
        //ID-card image data
        bmp = null;
        try {
            //simulated source of the ID-card image data, modifiable
            fis = new FileInputStream(SAMPLE_FACE);
            bmp = BitmapFactory.decodeStream(fis);
            fis.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
        inputIdCard();
    }

    private void inputIdCard() {
        if (bmp == null) {
            return;
        }
        //crop the image to 4-byte alignment
        bmp = ArcSoftImageUtil.getAlignedBitmap(bmp, true);
        int width = bmp.getWidth();
        int height = bmp.getHeight();
        //convert to BGR format
        byte[] bgrData = ArcSoftImageUtil.createImageData(bmp.getWidth(), bmp.getHeight(), ArcSoftImageFormat.BGR24);
        int translateResult = ArcSoftImageUtil.bitmapToImageData(bmp, bgrData, ArcSoftImageFormat.BGR24);
        //conversion succeeded
        if (translateResult == ArcSoftImageUtilError.CODE_SUCCESS) {
            List<FaceInfo> faceInfoList = new ArrayList<>();
            //VIDEO mode is not suitable for still-image detection, so we use frEngine to detect the ID photo; remember to add FaceEngine.ASF_FACE_DETECT when initializing it
            int detectResult = frEngine.detectFaces(bgrData, width, height, FaceEngine.CP_PAF_BGR24, faceInfoList);
            if (detectResult == ErrorInfo.MOK && faceInfoList.size() > 0) {
                //-2 is used as the trackId here; camera and ID-photo extraction share the same faceHelper, so the trackId tells us which source the data came from
                faceHelper.requestFaceFeature(bgrData, faceInfoList.get(0), width, height, FaceEngine.CP_PAF_BGR24, -2);
            }
        } else {
            LogUtils.dTag(TAG, "translate Error result: " + translateResult);
        }
    }
  • Change the comparison target

  Since in the vast majority of scenarios ID-face comparison is 1:1, we adjust the onFaceFeatureInfoGet callback. First, building on the groundwork laid in inputIdCard above, a trackId of -2 marks data coming from the ID photo. Second, we need to record the ID-card feature and the camera face feature to be compared; here we use member variables. Finally, because the two features can arrive in either order, we use status flags to record readiness (you could equally check whether both features are non-null and maintain them to keep the two sides in sync); once both sides are ready, the comparison can run.

      public void onFaceFeatureInfoGet(@Nullable final FaceFeature faceFeature, final Integer requestId, final Integer errorCode) {
                //feature extraction (FR) succeeded
                if (faceFeature != null) {
                    //data coming from the ID photo
                    if (requestId == -2) {
                        isIdCardReady = true;
                        //store the feature in a member variable
                        idFaceFeature = faceFeature;
                        compare();
                        return;
                    }
//                    Log.i(TAG, "onPreview: fr end = " + System.currentTimeMillis() + " trackId = " + requestId);
                    Integer liveness = livenessMap.get(requestId);
                    //without liveness detection, proceed directly
                    if (!livenessDetect) {
                        isCurrentReady = true;
                        //prevent extracting features repeatedly for the same face
                        requestFeatureStatusMap.put(requestId, RequestFeatureStatus.SUCCEED);
                        compare();
//                        searchFace(faceFeature, requestId);
                    }
                    //liveness check passed, proceed with the feature
                    else if (liveness != null && liveness == LivenessInfo.ALIVE) {
                        isCurrentReady = true;
                        //prevent extracting features repeatedly for the same face
                        RegisterAndRecognizeActivity.this.faceFeature = faceFeature;
                        requestFeatureStatusMap.put(requestId, RequestFeatureStatus.SUCCEED);
                        compare();
//                        searchFace(faceFeature, requestId);
                    }
                    //liveness result not ready yet, or not a live face: retry this function after a delay
                    else {
                        //......
                    }
                }
                //feature extraction failed
                else {
                   //.........
                }
            }

            @Override
            public void onFaceLivenessInfoGet(@Nullable LivenessInfo livenessInfo, final Integer requestId, Integer errorCode) {
                //.....
            }
        };
  • The compare function:
    private void compare() {
        if (isCurrentReady && isIdCardReady) {
            FaceSimilar similar = new FaceSimilar();
            int compareResult = frEngine.compareFaceFeature(idFaceFeature, faceFeature, CompareModel.ID_CARD, similar);
            if (compareResult == ErrorInfo.MOK && similar.getScore() > 0.82) {
                Log.i(TAG, "compare: success");
            } else {
                Log.i(TAG, "compare: fail");
            }
            //reset the comparison status after the comparison finishes
            isIdCardReady = false;
            isCurrentReady = false;
            //allow the same face to be extracted again if another attempt is wanted after the comparison
            requestFeatureStatusMap.clear();
        }
    }
Summary

  Working from ArcFace 3.0 feels noticeably smoother: on top of the original code we only need to take care of the ID-card data input and the logic before and after the comparison, and the comparison itself is almost trivial, just a simple interface call. What I wrote here is also fairly minimal. Some business logic is not implemented, such as a validity period for the ID-card data, a mandatory order between the two data sources, and the UI display; I only print the comparison result. This article only offers an approach for reference, and the business logic still needs to be added yourself. A sketch of the validity-period idea follows, and the screenshot after it shows the log of a successful comparison after the modification.
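  As one example of that kind of business logic, here is a possible sketch (not part of the demo) of a validity period for the ID-card data: record a timestamp when the ID feature arrives and treat it as expired after a fixed window. The field names and the 10-second window are assumptions of mine:

    private static final long ID_CARD_VALID_MS = 10_000;   //assumed validity window
    private long idCardReadyTime = 0;

    //call this where the ID feature is stored (e.g. the requestId == -2 branch of onFaceFeatureInfoGet)
    private void markIdCardReady() {
        isIdCardReady = true;
        idCardReadyTime = System.currentTimeMillis();
    }

    //check this at the start of compare() instead of only isIdCardReady
    private boolean isIdCardStillValid() {
        return isIdCardReady
                && System.currentTimeMillis() - idCardReadyTime < ID_CARD_VALID_MS;
    }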

[Screenshot: log of a successful comparison]

Conclusion

  Overall, ArcSoft's ID-face SDK is really quite good. In terms of recognition quality, my ID-card photo was taken about 7 or 8 years ago and I look rather different now, yet the accuracy is still very high. In terms of development cost, permanently free is genuinely attractive, and ArcSoft deserves credit for that. As for the integration interfaces, among the SDKs I have worked with I personally find ArcSoft's to be among the easiest to get started with, and the documentation is very thorough: there are detailed descriptions of every interface parameter, demos for individual interface calls, and demos of the whole workflow, all of which smooth the road for newcomers. I ran into a few problems along the way, and ArcSoft's support staff answered my questions promptly and accurately. I look forward to ArcSoft's future development and hope they bring us even better products!

Appendix

1. Demo of the IdCardVerify 2.0 demo with the ArcFace 3.0 SDK integrated:
https://github.com/1244975831/ArcSoft_IdCardVeriDemo_Android

2. Demo of the ArcFace 3.0 demo modified into an ID-face verification program:
https://github.com/1244975831/ArcfaceDemo_IdCard_Android

If my demos are helpful to you, please remember to star the projects.

Source: blog.51cto.com/14633836/2463883