Ultra-simple integration of HMS Core ML Kit scene recognition to build a new way of managing albums

Preface

"Let me show you the photos I took before going to the scenic spot. The scenery is very good"

"Yeah, I happen to be going out to play too, share it soon"

……

"What about the photo, haven't you found it?"

"Wait a minute, there are too many photos in the phone, give me some time to find them"

Is this the norm for many people?

Looking at hundreds or even thousands of photos on the phone, trying to find a specific photo is like finding a needle in a haystack, which takes time and effort. Is it only possible to browse through the album from beginning to end, and can’t search according to the category of items in the photo?

Of course we can. The scene recognition function of Huawei's machine learning service (ML Kit) accurately classifies photos by identifying and labeling the content in the pictures, which makes it possible to build a smart album. With this function, we can quickly locate the target photo.

Features

Huawei's scene recognition service classifies the scene content of pictures and adds annotation information. It supports 102 scenes, such as food, flowers, green plants, cats, dogs, kitchens, mountains, and washing machines, and the identified information can be used to build a smarter photo album experience.

Scene recognition has the following features:

  • Multi-type scene recognition
    Supports 102 scenes, with more being added continuously.

  • High recognition accuracy
    Recognizes a wide variety of objects and scenes with a high accuracy rate.

  • Fast response
    Recognition is completed within milliseconds, and performance is continuously optimized.

  • Simple and efficient integration
    Provides an API and an SDK package for easy integration, simple operation, and lower development costs.

Application scenario

In addition to building smart albums and retrieving and classifying photos, scene recognition can also identify the shooting scene so that the camera automatically selects the matching scene filters and camera parameters, helping users take better-looking photos.

Development code

1 Development preparation

1.1 Configure AppGallery Connect.

Before developing an application, you need to configure the relevant information in AppGallery Connect.
For detailed steps, refer to the link below:
https://developer.huawei.com/consumer/cn/doc/development/HMSCore-Guides-V5/config-agc-0000001050990353-V5

1.2 Configure the Maven repository address of the HMS Core SDK and integrate the SDK of this service.

(1) Open the Android Studio project-level "build.gradle" file.

(2) Add the HUAWEI agcp plugin and the Maven repository.

  • Configure the Maven repository address of the HMS Core SDK in "allprojects > repositories".
  • Configure the Maven repository address of the HMS Core SDK in "buildscript > repositories".
  • If the "agconnect-services.json" file has been added to the app, add the agcp plugin configuration in "buildscript > dependencies".
buildscript {
    repositories {
        google()
        jcenter()
        maven {url 'https://developer.huawei.com/repo/'}
    }
    dependencies {
        ...
        classpath 'com.huawei.agconnect:agcp:1.4.1.300'
    }
}

allprojects {
    repositories {
        google()
        jcenter()
        maven {url 'https://developer.huawei.com/repo/'}
    }
}
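The configuration above only adds the repositories. To complete the SDK integration mentioned in step 1.2, the scene detection SDK also needs to be declared in the app-level "build.gradle" and the agcp plugin applied. The following is a minimal sketch, not the official sample: the version numbers are placeholders and should be replaced with the latest release listed in the HMS Core documentation.

// App-level build.gradle: apply the agcp plugin and declare the scene detection SDK.
apply plugin: 'com.huawei.agconnect'

dependencies {
    // Scene detection base SDK (version number is a placeholder; use the latest release).
    implementation 'com.huawei.hms:ml-computer-vision-scenedetection:2.0.3.300'
    // Scene detection model package (version number is a placeholder; use the latest release).
    implementation 'com.huawei.hms:ml-computer-vision-scenedetection-model:2.0.3.300'
}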

2 Development code

Still image detection

2.1 Create a scene recognition analyzer instance.

// Method 1: Use the default parameter configuration.
MLSceneDetectionAnalyzer analyzer = MLSceneDetectionAnalyzerFactory.getInstance().getSceneDetectionAnalyzer();
// Method 2: Create a scene recognition analyzer instance with a custom configuration.
MLSceneDetectionAnalyzerSetting setting = new MLSceneDetectionAnalyzerSetting.Factory()
     // Set the confidence threshold for scene recognition (a float value, e.g. 0.8f).
     .setConfidence(confidence)
     .create();
MLSceneDetectionAnalyzer analyzer = MLSceneDetectionAnalyzerFactory.getInstance().getSceneDetectionAnalyzer(setting);

2.2 Construct an MLFrame object from an android.graphics.Bitmap. Supported picture formats include JPG, JPEG, PNG, and BMP.

MLFrame frame = new MLFrame.Creator().setBitmap(bitmap).create();
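The snippet above assumes that a Bitmap object is already available. As a minimal sketch (the photoPath variable is a hypothetical file path used only for illustration), the bitmap can be decoded from a local image file and then wrapped into an MLFrame:

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import com.huawei.hms.mlsdk.common.MLFrame;

// Decode a local image file into a Bitmap; "photoPath" is a hypothetical path used for illustration.
Bitmap bitmap = BitmapFactory.decodeFile(photoPath);
// Wrap the decoded bitmap into an MLFrame for analysis.
MLFrame frame = new MLFrame.Creator().setBitmap(bitmap).create();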

2.3 Perform scene recognition.

// Method 1: Synchronous recognition.
SparseArray<MLSceneDetection> results = analyzer.analyseFrame(frame);
// Method 2: Asynchronous recognition.
Task<List<MLSceneDetection>> task = analyzer.asyncAnalyseFrame(frame);
task.addOnSuccessListener(new OnSuccessListener<List<MLSceneDetection>>() {
    public void onSuccess(List<MLSceneDetection> result) {
        // Processing logic for successful scene recognition.
    }
}).addOnFailureListener(new OnFailureListener() {
    public void onFailure(Exception e) {
        // Processing logic for failed scene recognition.
        if (e instanceof MLException) {
            MLException mlException = (MLException) e;
            // Obtain the error code. Developers can handle it and show different messages on the page accordingly.
            int errorCode = mlException.getErrCode();
            // Obtain the error message. Combined with the error code, it helps locate the problem quickly.
            String errorMessage = mlException.getMessage();
        } else {
            // Handle other exceptions.
        }
    }
});
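How the results are used depends on the app. As a minimal sketch (the log tag is illustrative, not part of the official sample), the synchronous results can be iterated to read each detected scene label and its confidence via the getResult() and getConfidence() accessors of MLSceneDetection:

// Iterate over the synchronous detection results and log each scene label with its confidence.
for (int i = 0; i < results.size(); i++) {
    MLSceneDetection sceneDetection = results.valueAt(i);
    Log.i("SceneDetection", "scene: " + sceneDetection.getResult()
            + ", confidence: " + sceneDetection.getConfidence());
}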

2.4 After the detection is complete, stop the analyzer and release the detection resources.

if (analyzer != null) {
    analyzer.stop();
}

Video stream detection

Developers can process the video stream themselves, convert the video frames into MLFrame objects, and then perform scene recognition in the same way as still image detection.

If the synchronous detection API is called, developers can also use the LensEngine class built into the SDK to perform scene recognition on the video stream. The sample code is as follows:

3.1 Create a scene recognition analyzer. Only the on-device scene recognition analyzer is supported.

MLSceneDetectionAnalyzer analyzer = MLSceneDetectionAnalyzerFactory.getInstance().getSceneDetectionAnalyzer();

3.2 Create the recognition result processing class "SceneDetectionAnalyzerTransactor", which implements the MLAnalyzer.MLTransactor<T> interface and uses its transactResult method to obtain the detection results and implement the specific service logic.

public class SceneDetectionAnalyzerTransactor implements MLAnalyzer.MLTransactor<MLSceneDetection> {
    @Override
    public void transactResult(MLAnalyzer.Result<MLSceneDetection> results) {
        SparseArray<MLSceneDetection> items = results.getAnalyseList();
        // Process the recognition results as needed. Note that only the detection results should be
        // handled here; do not call other detection APIs provided by ML Kit in this callback.
    }
    @Override
    public void destroy() {
        // Callback invoked when detection ends; used to release resources, etc.
    }
}

3.3 Set the recognition result processor to bind the analyzer to the result processor.

analyzer.setTransactor(new SceneDetectionAnalyzerTransactor());
// Create a LensEngine. This class, provided by the ML Kit SDK, captures the dynamic video stream
// from the camera and passes it to the analyzer.
Context context = this.getApplicationContext();
LensEngine lensEngine = new LensEngine.Creator(context, this.analyzer)
    .setLensType(LensEngine.BACK_LENS)
    .applyDisplayDimension(1440, 1080)
    .applyFps(30.0f)
    .enableAutomaticFocus(true)
    .create();

3.4 Call the run method to start the camera, read the video stream, and perform recognition. Make sure the CAMERA permission has been granted before starting the LensEngine.

// Implement other logic of the SurfaceView control by yourself.
SurfaceView mSurfaceView = findViewById(R.id.surface_view);
try {
    lensEngine.run(mSurfaceView.getHolder());
} catch (IOException e) {
    // Exception handling logic.
}

3.5 After the detection is complete, stop the analyzer and release the detection resources.

if (analyzer != null) {
    analyzer.stop();
}
if (lensEngine != null) {
    lensEngine.release();
}
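In a real application these release calls usually belong in the activity's lifecycle callbacks, so that the camera stream and the analyzer are freed when the page is closed. A minimal sketch, assuming analyzer and lensEngine are fields of the activity:

@Override
protected void onDestroy() {
    super.onDestroy();
    // Stop the analyzer and release the camera stream when the activity is destroyed.
    if (analyzer != null) {
        analyzer.stop();
    }
    if (lensEngine != null) {
        lensEngine.release();
    }
}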

Demo

The demo effect is shown in the screenshots in the original post (linked below).


Original link: https://developer.huawei.com/consumer/cn/forum/topic/0201404868263200225?fid=18

Author: say hi
