Ultra-simple integration of the HMS Core sound detection service to create a new security management experience

Preface

  Recently, I saw the news that the Massachusetts Institute of Technology has developed an AI model that can recognize the coughs of people infected with COVID-19. By analyzing cough recordings, it can distinguish asymptomatic infected people from healthy people.

  Doesn't AI technology feel magical and powerful? It can distinguish asymptomatic carriers from healthy people just from a cough.

  In fact, AI sound recognition technology is increasingly used in safety and security detection scenarios.

  Next, let's take a look at Huawei's sound detection service.

Service Introduction

  Huawei's sound detection service detects sound events in the environment in online (real-time recording) mode. Based on the detected sound events, it helps developers perform follow-up actions, such as notifying users of what is happening around them through the app and prompting them to react accordingly.

  The sound detection service currently supports detection of 13 types of sound events:

  • Laughter

  • Baby crying

  • Snoring

  • Sneezing

  • Shouting

  • Cat meowing

  • Dog barking

  • Running water (including water taps, streams, and ocean waves)

  • Car horn

  • Doorbell

  • Knocking

  • Fire alarm (including fire alarms and smoke alarms)

  • Alarm siren (including fire truck, ambulance, police car, and air-raid sirens)

Application scenarios

  Huawei's sound detection service can be used in scenarios such as assistance for the hearing impaired, health statistics, and infant care. It has a wide range of applications and can improve user experience and safety.

  For example, with the help of the sound detection service, hearing-impaired users can quickly learn about events happening around them and react quickly to dangers signaled by fire alarms, sirens, screams, or running water.

  Parents of infants and toddlers can keep track of their child's state at any time through the sound detection service: after the mobile app notifies them that the baby is crying, they can attend to the child promptly without having to stay by the crib at all times.

  In addition, the sound detection service can detect and record events such as snoring and sneezing in real time, and the recorded data can then be analyzed for health statistics.

  Huawei's sound detection service is easy to use: it provides APIs and an SDK, and developers only need to call the APIs to complete their development.

Development steps

1 Configure AppGallery Connect.

  Before developing an application, you need to configure relevant information in AppGallery Connect.

  For specific steps, please refer to the link below:

https://developer.huawei.com/consumer/cn/doc/development/HMSCore-Guides-V5/config-agc-0000001050990353-V5

2 Configure the Maven repository address of the HMS Core SDK and complete the SDK integration of this service (an app-level sketch follows the project-level configuration below).

2.1 Open the Android Studio project-level "build.gradle" file.

2.2 Add the HUAWEI agcp plugin and the Maven repository.

  • Configure the Maven repository address of the HMS Core SDK in "allprojects > repositories".

  • Configure the Maven repository address of the HMS Core SDK in "buildscript > repositories".

  • If the "agconnect-services.json" file has been added to the app, you need to add the agcp configuration in "buildscript > dependencies".
buildscript {
    repositories {
        google()
        jcenter()
        maven {url 'https://developer.huawei.com/repo/'}
    }
    dependencies {
        ...
        classpath 'com.huawei.agconnect:agcp:1.4.1.300'
    }
}

allprojects {
    repositories {
        google()
        jcenter()
        maven {url 'https://developer.huawei.com/repo/'}
    }
}
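To complete this step, the app-level "build.gradle" also needs to apply the agconnect plugin and declare the sound detection SDK dependencies. The sketch below is a minimal example; the artifact names and version numbers are assumptions, so check the official integration guide for the current coordinates.

apply plugin: 'com.android.application'
apply plugin: 'com.huawei.agconnect'

dependencies {
    // Sound detection SDK and on-device model (artifact names/versions assumed; see the official docs)
    implementation 'com.huawei.hms:ml-speech-semantics-sounddect-sdk:2.1.0.300'
    implementation 'com.huawei.hms:ml-speech-semantics-sounddect-model:2.1.0.300'
}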

3 Create a sound detection instance.

MLSoundDector soundDector = MLSoundDector.createSoundDector();

4 Create a sound detection result callback to obtain the detection result, and bind the callback to the sound detection instance.

private MLSoundDectListener listener = new MLSoundDectListener() {
    @Override
    public void onSoundSuccessResult(Bundle result) {
        // Handle a successful detection. The result is an integer from 0 to 12, corresponding to the
        // 13 sound types defined in MLSoundDectConstants.java with names starting with SOUND_EVENT_TYPE.
        int soundType = result.getInt(MLSoundDector.RESULTS_RECOGNIZED);
    }
    @Override
    public void onSoundFailResult(int errCode) {
        // Detection failed, for example because the microphone permission
        // (Manifest.permission.RECORD_AUDIO) has not been granted.
    }
};
soundDector.setSoundDectListener(listener);
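Inside onSoundSuccessResult, the returned integer can be compared against the constants in MLSoundDectConstants to react to a specific event. The sketch below assumes the SOUND_EVENT_TYPE_BABY_CRY constant name based on the SOUND_EVENT_TYPE naming convention (verify the exact names in MLSoundDectConstants.java); notifyUser() is a hypothetical app-side helper.

// Inside onSoundSuccessResult(Bundle result):
int soundType = result.getInt(MLSoundDector.RESULTS_RECOGNIZED);
// SOUND_EVENT_TYPE_BABY_CRY is assumed from the SOUND_EVENT_TYPE naming convention.
if (soundType == MLSoundDectConstants.SOUND_EVENT_TYPE_BABY_CRY) {
    notifyUser("Baby crying detected");  // notifyUser() is a hypothetical helper, e.g. posting a notification
}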

5 Start detection.

boolean isStarted = soundDector.start(context); // context is the application context
// isStarted == true means detection started successfully; false means it failed to start
// (for example, because the phone's microphone is occupied by the system or another app).
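Because detection records audio in real time, the RECORD_AUDIO runtime permission must be granted before start() is called; otherwise onSoundFailResult will be triggered. Below is a minimal sketch inside an Activity using the standard Android permission APIs; startDetectionWithPermissionCheck() is a hypothetical helper name and the request code value is arbitrary.

import android.Manifest;
import android.content.pm.PackageManager;
import androidx.core.app.ActivityCompat;
import androidx.core.content.ContextCompat;

// Called from an Activity before starting detection.
private void startDetectionWithPermissionCheck() {
    if (ContextCompat.checkSelfPermission(this, Manifest.permission.RECORD_AUDIO)
            != PackageManager.PERMISSION_GRANTED) {
        // Ask the user for microphone access first; start detection after the permission is granted.
        ActivityCompat.requestPermissions(this, new String[]{Manifest.permission.RECORD_AUDIO}, 1);
    } else {
        boolean isStarted = soundDector.start(this);
    }
}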

6 Stop detection.

soundDector.stop();

7 When detection ends, release resources.

soundDector.destroy();
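In a typical Activity, stop() and destroy() can be tied to the lifecycle so that the microphone is released as soon as the app leaves the foreground. A minimal sketch, assuming the detector was created in onCreate():

@Override
protected void onPause() {
    super.onPause();
    // Stop listening while the app is in the background to release the microphone.
    soundDector.stop();
}

@Override
protected void onDestroy() {
    super.onDestroy();
    // Release all resources held by the detector.
    soundDector.destroy();
}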

Demo


For more details, please refer to:

Official website of Huawei Developer Alliance:https://developer.huawei.com/consumer/cn/hms

Obtain development guidance documents:https://developer.huawei.com/consumer/cn/doc/development

To participate in developer discussions, please go to the Reddit community:https://www.reddit.com/r/HMSCore/

To download the demo and sample code, please go to Github:https://github.com/HMS-Core

To solve integration problems, please go to Stack Overflow:https://stackoverflow.com/questions/tagged/huawei-mobile-services?tab=Newest


Original link:
https://developer.huawei.com/consumer/cn/forum/topic/0201411999326170397?fid=18
Author: say hi
