Baidu Brain UNIT 3.0 Explained: Embedded Dialogue Understanding Technology

       Many people have felt the anxiety of a phone with no network: without a connection, nothing works. Robots run into the same moment. With no network, or a poor one, they cannot recognize what the user says and cannot respond. Now that AIoT (AI + IoT) is spreading rapidly, intelligent dialogue has reached many industries and been integrated into countless smart devices, such as smart homes and smart cars. The intelligent capabilities of these devices typically rely on online services, yet smart devices, and mobile ones in particular, may find themselves without a network.

  In the AIoT field going forward, most scenarios will require devices to decide and respond locally at the edge. Each device needs its own on-device computing power, independent of the cloud, so it can complete intelligent dialogue recognition locally and respond to user interaction under any network conditions. To address this pain point, Baidu launched embedded dialogue understanding technology in UNIT 3.0. With it, semantic recognition can run locally without a network connection; combined with local speech recognition and the AI cloud, local and cloud capabilities cooperate effectively to meet users' dialogue needs anytime, anywhere.

  [Embedded dialogue understanding technology explained]

  The technical framework of embedded dialogue understanding, shown in the figure above, is delivered to developers as an integrated SDK. Developers can package it as a system application according to their own system and integrate it into their devices as a whole. Internally, the SDK provides offline distribution control, managing the distribution of multiple scenes: several scenes can be integrated into one SDK, with unified control over offline distribution, prioritization, and other management functions.

  Each individual scene consists mainly of the offline semantic parsing capability for one vertical skill, made up of a basic parsing module, heuristic semantic understanding technology, a result selection module, and an offline semantic parsing model.

  The basic parsing module covers query feature analysis, Paddle model results, and general-purpose analysis components (word segmentation, named entity recognition, and so on).

  The heuristic semantic understanding technology includes template-based heuristic matching, generalized sample matching, and fusion of the resulting candidates.

  The result selection module handles result selection for both multi-turn and single-turn dialogues; a generic sketch of the matching and selection steps appears at the end of this section.

  The overall solution also provides log statistics and analysis capabilities, so developers can analyze and improve the results in actual use.
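  To make the template-matching and result-selection ideas above more concrete, here is a minimal, generic sketch in Java. It is not the real UNIT SDK: the class names, the regex-based template format, and the scoring rule are all assumptions made purely to illustrate how heuristic matching can feed a selection step.

```java
import java.util.*;
import java.util.regex.*;

// Toy illustration of template-based heuristic matching plus a simple
// result-selection step. All names, templates and the scoring rule are
// invented for this sketch; they are not the real UNIT SDK API.
public class ToyOfflineParser {

    // One candidate parse: an intent, its slots, and a confidence score.
    record ParseResult(String intent, Map<String, String> slots, double score) {}

    // A template: target intent, a regex with named groups, and the slot names.
    record Template(String intent, Pattern pattern, List<String> slotNames) {}

    private final List<Template> templates = List.of(
            new Template("MUSIC_PLAY",
                    Pattern.compile("play (?<song>.+?) by (?<artist>.+)"),
                    List.of("song", "artist")),
            new Template("WEATHER_QUERY",
                    Pattern.compile("(?:weather|forecast) (?:in|for) (?<city>.+)"),
                    List.of("city")));

    // Heuristic matching: try every template and keep all candidates.
    public List<ParseResult> match(String query) {
        List<ParseResult> candidates = new ArrayList<>();
        String q = query.toLowerCase(Locale.ROOT).trim();
        for (Template t : templates) {
            Matcher m = t.pattern().matcher(q);
            if (m.matches()) {
                Map<String, String> slots = new LinkedHashMap<>();
                for (String name : t.slotNames()) {
                    slots.put(name, m.group(name));
                }
                // Toy scoring rule: more filled slots -> higher confidence.
                candidates.add(new ParseResult(t.intent(), slots,
                        0.5 + 0.1 * slots.size()));
            }
        }
        return candidates;
    }

    // Result selection: keep the highest-scoring candidate, if any.
    public Optional<ParseResult> selectBest(String query) {
        return match(query).stream()
                .max(Comparator.comparingDouble(ParseResult::score));
    }

    public static void main(String[] args) {
        ToyOfflineParser parser = new ToyOfflineParser();
        System.out.println(parser.selectBest("Play Yesterday by The Beatles"));
        System.out.println(parser.selectBest("Weather in Beijing"));
    }
}
```

  A production system would additionally generalize samples, fuse model scores with template scores, and track multi-turn context, but the overall shape of the pipeline, match candidates first and then select one, is the same.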

  [How to use embedded dialogue understanding technology]

  Embedded dialogue understanding is currently available as an Android SDK; support for more systems and platforms, such as QNX and Linux, will follow. Baidu also provides training tools that developers can use to modify models and train them locally, along with complete documentation; the development kit can be downloaded for trial on the UNIT platform.

  On the UNIT platform, go to the "Innovative Technologies" area, click "Offline Semantic Parsing", fill in the required information, and follow the steps to download the corresponding development tools and installation package.

  [Get resources in four steps, with source-level control]

  Step 1: Determine whether the business is suitable for offline parsing

  Offline (disconnected) semantic parsing places certain demands on the performance of the terminal device. Developers should first confirm that their business scenario genuinely needs offline semantic capabilities and that the terminal device can support running them.
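  As a rough illustration of the kind of capability check Step 1 calls for, the following Android sketch verifies device memory and CPU architecture before enabling offline parsing. The 2 GB RAM threshold and the ABI list are placeholder assumptions, not official UNIT requirements.

```java
import android.app.ActivityManager;
import android.content.Context;
import android.os.Build;

// Rough device check before enabling offline semantic parsing.
// The thresholds below (e.g. 2 GB RAM) are placeholder assumptions,
// not official UNIT SDK requirements.
public final class OfflineCapabilityCheck {

    private static final long MIN_TOTAL_RAM_BYTES = 2L * 1024 * 1024 * 1024;

    public static boolean canRunOfflineParsing(Context context) {
        ActivityManager am =
                (ActivityManager) context.getSystemService(Context.ACTIVITY_SERVICE);
        ActivityManager.MemoryInfo info = new ActivityManager.MemoryInfo();
        am.getMemoryInfo(info);

        boolean enoughRam = info.totalMem >= MIN_TOTAL_RAM_BYTES;

        // Check that the device ABI matches the native libraries you ship.
        boolean abiSupported = false;
        for (String abi : Build.SUPPORTED_ABIS) {
            if (abi.equals("arm64-v8a") || abi.equals("armeabi-v7a")) {
                abiSupported = true;
                break;
            }
        }
        return enoughRam && abiSupported;
    }

    private OfflineCapabilityCheck() {}
}
```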

  Step 2: Obtain existing scene files and modify them at the source level

  UNIT 3.0 already provides multiple scenes under its skills, and their files can be downloaded directly from the platform. For some scenes, developers need to submit an application to UNIT; the official team will then discuss the requirements and provide support. After downloading a skill's files, developers who want to change or enhance the skill, or who have specific performance requirements, can modify the template contents themselves, for example by adding new query patterns or vocabulary; instructions for configuring the data optimally are provided on the platform.

  Step 3: Run the localized training tool

  Once the scene files from Step 2 have been modified, developers use the training tool to train the scene themselves; when training finishes, a new model file is produced. The training tool, how to invoke it, and usage instructions are all available on the UNIT platform for developers to download and use at any time.

  Step 4: Integrate the model and use the SDK

  Place the model file generated in Step 3 into the corresponding SDK directory as required. The SDK itself can be downloaded directly from the UNIT platform and, once called according to the instructions, can be used right away.
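  The sketch below illustrates what Step 4 might look like on Android: a trained model file bundled with the app is copied into the directory the SDK is assumed to read from, after which the SDK would be initialized and called. The directory layout and the UnitOfflineSdk, loadScene, and parse names are hypothetical stand-ins; the actual paths and calls are defined in the official SDK instructions.

```java
import android.content.Context;
import java.io.*;

// Illustrative only: the SDK class and method names mentioned in the
// comments below are hypothetical stand-ins, not the real UNIT offline
// SDK API. Follow the official instructions for actual integration.
public class OfflineSdkIntegration {

    // Copy the trained model file bundled in assets/ into the directory
    // the SDK is assumed to read from (the path here is an assumption).
    public static File installModel(Context context, String assetName) throws IOException {
        File modelDir = new File(context.getFilesDir(), "unit_offline/models");
        if (!modelDir.exists() && !modelDir.mkdirs()) {
            throw new IOException("Cannot create model directory");
        }
        File target = new File(modelDir, assetName);
        try (InputStream in = context.getAssets().open(assetName);
             OutputStream out = new FileOutputStream(target)) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) > 0) {
                out.write(buf, 0, n);
            }
        }
        return target;
    }

    // Hypothetical follow-up, sketched as comments because the real API
    // is defined by the downloaded SDK documentation:
    //   UnitOfflineSdk sdk = UnitOfflineSdk.init(context, modelDir);
    //   sdk.loadScene("weather_scene.model");
    //   ParseResult result = sdk.parse("What is the weather in Beijing tomorrow?");
}
```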

  [Deploying the online-offline fused dialogue capability]

  Different business scenarios use different parsing and dialogue capabilities. Online services deliver better understanding and dialogue quality, while offline capabilities keep the core functions of an intelligent interactive device stable in any environment. UNIT 3.0 provides an online-offline fused dialogue understanding framework, so developers can flexibly choose between offline and online capabilities according to their own business.

Developers can detect the network signal of their terminal device. When the signal is strong, all requests can go through online parsing: the cloud servers have better computing resources, so understanding quality is higher. When the signal is weak, it is recommended to run both the online and the offline paths: the offline SDK responds quickly, while the online service may incur some delay on an unstable network. Developers can then decide, according to their business's performance requirements, whether to use the offline result or wait for the online one. When the terminal has no network at all, only the offline SDK is available, and it responds to the user quickly.
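The routing policy described above can be sketched as follows. The Parser interfaces, the 300 ms latency budget, and the three-way network-state classification are assumptions for illustration; in a real app the online branch would call the cloud service and the offline branch the on-device SDK.

```java
import java.util.concurrent.*;

// Sketch of the routing strategy described above: strong network -> online,
// weak network -> run both and prefer the online result within a deadline,
// no network -> offline only. The parser interfaces, the 300 ms budget and
// the network-state enum are assumptions for illustration.
public class DialogRouter {

    public enum NetworkState { STRONG, WEAK, NONE }

    public interface Parser { String parse(String query); }

    private final Parser onlineParser;   // wraps the cloud service call
    private final Parser offlineParser;  // wraps the on-device SDK
    private final ExecutorService pool = Executors.newCachedThreadPool();

    public DialogRouter(Parser onlineParser, Parser offlineParser) {
        this.onlineParser = onlineParser;
        this.offlineParser = offlineParser;
    }

    public String parse(String query, NetworkState state) {
        switch (state) {
            case STRONG:
                // Good signal: the cloud has more compute, so always go online.
                return onlineParser.parse(query);
            case NONE:
                // No network: the offline SDK is the only option.
                return offlineParser.parse(query);
            case WEAK:
            default:
                // Weak signal: fire both, take the online answer if it arrives
                // within the latency budget, otherwise fall back to offline.
                CompletableFuture<String> online =
                        CompletableFuture.supplyAsync(() -> onlineParser.parse(query), pool);
                String offlineResult = offlineParser.parse(query);
                try {
                    return online.get(300, TimeUnit.MILLISECONDS);
                } catch (TimeoutException | ExecutionException e) {
                    return offlineResult;
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return offlineResult;
                }
        }
    }
}
```

With this shape, a strong signal always takes the online path, a weak signal races both and falls back to the offline result when the online call misses its deadline, and no signal degrades gracefully to offline-only.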

Related articles:

Baidu Brain UNIT 3.0 Explained: Conversational Document Q&A: Upload a Document to Gain Dialogue Capabilities

Baidu Brain UNIT 3.0 Explained: The Integrated Speech and Semantics Solution

Baidu Brain UNIT 3.0 Explained: The Data Production Tool DataKit

Baidu Brain UNIT 3.0 Explained: Knowledge Graphs and Dialogue

 
