Voice Dialogue with the NAO Robot (Smart Version)


1. Speech capture

The NAO's microphones can report a sound energy value: the louder the sound, the higher the energy. This project is built mainly around that feature. The flowchart is below.

  • Flowchart
    (flowchart image omitted)
    As the flowchart shows, the logic from the start of a recording to its end is fairly involved, and some details are left out; this is only a rough overview.
  • Recording code
 def recorder(self):
        self.audio_recorder.stopMicrophonesRecording()
        time.sleep(0.2)
        energy = self.energy()
        print(energy['left'])
        self.audio_recorder.startMicrophonesRecording(record_path, "wav", 16000, (0,0,1,0))   
        print("record begin")
        wait = 0
        global flag
        flag = 0
        global flag_one
        flag_one = 0
        while 1:
            energy = self.energy()
            time.sleep(0.1) 
            if energy['left'] < 400 and wait <5:
                print("no body:",energy['left'],float(wait))
                wait +=0.1
                

            elif energy['left'] >600:
                print("have people:",energy['left'],wait)
                wait = 4.7
                continue

            elif wait >= 5:
                print("record over ")
                self.audio_recorder.stopMicrophonesRecording()
                break
            else:
                continue

            if round(wait,1) == 4.0:
                print("int wait:",int(wait))
                self.answer_nao.say("你还有什么要说的吗,没有我要休眠咯")
                self.audio_recorder.stopMicrophonesRecording()
                time.sleep(1)
                self.audio_recorder.startMicrophonesRecording(record_path,"wav",16000,(0,0,1,0))
                wait = 3
                flag = 1
                
                while 1:
                    energy = self.energy()
                    time.sleep(0.1) 
                    if energy['left'] < 400 and wait <5:
                        print("no body:",energy['left'],float(wait))
                        wait +=0.1

                    elif energy['left'] >600:
                        print("have people:",energy['left'],wait)
                        wait = 4.7
                        continue
                    elif round(wait,1) == 4.8:
                        self.answer_nao.say("慢走,期待下次与您相遇")
                        self.audio_recorder.stopMicrophonesRecording()
                        return
                    elif wait >= 5:
                        print("record over ")
                        self.audio_recorder.stopMicrophonesRecording()
                        break
                    else:
                        continue
                msg = listen()
                msg = str(msg)
                print(msg)
                time.sleep(1)
                if "没" in msg:
                    self.answer_nao.say("拜拜")
                    time.sleep(2)
                    break
                elif msg == "None":
                    self.answer_nao.say("很高心跟您对话,期待再次与您相见")
                    time.sleep(2)
                    break
                elif msg == "":
                    self.answer_nao.say("期待再次和您相遇,再见")
                    time.sleep(2)
                    break
                else:
                    flag_one = 2
                    botMsg = turing.botInteraction(msg)  # turing is the global TuringChatMode instance
                    test = str(botMsg)
                    self.answer_nao.say(test)
                    time.sleep(0.5)
                    self.recorder()
            if flag == 1:
                break

The NAO has four microphone channels in total. The code below reads the sound energy value of each channel.

  • Reading the energy values
def energy(self):
        energy = dict()       
        energy['left'] = self.audio_device.getLeftMicEnergy()
        energy['right'] = self.audio_device.getRightMicEnergy()
        energy['front'] = self.audio_device.getFrontMicEnergy()
        energy['rear'] = self.audio_device.getRearMicEnergy()
        return energy

The code above uses these energy values to decide whether someone is speaking and when to stop recording. Energy below 400 counts as silence and advances the wait counter by 0.1 s per loop; energy above 600 counts as speech and jumps the counter to 4.7, so the recording ends roughly half a second after the speaker falls silent. If nobody speaks at all, the counter climbs from zero, the robot asks at about the 4-second mark whether you have anything more to say, and at 5 seconds the recording stops.
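
To make that logic easier to follow, here is a minimal, self-contained sketch of the same idea. The thresholds (400 / 600), the 0.1 s polling interval, and the 5 s timeout come from the code above; record_until_silence and its arguments are illustrative names, not part of the original script.

import time

SILENCE_THRESHOLD = 400   # below this, the frame counts as silence
SPEECH_THRESHOLD  = 600   # above this, someone is speaking
SILENCE_TIMEOUT   = 5.0   # seconds of accumulated silence before stopping

def record_until_silence(audio_recorder, audio_device, path):
    # Illustrative helper built on the same NAOqi calls as the recorder above.
    audio_recorder.stopMicrophonesRecording()
    audio_recorder.startMicrophonesRecording(path, "wav", 16000, (0, 0, 1, 0))
    silence = 0.0
    while True:
        time.sleep(0.1)
        energy = audio_device.getLeftMicEnergy()
        if energy > SPEECH_THRESHOLD:
            silence = SILENCE_TIMEOUT - 0.3   # speaker active: keep only a short silence tail
        elif energy < SILENCE_THRESHOLD:
            silence += 0.1                    # another ~0.1 s of silence
        if silence >= SILENCE_TIMEOUT:
            audio_recorder.stopMicrophonesRecording()
            return path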

2. Converting the recording to text

Once we have the recording, we use the Baidu speech recognition (ASR) API to turn the audio into text. The process is as follows.
(process diagram omitted)
You first need to apply for the speech recognition API; only after you have the APP ID, API Key, and Secret Key will this work.

  • Code
from aip import AipSpeech
APP_ID = '21xxxxx'
API_KEY = 'O0gzDUHKkciBa60Vxxxxx'
SECRET_KEY = 'Psji0dC90D1OehYh63ZaQuc7xxxxxxx'
client = AipSpeech(APP_ID, API_KEY, SECRET_KEY)

def listen():
    # Send the recorded wav file to Baidu ASR and return the recognized text.
    with open(record_path, 'rb') as fp:
        voices = fp.read()
    try:
        # dev_pid 1537 = Mandarin Chinese
        result = client.asr(voices, 'wav', 16000, {'dev_pid': 1537})
        result_text = result["result"][0]
        result_text = result_text.replace(',', '')
        result_text = result_text.replace('.', '')
        return result_text
    except KeyError:
        print("failed")

3. Sending the text to Huawei Cloud

Here the question is sent to the Huawei Cloud knowledge base to see whether a matching question can be found. For example, since our project is agriculture-related, I added agriculture-related knowledge to the Huawei Cloud knowledge base.

  • Huawei Cloud QA bot
    (console screenshots omitted)

  • Code
    To be added later; the code currently lives on the robot. A condensed sketch of the call flow appears after the next paragraph, and the full version is in the source in section 5.

Once you click in, you can see the knowledge base and add entries yourself. You can also add skills, but that requires being familiar with the Huawei Cloud bot, so I won't go into more detail here.
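
For reference, here is a condensed sketch of the call flow used in the full source (section 5): obtain an IAM token, open a QA-bot session, post the question, then delete the session. The region, headers, and body fields are the ones from the full source; the <project_id> and <qabot_id> placeholders must be replaced with your own values.

import json
import requests

IAM_URL   = 'https://iam.cn-north-4.myhuaweicloud.com/v3/auth/tokens'
QABOT_URL = 'https://cbs-ext.cn-north-4.myhuaweicloud.com/v1/<project_id>/qabots/<qabot_id>'

def get_token(name, password):
    # Password-scoped IAM token for the cn-north-4 project.
    body = {"auth": {"identity": {"methods": ["password"],
                                  "password": {"user": {"name": name,
                                                        "password": password,
                                                        "domain": {"name": name}}}},
                     "scope": {"project": {"name": "cn-north-4"}}}}
    res = requests.post(IAM_URL, data=json.dumps(body),
                        headers={'Content-Type': 'application/json;charset=utf8'})
    return res.headers['X-Subject-Token']

def ask(question, token):
    headers = {'Content-Type': 'application/json', 'X-Auth-Token': token}
    # Open a dialogue session, ask the question, then close the session again.
    session = requests.post(QABOT_URL + '/sessions', headers=headers).json()
    url = QABOT_URL + '/sessions/{}'.format(session['session_id'])
    body = {'question': question, 'top': '1', 'chat_enable': 'true'}
    answer = requests.post(url, data=json.dumps(body), headers=headers).json()
    requests.delete(url, headers=headers)
    return answer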

4. Passing the text to the Turing robot

If the Huawei Cloud knowledge base has no answer for the question, it is passed on to the Turing robot, whose small talk is a bit better than the Huawei Cloud bot's.
(Turing platform screenshots omitted)
After creating a bot you can also add questions there, though it is not as good as the Huawei Cloud knowledge base for that.

  • Code
class TuringChatMode(object):
    def __init__(self):
        self.turing_url = 'http://www.tuling123.com/openapi/api?'

    def botInteraction(self, text):
        # Build the GET request for the Turing open API.
        url_data = dict(
            key = 'e7ea86036040426e8a9d123176bfe12f',
            info = text,
            userid = 'yjc',
        )
        self.request = Request(self.turing_url + urlencode(url_data))
        try:
            w_data = urlopen(self.request)
        except URLError:
            raise Exception("No internet connection available to transfer txt data")
        except:
            raise KeyError("Server wouldn't respond (invalid key or quota has been maxed out)")
        response_text = w_data.read().decode('utf-8')
        json_result = json.loads(response_text)
        return json_result['text']

The reply that comes back can then be spoken aloud by the robot.
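
A quick usage sketch, assuming the TuringChatMode class above and the answerNao ALTextToSpeech service created in the full source:

turing = TuringChatMode()
reply = turing.botInteraction("你好")   # send the recognized text to the Turing API
answerNao.say(str(reply))               # speak the reply on the robot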

5. Full source code

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import argparse
import json
import os
import random
import re
import sys
import tempfile
import time
import wave

import paho.mqtt.client as mqtt
import qi
import requests
from aip import AipSpeech
from naoqi import ALProxy
from scipy.io import wavfile
from urllib import urlencode
from urllib2 import urlopen, Request, URLError

tts = audio = record = aup = None
record_path = '/home/nao/record.wav'

reload(sys)
sys.setdefaultencoding('utf-8')
APP_ID = '21715692'
API_KEY = 'O0gzDUHKkciBa60VddBgzuO1'
SECRET_KEY = 'Psji0dC90D1OehYh63ZaQuc7UPA8soxb'
username = 'h_y8689'
user_demain_id = '0a37c79c8300f3840f9cc0137d392600'
project_name = 'cn-north-4'
project_domain_id = '0a37c81fbe00f38b2f0ac0135b8e3f93'
password = 'hjy123456789'
client = AipSpeech(APP_ID, API_KEY, SECRET_KEY)
global flag_two
flag_two = 0
sys.path.append(os.path.abspath(os.path.dirname(__file__) + '/' + '..'))
sys.path.append("..")
TASK_TOPIC = 'test' 
client_id = time.strftime('%Y%m%d%H%M%S',time.localtime(time.time()))
mqtt_client = mqtt.Client(client_id, transport='tcp')  # renamed so it does not shadow the Baidu ASR client above
mqtt_client.connect("59.110.42.24", 1883, 60)
mqtt_client.loop_start()

def clicent_main(message):
   # Publish a control message (e.g. light on / light off) to the MQTT broker.
   time_now = time.strftime('%Y-%m-%d %H-%M-%S', time.localtime(time.time()))
   payload = {"msg": "%s" % message, "data": "%s" % time_now}
   # publish(topic, message payload)
   mqtt_client.publish(TASK_TOPIC, json.dumps(payload, ensure_ascii=False))
   print("Successful send message!")
   return True

class Audio:
   def __init__(self, audio_recorder, audio_device, answer_nao):
       self.audio_recorder = audio_recorder
       self.audio_device = audio_device
       self.answer_nao = answer_nao
       self.data_result = None
           
   def recorder(self):
       self.audio_recorder.stopMicrophonesRecording()
       time.sleep(0.2)
       energy = self.energy()
       print(energy['left'])
       self.audio_recorder.startMicrophonesRecording(record_path, "wav", 16000, (0,0,1,0))   
       print("record begin")
       wait = 0
       global flag
       flag = 0
       global flag_one
       flag_one = 0
       global flag_two  # declared global so the main loop can see the value set below
       while 1:
           energy = self.energy()
           time.sleep(0.1) 
           if energy['left'] < 400 and wait <5:
               print("no body:",energy['left'],float(wait))
               wait +=0.1
               

           elif energy['left'] >600:
               print("have people:",energy['left'],wait)
               wait = 4.8
               continue

           elif wait >= 5:
               print("record over ")
               self.audio_recorder.stopMicrophonesRecording()
               break
           else:
               continue

           if round(wait,1) == 4.0:
               print("int wait:",int(wait))
               self.answer_nao.say("你还有什么要说的吗,没有我要休眠咯")
               self.audio_recorder.stopMicrophonesRecording()
               time.sleep(1)
               self.audio_recorder.startMicrophonesRecording(record_path,"wav",16000,(0,0,1,0))
               wait = 2
               flag = 1
               while 1:
                   energy = self.energy()
                   time.sleep(0.1) 
                   if energy['left'] < 400 and wait <5:
                       print("no body:",energy['left'],float(wait))
                       wait +=0.1

                   elif energy['left'] >600:
                       print("have people:",energy['left'],wait)
                       wait = 4.9
                       continue
                   elif wait >= 5:
                       print("record over ")
                       self.audio_recorder.stopMicrophonesRecording()
                       break
                   else:
                       continue
                   if round(wait,1)==4.6:
                       self.answer_nao.say("期待与您再次相遇")
                       self.audio_recorder.stopMicrophonesRecording()
                       flag_two = 2
                       return
               msg = listen()
               msg = str(msg)
               print(msg)
               time.sleep(1)
               if "没" in msg:
                   self.answer_nao.say("拜拜")
                   time.sleep(2)
                   break
               elif msg == "None":
                   self.answer_nao.say("很高心跟您对话,期待再次与您相见")
                   time.sleep(2)
                   break
               elif msg == "":
                   self.answer_nao.say("期待再次和您相遇,再见")
                   time.sleep(2)
                   break
               else:
                   flag_one = 2
                   botMsg = turing.botInteraction(msg)  # turing is the global TuringChatMode instance
                   test = str(botMsg)
                   self.answer_nao.say(test)
                   time.sleep(0.5)
                   self.recorder()
           if flag == 1:
               break

   def energy(self):
       energy = dict()       
       energy['left'] = self.audio_device.getLeftMicEnergy()
       energy['right'] = self.audio_device.getRightMicEnergy()
       energy['front'] = self.audio_device.getFrontMicEnergy()
       energy['rear'] = self.audio_device.getRearMicEnergy()
       return energy



   def answer(self, answer_data):
       self.answer_nao.setLanguage("Chinese")
       self.answer_nao.say(answer_data)


def main(session):
   audioRecorder = session.service('ALAudioRecorder') 
   audioDevice = session.service('ALAudioDevice')
   answerNao = session.service("ALTextToSpeech") 
   audio = Audio(audioRecorder, audioDevice, answerNao)
   audio.recorder()

   try:
       pass
   except Exception, errorMsg:
       print str(errorMsg)
       exit()


class TuringChatMode(object):

   def __init__(self):
       self.turing_url = 'http://www.tuling123.com/openapi/api?'
   def botInteraction (self,text):
     
       url_data = dict(
           key = 'e7ea86036040426e8a9d123176bfe12f',
           info = text,
           userid = 'yjc',
       )
     
       self.request = Request(self.turing_url + urlencode(url_data))
       try:
           w_data = urlopen(self.request)
       except URLError:
           raise Exception("No internet connection available to transfer txt data")
       except:
           raise KeyError("Server wouldn't respond (invalid key or quota has been maxed out)")
       response_text = w_data.read().decode('utf-8')
       
       json_result = json.loads(response_text)
       return json_result['text']

def main(robot_IP, robot_PORT=9559):
   global tts, audio, record, aup 
   tts = ALProxy("ALTextToSpeech", robot_IP, robot_PORT)
   record = ALProxy("ALAudioRecorder", robot_IP, robot_PORT)
   aup = ALProxy("ALAudioPlayer", robot_IP, robot_PORT)
   print 'start recording...'
   record.startMicrophonesRecording(record_path, 'wav', 16000, (0,0,1,0))
   time.sleep(6)
   record.stopMicrophonesRecording()
   print 'record over'  
   
def huawei(msg):
   # Query the Huawei Cloud QA bot: get an IAM token, open a session, post the
   # question, speak the answer when the match score is high enough, then delete
   # the session. Sets the global a to 1 when the question should fall through
   # to the Turing robot instead.
   global a
   a = 0
   url1 = 'https://iam.cn-north-4.myhuaweicloud.com/v3/auth/tokens'
   header = {'Content-Type': 'application/json;charset=utf8'}
   data = {
       "auth": {
           "identity": {
               "methods": ["password"],
               "password": {
                   "user": {
                       "name": "h_y8689",
                       "password": "hjy123456789",
                       "domain": {"name": "h_y8689"}
                   }
               }
           },
           "scope": {"project": {"name": "cn-north-4"}}
       }
   }
   res1 = requests.post(url1, data=json.dumps(data), headers=header)
   token = res1.headers['X-Subject-Token']
   Request_Header = {
       'Content-Type': 'application/json',
       'X-Auth-Token': token
   }
   qabot_url = 'https://cbs-ext.cn-north-4.myhuaweicloud.com/v1/0a37c81fbe00f38b2f0ac0135b8e3f93/qabots/5c71f659-3bc3-4f4b-8b1c-4125fcff7233'
   # Open a dialogue session with the QA bot.
   res_2 = requests.post(qabot_url + '/sessions', headers=Request_Header)
   res_2 = json.loads(res_2.text)
   session_url = qabot_url + '/sessions/{}'.format(res_2['session_id'])

   def ques(que):
       body = {
           'question': que,
           'top': '1',
           'tag_ids': 'nao',
           'domain_ids': 'nao',
           'chat_enable': 'true'
       }
       res_4 = requests.post(session_url, data=json.dumps(body), headers=Request_Header)
       return json.loads(res_4.text)

   res_4 = ques(msg)
   print(res_4)
   if res_4['reply_type'] == 0:
       if float(res_4['qabot_answers']['answers'][0]['score']) < 0.8:
           # Match score too low: let the Turing robot answer instead.
           a = 1
           print(1)
       else:
           print("2")
           answerNao.say(res_4['qabot_answers']['answers'][0]['answer'])
   else:
       # No knowledge-base answer at all: fall back to the Turing robot.
       print(3)
       a = 1
   # Delete the session before returning.
   requests.delete(session_url, headers=Request_Header)
   return

   
def listen():
    # Send the recorded wav file to Baidu ASR and return the recognized text.
    with open(record_path, 'rb') as fp:
        voices = fp.read()
    try:
        # dev_pid 1537 = Mandarin Chinese
        result = client.asr(voices, 'wav', 16000, {'dev_pid': 1537})
        result_text = result["result"][0]
        result_text = result_text.replace(',', '')
        result_text = result_text.replace('.', '')
        return result_text
    except KeyError:
        print("failed")
   
if __name__ == "__main__":
   parser = argparse.ArgumentParser()
   parser.add_argument("--ip", type=str, default="192.168.1.89", help="Robot ip address")
   parser.add_argument("--port", type=int, default=9559, help="Robot port number")
   args = parser.parse_args()
   session = qi.Session()
   try:
       session.connect("tcp://" + args.ip + ":" + str(args.port))
   except RuntimeError:
       print("Can't connect to Naoqi at ip "" + args.ip + "" on port " + str(args.port) +
             "Please check your script arguments. Run with -h option for help.")
       sys.exit(1)
   turing = TuringChatMode()
   audioRecorder = session.service('ALAudioRecorder') 
   audioDevice = session.service('ALAudioDevice')
   answerNao = session.service("ALTextToSpeech") 
   audio = Audio(audioRecorder, audioDevice, answerNao)
   answerNao.setLanguage("Chinese") 
   print("enter xunhuan")
   while 1:
       energy = audio.energy()
       print(energy['left'])
       if energy['left']>2000:
           answerNao.say("你好,很高兴认识你")
           time.sleep(0.5)
           
           while 1:
               audio.recorder()
               msg = listen()
               msg = str(msg)
               if (len(msg) <= 1):
                   break
               if "拜" in msg:
                   answerNao.say("期待下次相遇")
                   time.sleep(1)
                   break
               if "再见" in msg:
                   answerNao.say("期待下次相遇")
                   time.sleep(1)
                   break
                if "开灯" in msg:
                    clicent_main("打开")
                if "关灯" in msg:
                    clicent_main("关灯")
               if flag == 1:
                   break
               if flag_two == 2:
                   break
               huawei(msg)
               print(a)
               if (a != 1):
                   continue
   
               botMsg = turing.botInteraction(msg)
               test = str(botMsg)
               answerNao.say(test)
               time.sleep(0.5)
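
To run the script (a sketch; the file name nao_dialog.py is just an example, and the IP must be your own robot's address):

python nao_dialog.py --ip 192.168.1.89 --port 9559

The robot then waits until the left-microphone energy exceeds 2000 (someone speaking loudly nearby), greets the speaker, and enters the record → recognize → Huawei Cloud → Turing → speak loop described above.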


Reposted from blog.csdn.net/qq_45125250/article/details/109287256