uni-app/Vue text-to-speech (with WeChat Mini Program speech recognition and reading)

There are many ways to implement voice broadcasting. Here I introduce implementations that do not rely on the APIs of Baidu, Alibaba, or iFlytek.

1. Using new SpeechSynthesisUtterance

Without further ado, here is the code:

data() {
	return {
		utterThis:null,
	}
},


// Usage inside a method

this.utterThis = new SpeechSynthesisUtterance('');
this.utterThis.pitch = 1; // pitch
this.utterThis.rate = 1; // speaking rate
this.utterThis.volume = 1; // volume
this.utterThis.lang = 'zh-CN';
this.utterThis.text = "the text to read aloud";
window.speechSynthesis.speak(this.utterThis); // start reading

Handle the end event:

this.utterThis.onend = () => { // end-of-speech event
	window.speechSynthesis.cancel(); // clean up the synthesis queue
}
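The two snippets above can be combined into a promise-based helper so callers can simply await the end of playback. This is only a sketch under the assumption that the code runs in a browser supporting the Web Speech API; `speakText` and `makeUtteranceOptions` are names of my own, not part of any library.

```javascript
// Defaults mirror the snippet above; callers may override any field.
function makeUtteranceOptions(overrides = {}) {
  return Object.assign({ pitch: 1, rate: 1, volume: 1, lang: 'zh-CN' }, overrides);
}

// Resolves when reading finishes; rejects if synthesis is unavailable or errors.
function speakText(text, overrides) {
  return new Promise((resolve, reject) => {
    if (typeof window === 'undefined' || !('speechSynthesis' in window)) {
      reject(new Error('Web Speech API not available'));
      return;
    }
    const utter = new SpeechSynthesisUtterance(text);
    Object.assign(utter, makeUtteranceOptions(overrides));
    utter.onend = () => {
      window.speechSynthesis.cancel(); // clean up, as in the onend handler above
      resolve();
    };
    utter.onerror = (e) => reject(e);
    window.speechSynthesis.speak(utter);
  });
}
```

Usage: `speakText('你好', { rate: 1.2 }).then(() => console.log('done'))`.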

2. Using the speak-tts plugin

1. Install speak-tts

npm install speak-tts

2. Use

import Speech from 'speak-tts'  // import the library


Call the initialization:
this.speechInit();


speechInit(){
	this.speech = new Speech();
	this.speech.setLanguage('zh-CN');
	this.speech.init().then(()=>{
		console.log('speech synthesis initialized')
	})
},


this.speech.speak({text:item.newsContent}).then(()=>{
	this.speech.cancel(); // called after playback ends
})
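One pitfall here is calling `speak()` before `init()` has resolved. A small buffering queue avoids that; this is a pattern of my own (`makeTtsQueue` is not part of the speak-tts API), shown as a sketch:

```javascript
// Buffer texts requested before init() resolves, then flush them once ready.
function makeTtsQueue(speech) {
  const pending = [];
  let ready = false;
  speech.init().then(() => {
    ready = true;
    // Flush everything queued while initialization was in flight.
    pending.splice(0).forEach((text) => speech.speak({ text }));
  });
  return {
    say(text) {
      if (ready) speech.speak({ text });
      else pending.push(text); // not initialized yet: queue for later
    },
  };
}
```

Usage: `const tts = makeTtsQueue(new Speech()); tts.say('你好');` — safe to call immediately after construction.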

3. WeChat Mini Programs can use the plugin provided by WeChat

1. Add the WechatSI plugin (apply for it in the WeChat Mini Program admin console).

2. Since I am using uni-app, add the plugin configuration to manifest.json:

"mp-weixin" : {
        "appid" : "your Mini Program appid",
        "setting" : {
            "urlCheck" : true,
            "es6" : true,
            "minified" : true,
            "postcss" : false
        },
        "optimization" : {
            "subPackages" : true
        },
        "usingComponents" : true,
        "plugins" : {
            "WechatSI" : {
                "version" : "0.3.5",
                "provider" : "the appid you just applied for"
            }
        }
    },

3. Use it in the project

// Conditional compilation: import the plugin

// #ifdef MP-WEIXIN
var plugin = requirePlugin("WechatSI")
let manager = plugin.getRecordRecognitionManager()
// #endif
// #ifdef MP-WEIXIN
let _this=this 
plugin.textToSpeech({
	lang: "zh_CN",
	tts: true,
	content: playword,
	success: function(res) {
		_this.src = res.filename // the successfully generated audio file
		_this.smallRoutine(item,playword,index);
	},
	fail: function(res) {}
})
// #endif


// Then use uni.createInnerAudioContext() to play the audio.
// If you are not familiar with it, see https://uniapp.dcloud.net.cn/api/media/audio-context.html#createinneraudiocontext
this.innerAudioContext = uni.createInnerAudioContext()
this.innerAudioContext.pause(); // stop any previous playback
this.innerAudioContext.volume = 1
this.innerAudioContext.src = this.src
this.innerAudioContext.play()
this.innerAudioContext.onEnded(()=>{
	// end-of-playback event: run your own logic here
})
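To synthesize and play several pieces of text one after another, it helps to wrap the callback-style plugin call in a promise. A sketch under my own naming (`synthesize` is a hypothetical helper; `plugin` is the WechatSI instance obtained above):

```javascript
// Wrap the callback-style textToSpeech call in a promise so chunks can be
// synthesized (and then played) sequentially with async/await.
function synthesize(plugin, content) {
  return new Promise((resolve, reject) => {
    plugin.textToSpeech({
      lang: 'zh_CN',
      tts: true,
      content,
      success: (res) => resolve(res.filename), // path of the generated audio
      fail: reject,
    });
  });
}
```

Usage: `const src = await synthesize(plugin, playword);` then assign `src` to `innerAudioContext.src` and call `play()`.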

4. If the synthesized audio cannot be played, look up the status code in the development documentation. (Sometimes the text is too long to synthesize and an error is reported. One approach is to split the text and synthesize it chunk by chunk.)

For example (not all of the code is shown):

let playword;
let strlength = this.contentTxt.length;
if (strlength > 200) {
	this.playAllNum = Math.ceil(strlength / 200); // total number of chunks to play
	playword = this.contentTxt.slice(0, 200); // first 200-character chunk
} else {
	this.playAllNum = 1;
	playword = this.contentTxt;
}
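The slicing above only extracts the first chunk; the same idea generalizes to a helper that splits the whole text into pieces of at most 200 characters up front (`chunkText` is a hypothetical name of my own):

```javascript
// Split text into fixed-size chunks so each synthesis request stays under
// the length limit; iterate over the result and synthesize piece by piece.
function chunkText(text, size = 200) {
  const chunks = [];
  for (let i = 0; i < text.length; i += size) {
    chunks.push(text.slice(i, i + size));
  }
  return chunks;
}
```

Then `chunkText(this.contentTxt).length` replaces the `Math.ceil(strlength / 200)` bookkeeping, and each element is handed to the synthesis call in turn.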


Origin blog.csdn.net/qq_42717015/article/details/131435881