Introduction to Neural Network Speech Synthesis Models

I have been working on speech synthesis projects recently and have studied several end-to-end neural network speech synthesis models, which I briefly introduce here. The main contents are as follows:

- Introduction to speech synthesis

- Linear spectrograms and mel spectrograms

- Tacotron

- Deep Voice 3

- Tacotron 2

- WaveNet

- Parallel WaveNet

- ClariNet

- Summary

 

Introduction to Speech Synthesis

Speech synthesis, or Text-to-Speech (TTS), converts a piece of text into a speech signal. Within the AI stack it bridges natural language processing and speech technology, and it plays a key role in speech-driven applications such as smart speakers, children's chatbots, and intelligent voice customer service.

Speech synthesis has been studied since personal computers became widespread in the 1980s. Classical approaches are mostly concatenative: recorded speech units are stitched together and prosodic parameters such as intonation, pausing, and stress are then adjusted. This requires knowledge of phonetics and acoustics, posing a high data and expertise barrier for algorithm engineers who, like us, come to the field from elsewhere. In March 2017, however, Google proposed the end-to-end Tacotron model [1], which lowered the barrier considerably: once audio is annotated with its text content, a seq2seq model can learn the mapping between text and speech spectrograms directly. A vocoder algorithm such as Griffin-Lim, WORLD, or WaveNet then converts the spectrogram into a waveform. This article introduces several mainstream deep neural network speech synthesis models.
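
To make the two-stage pipeline concrete, here is a minimal sketch (not from the original post) of the second stage: inverting a mel spectrogram back to a waveform with the Griffin-Lim algorithm, using the librosa library. In a real TTS system the mel spectrogram would be predicted by an acoustic model such as Tacotron; in this sketch it is computed from an existing recording ("sample.wav" is a placeholder path) to stand in for the model's output.

```python
# A rough sketch of the spectrogram-to-waveform stage described above,
# using librosa and Griffin-Lim. "sample.wav" is a placeholder path; in a
# real TTS system the mel spectrogram would come from an acoustic model
# such as Tacotron rather than from analyzing an existing recording.
import librosa
import soundfile as sf

# Load a reference waveform at a typical TTS sampling rate.
y, sr = librosa.load("sample.wav", sr=22050)

# Compute an 80-band mel spectrogram, standing in for the output of a
# text-to-spectrogram model.
mel = librosa.feature.melspectrogram(
    y=y, sr=sr, n_fft=1024, hop_length=256, n_mels=80
)

# Approximately invert the mel spectrogram to a linear-frequency STFT
# magnitude, then recover phase (and hence a waveform) with Griffin-Lim.
linear = librosa.feature.inverse.mel_to_stft(mel, sr=sr, n_fft=1024)
y_hat = librosa.griffinlim(linear, n_iter=60, hop_length=256)

sf.write("reconstructed.wav", y_hat, sr)
```

Because Griffin-Lim estimates phase purely by iteration, the reconstruction sounds audibly buzzier than the output of a neural vocoder such as WaveNet; this quality gap is part of the motivation for the vocoder models covered later in this series.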

 

The references used throughout this series are collected here:

References:

[1] Yuxuan Wang, RJ Skerry-Ryan, Daisy Stanton, Yonghui Wu, Ron Weiss, Navdeep Jaitly, Zongheng Yang, Ying Xiao, Zhifeng Chen, Samy Bengio, Quoc Le, Yannis Agiomyrgiannakis, Rob Clark, and Rif A. Saurous. Tacotron: Towards end-to-end speech synthesis. In Interspeech, 2017.

[2] https://haythamfayek.com/2016/04/21/speech-processing-for-machine-learning.html

[3] Sercan Arik, Mike Chrzanowski, Adam Coates, Gregory Diamos, Andrew Gibiansky, Yongguo Kang, Xian Li, John Miller, Jonathan Raiman, Shubho Sengupta, and Mohammad Shoeybi. Deep voice: Real-time neural text-to-speech. arXiv preprint arXiv:1702.07825, 2017.

[4] Jose Sotelo, Soroush Mehri, Kundan Kumar, João Felipe Santos, Kyle Kastner, Aaron Courville, and Yoshua Bengio. Char2Wav: End-to-end speech synthesis. In ICLR 2017 workshop submission, 2017.

[5] Jason Lee, Kyunghyun Cho, and Thomas Hofmann. Fully character-level neural machine translation without explicit segmentation. arXiv preprint arXiv:1610.03017, 2016.

[6] Sercan Ömer Arik, Mike Chrzanowski, Adam Coates, Gregory Frederick Diamos, Andrew Gibiansky, Yongguo Kang, Xian Li, John Miller, Andrew Y. Ng, Jonathan Raiman, Shubho Sengupta, Mohammad Shoeybi: Deep Voice: Real-time Neural Text-to-Speech. ICML 2017: 195-204

[7] Sercan Ömer Arik, Gregory F. Diamos, Andrew Gibiansky, John Miller, Kainan Peng, Wei Ping, Jonathan Raiman, Yanqi Zhou: Deep Voice 2: Multi-Speaker Neural Text-to-Speech. CoRR abs/1705.08947 (2017)

[8] Wei Ping, Kainan Peng, Andrew Gibiansky, Sercan Ö. Arık, Ajay Kannan, Sharan Narang, Jonathan Raiman, John Miller: Deep Voice 3: 2000-Speaker Neural Text-to-Speech. CoRR abs/1710.07654 (2017)

[9] Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N. Dauphin. Convolutional sequence to sequence learning. arXiv preprint arXiv:1705.03122v2, 2017.

[10] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention Is All You Need. arXiv preprint arXiv:1706.03762, 2017.

[11] J. Shen, R. Pang, R. J. Weiss, M. Schuster, N. Jaitly, Z. Yang, Z. Chen, Y. Zhang, Y. Wang, R. Skerry-Ryan, et al. Natural TTS synthesis by conditioning WaveNet on mel spectrogram predictions. In ICASSP, 2018.

[12] A. v. d. Oord, S. Dieleman, H. Zen, K. Simonyan, O. Vinyals, A. Graves, N. Kalchbrenner, A. Senior, and K. Kavukcuoglu. WaveNet: A generative model for raw audio. arXiv preprint arXiv:1609.03499, 2016.

[13] van den Oord, Aaron, Kalchbrenner, Nal, Vinyals, Oriol, Espeholt, Lasse, Graves, Alex, and Kavukcuoglu, Koray. Conditional image generation with PixelCNN decoders. CoRR, abs/1606.05328, 2016

[14] https://github.com/buriburisuri/speech-to-text-wavenet

[15] Tom Le Paine, Pooya Khorrami, Shiyu Chang, Yang Zhang, Prajit Ramachandran, Mark A. Hasegawa-Johnson, and Thomas S. Huang. Fast wavenet generation algorithm. CoRR, abs/1611.09482, 2016.

[16] https://devblogs.nvidia.com/nv-wavenet-gpu-speech-synthesis/

[17] A. v. d. Oord, Y. Li, I. Babuschkin, K. Simonyan, O. Vinyals, K. Kavukcuoglu, G. v. d. Driessche, E. Lockhart, L. C. Cobo, F. Stimberg, et al. Parallel WaveNet: Fast high-fidelity speech synthesis. In ICML, 2018.

[18] Diederik P Kingma, Tim Salimans, and Max Welling. Improving variational inference with inverse autoregressive flow. arXiv preprint arXiv:1606.04934, 2016.

