Improvements to the Transformer Model: BERT

References:

[1] Karim Ahmed, Nitish Shirish Keskar, and Richard Socher. Weighted transformer network for machine translation. arXiv preprint arXiv:1711.02132, 2017.

[2] Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. Self-attention with relative position representations. arXiv preprint arXiv:1803.02155, 2018.

[3] http://www.sohu.com/a/234238473_129720

[4] https://baijiahao.baidu.com/s?id=1601234081544356769&wfr=spider&for=pc


[5] Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. Improving language understanding with unsupervised learning. Technical report, OpenAI, 2018.

[6] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018.

[7] Matthew Peters, Waleed Ammar, Chandra Bhagavatula, and Russell Power. Semi-supervised sequence tagging with bidirectional language models. In ACL, 2017.


Reposted from blog.csdn.net/mudongcd0419/article/details/83821168