Author: Zhong Chao, Alibaba Group Taobao Team