This repository records the code I reproduced during my own learning process.
Original paper (published August 2022):
Physics-Informed Attention Temporal Convolutional Network for EEG-Based Motor Imagery Classification
Journal: IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS (impact factor 11.6+)
Summary
Abstract: The brain-computer interface (BCI) is a cutting-edge technology that has the potential to change the world. Electroencephalogram (EEG) motor imagery (MI) signal has been used extensively in many BCI applications to assist disabled people, control devices or environments, and even augment human capabilities. However, the limited performance of brain signal decoding is restricting the broad growth of the BCI industry. In this article, we propose an attention-based temporal convolutional network (ATCNet) for EEG-based motor imagery classification. The ATCNet model utilizes multiple techniques to boost the performance of MI classification with a relatively small number of parameters. ATCNet employs scientific machine learning to design a domain-specific deep learning model with interpretable and explainable features, multihead self-attention to highlight the most valuable features in MI-EEG data, temporal convolutional network to extract high-level temporal features, and convolutional-based sliding window to augment the MI-EEG data efficiently. The proposed model outperforms the current state-of-the-art techniques in the BCI Competition IV-2a dataset with an accuracy of 85.38% and 70.97% for the subject-dependent and subject-independent modes, respectively.
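The sliding-window augmentation mentioned in the abstract can be sketched in plain NumPy (a minimal illustration under stated assumptions, not the authors' implementation: the window count and length here are hypothetical, and ATCNet actually realizes the windows inside the network via convolutions):

```python
import numpy as np

def sliding_windows(trial, n_windows=5, window_len=1000):
    """Split one MI-EEG trial (channels x time) into overlapping
    time windows, each treated as an extra training example.

    trial:   array of shape (n_channels, n_timepoints)
    returns: array of shape (n_windows, n_channels, window_len)
    """
    n_channels, n_time = trial.shape
    # Evenly spaced start indices covering the whole trial.
    stride = (n_time - window_len) // (n_windows - 1)
    starts = [i * stride for i in range(n_windows)]
    return np.stack([trial[:, s:s + window_len] for s in starts])

# A BCI IV-2a-like trial: 22 EEG channels, 1125 time points (4.5 s at 250 Hz).
trial = np.random.randn(22, 1125)
windows = sliding_windows(trial, n_windows=5, window_len=1000)
print(windows.shape)  # (5, 22, 1000)
```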
Technology Roadmap
Lab environment
CUDA = 11.0
cuDNN = 8.0
python = 3.7
tensorflow = 2.4.0
Experimental results
Subject  best_run    acc     kappa    avg_acc            avg_kappa
1        1         0.9149   0.8866   0.8629 +- 0.0124   0.8171 +- 0.0166
2        1         0.8090   0.7454   0.6448 +- 0.0326   0.5264 +- 0.0435
3        1         0.9861   0.9815   0.9410 +- 0.0143   0.9213 +- 0.0191
4        1         0.8264   0.7685   0.7677 +- 0.0258   0.6903 +- 0.0344
5        1         0.8681   0.8241   0.8017 +- 0.0142   0.7356 +- 0.0189
6        5         0.8194   0.7593   0.7118 +- 0.0170   0.6157 +- 0.0227
7        10        0.7257   0.6343   0.8986 +- 0.0296   0.8648 +- 0.0394
8        7         0.9271   0.9028   0.8722 +- 0.0133   0.8296 +- 0.0178
9        1         0.9080   0.8773   0.8778 +- 0.0186   0.8370 +- 0.0247

Average of 9 subjects - best runs: Accuracy = 0.8650, Kappa = 0.8200
Average of 9 subjects x 10 runs (90 experiments): Accuracy = 0.8198, Kappa = 0.7598
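As a sanity check, the best-run averages can be recomputed directly from the per-subject numbers, and each reported kappa agrees closely with the value implied by a balanced 4-class task, where Cohen's kappa approximately reduces to (acc - 0.25) / 0.75 (this is a quick consistency check, not part of the original evaluation code):

```python
# Best-run accuracy and kappa per subject, copied from the table above.
acc   = [0.9149, 0.8090, 0.9861, 0.8264, 0.8681, 0.8194, 0.7257, 0.9271, 0.9080]
kappa = [0.8866, 0.7454, 0.9815, 0.7685, 0.8241, 0.7593, 0.6343, 0.9028, 0.8773]

print(round(sum(acc) / 9, 4))    # 0.8650
print(round(sum(kappa) / 9, 4))  # 0.8200

# With 4 balanced classes (chance level 0.25), Cohen's kappa is
# approximately (acc - 0.25) / 0.75; every reported pair matches closely.
for a, k in zip(acc, kappa):
    assert abs((a - 0.25) / 0.75 - k) < 2e-3
```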
Done.