abstract
- This study proposes Learnable Gramian Angular Fields (LGAFs) to improve Brain-Computer Interface (BCI) performance in visual-motor intention classification. We pair LGAFs with a convolutional neural network (CNN), achieving 93.06% accuracy across six hand movements. The architecture transforms time-series EEG data into optimized two-dimensional representations while exploiting the efficiency of CNNs for real-time applications. Through experiments on four subjects, we validate the model's ability to maintain consistent classification accuracy across movement types. The results demonstrate the high performance and computational efficiency of the LGAF-CNN combination, establishing its potential for practical BCI applications. Ablation studies and performance analysis further show that the architecture achieves comparable accuracy with only three optimally selected channels. © 2025 IEEE.
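To make the time-series-to-image transformation concrete, below is a minimal sketch of the standard (non-learnable) Gramian Angular Field encoding that LGAFs build on: each sample is rescaled to [-1, 1], mapped to an angle via arccos, and pairwise angle sums (GASF) or differences (GADF) form the 2D image. The learnable parameterization described in the abstract is the paper's contribution and is not reproduced here; function and argument names are illustrative.

```python
import numpy as np

def gramian_angular_field(x, kind="summation"):
    """Encode a 1D series as a Gramian Angular Field image (hypothetical helper)."""
    x = np.asarray(x, dtype=float)
    # Rescale to [-1, 1] so the arccos angular encoding is defined.
    x = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0
    x = np.clip(x, -1.0, 1.0)
    phi = np.arccos(x)  # polar-angle encoding of each sample
    if kind == "summation":
        # GASF[i, j] = cos(phi_i + phi_j)
        return np.cos(phi[:, None] + phi[None, :])
    # GADF[i, j] = sin(phi_i - phi_j)
    return np.sin(phi[:, None] - phi[None, :])
```

Applied per EEG channel, this yields an image per channel that a standard 2D CNN can consume; the GASF of an N-sample window is an N-by-N symmetric matrix.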