Brain Computer Interface for Speech Synthesis Based on Multilayer Differential Neural Networks

abstract

  • © 2021 Taylor & Francis Group, LLC. This manuscript proposes the design of a speech synthesis algorithm based on measured electroencephalographic (EEG) signals classified by a class of neural network with continuous dynamics. A novel multilayer differential neural network (MDNN) classifies a database containing the EEG studies of 20 volunteers. The database consists of input-output pairs corresponding to EEG signals and the word imagined by the volunteer. The suggested MDNN estimates the unknown relationship between these information instances and proposes the most likely word that the user wants to mention. The proposed MDNN classifies with over 95% accuracy a set of words obtained from the suggested EEG study, in which the users watch four different geometric figures on a screen.
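The abstract does not give the MDNN equations, but a neural network with continuous dynamics of this general kind can be sketched as a state that evolves under an ordinary differential equation driven by the EEG input, with a readout layer selecting the most likely word. The sketch below is a toy illustration under assumed dimensions, untrained random weights, and simple Hopfield-style dynamics integrated with the Euler method; it is not the authors' model.

```python
import numpy as np

def dnn_forward(u, W_in, W_rec, A, dt=0.01, steps=200):
    """Integrate simple differential-neuron dynamics
    dx/dt = -A x + W_rec tanh(x) + W_in u  via the Euler method."""
    x = np.zeros(W_rec.shape[0])
    for _ in range(steps):
        x = x + dt * (-A @ x + W_rec @ np.tanh(x) + W_in @ u)
    return x

rng = np.random.default_rng(0)
# Assumed sizes: 8 EEG channels, 16 hidden neurons, 4 candidate words
# (matching the four geometric figures shown to the volunteers).
n_channels, n_hidden, n_words = 8, 16, 4
W_in = rng.normal(scale=0.5, size=(n_hidden, n_channels))   # input weights
W_rec = rng.normal(scale=0.1, size=(n_hidden, n_hidden))    # recurrent weights
A = np.eye(n_hidden)                                        # stabilizing leak term
W_out = rng.normal(size=(n_words, n_hidden))                # readout (would be trained)

eeg_window = rng.normal(size=n_channels)   # stand-in for one EEG feature vector
state = dnn_forward(eeg_window, W_in, W_rec, A)
scores = W_out @ state
predicted_word = int(np.argmax(scores))    # index of the most likely word
print(predicted_word)
```

In the actual system the weights would be learned from the volunteers' input-output pairs; here they are random, so only the pipeline shape (EEG features in, one of four word labels out) is meaningful.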

publication date

  • February 1, 2022