Severe paralysis resulting from medical conditions such as neurodegenerative diseases and stroke can lead to speech loss. Recent advances in brain-computer interfaces demonstrate the possibility of restoring speech by decoding articulatory movements from brain activity in participants who cannot speak intelligibly [1]. However, for individuals with vocal-tract paralysis, it is difficult to learn a mapping from brain data to articulatory movements without transferring from models trained on healthy individuals. Here, we propose a novel framework for 1) extracting articulatory features of spoken words shared across healthy speakers and 2) mapping them to the brain activity of individuals for whom no articulatory movement data are available. Our findings show promising results for developing word decoding models for individuals with vocal-tract paralysis using group-level articulatory features derived from healthy speakers.
Ruoling Wu, Julia Berezutskaya, Elena C. Offenberg, Nick Ramsey
Radboud University