Identifying input features for development of real-time translation of neural signals to text

Janaki Sheth, Ariel Tankus, Michelle Tran, Lindy Comstock, Itzhak Fried, William Speier

Research output: Contribution to journal › Conference article › peer-review

Abstract

One of the main goals in Brain-Computer Interface (BCI) research is to help patients whose communication abilities falter due to neurodegenerative diseases produce text or speech output from their neural recordings. However, practical implementation of such a system has proven difficult due to limitations in the speed, accuracy, and training time of existing interfaces. In this paper, we contribute to this endeavour by isolating appropriate input features from speech-producing neural signals to feed into a machine learning classifier that identifies target phonemes. Analysing data from six subjects, we discern frequency bands that encapsulate differential information about the production of vowels and consonants broadly, and of nasals and semivowels more specifically. A subsequent spatial localization analysis reveals the underlying cortical regions responsible for different phoneme categories. These anatomical locations, along with their respective frequency bands, act as prospective feature sets for machine learning classifiers. We demonstrate this classification ability in a preliminary language reconstruction task, achieving an average word classification accuracy of 30.6% (p < 0.001).
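
To make the pipeline described in the abstract concrete, the sketch below (Python; not the authors' code) illustrates the general approach: band-limited spectral power features are computed per channel from multichannel neural recordings and fed to a simple classifier. The sampling rate, channel count, band edges, and synthetic data are all illustrative assumptions, not the paper's actual configuration.

    # Minimal sketch of bandpower feature extraction + phoneme-category
    # classification. All parameters and data here are assumptions for
    # illustration only.
    import numpy as np
    from scipy.signal import welch
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    FS = 1000                          # sampling rate in Hz (assumed)
    BANDS = {                          # candidate bands (illustrative)
        "theta": (4, 8),
        "beta": (12, 30),
        "high_gamma": (70, 150),
    }

    def bandpower_features(trial, fs=FS, bands=BANDS):
        # trial: (n_channels, n_samples) raw signal for one utterance.
        # Returns mean spectral power per (channel, band), flattened.
        freqs, psd = welch(trial, fs=fs,
                           nperseg=min(256, trial.shape[-1]), axis=-1)
        feats = [psd[:, (freqs >= lo) & (freqs < hi)].mean(axis=-1)
                 for lo, hi in bands.values()]
        return np.concatenate(feats)

    # Synthetic stand-in: 120 trials, 16 channels, 1 s each, with binary
    # labels (e.g. vowel vs. consonant); real recordings would replace this.
    rng = np.random.default_rng(0)
    X_raw = rng.standard_normal((120, 16, FS))
    y = rng.integers(0, 2, size=120)

    X = np.stack([bandpower_features(t) for t in X_raw])
    clf = LogisticRegression(max_iter=1000)
    print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())

In the study itself, the feature set would be restricted to the frequency bands and cortical sites identified in the analysis, and the classifier would target phoneme categories rather than a binary label.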

Funding

Funders: NVIDIA

Keywords

• Brain-computer interface
• Neural signal frequency bands
• Speech production
