Abstract

The brain-computer interface (BCI) has emerged as a promising field of research with the capacity to significantly transform multiple technological sectors. The imagining of limb movement, known as motor imagery (MI), serves as an auspicious control paradigm for such a device. Masked by volume conduction, electroencephalography (EEG) MI signals exhibit a low signal-to-noise ratio. Additionally, the underlying spatio-temporal pattern manifests spectrally as suppression of the $\mu$ band and amplification of the $\beta$ band during sensorimotor cortical activity. These challenges necessitate further advancements in pattern recognition and feature extraction by way of machine learning. This thesis introduces supervised projective learning for EEG analysis (SPLEEGA), a novel bottom-up approach to the artifact removal and single-channel classification of EEG MI signals. The underlying engine of SPLEEGA is supervised projective learning with orthogonal completeness (SPLOC). For each electrode, a SPLEEGA eigenchannel provides a characteristic vector space that discriminates MI from resting-state signals in both the $\mu$ and $\beta$ frequency bands while simultaneously performing temporal alignment as an emergent property of the developed orthonormal basis. The 52-subject GigaDB MI dataset from the Gwangju Institute of Science and Technology was utilized in this work. In a sparse-sensor configuration, complete separation of MI signals from resting-state signals is achieved for 73\% of subjects; with a contralateral montage, this increases to 100\% of subjects, each revealing at least two discriminatory eigenchannels. Using only 7\% of the data for training during an automatic channel-selection procedure, SPLEEGA attains MI classification accuracy comparable to the state of the art with only a single electrode and frequency band. Furthermore, its utility for real-time applications is encouraging: classification takes under 100 ms once an initial calibration has been executed.
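
As a rough illustration of the general class of techniques the abstract describes, and not the thesis's SPLOC/SPLEEGA algorithm itself, the following minimal Python sketch discriminates single-channel MI segments from resting-state segments by band-pass filtering into an assumed $\mu$ band, delay-embedding the channel, and contrasting the class covariances through a CSP-style generalized eigendecomposition. The sampling rate, band edges, embedding depth, classifier, and synthetic trial data are all illustrative assumptions.

"""
Hypothetical sketch of supervised projective discrimination of single-channel
EEG motor imagery (MI) vs. resting state. Not the SPLOC/SPLEEGA method of the
thesis; parameters and data below are placeholders.
"""
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.linalg import eigh
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 512  # assumed sampling rate in Hz

def bandpass(x, lo, hi, fs=FS, order=4):
    # Zero-phase band-pass filter of a single-channel segment.
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x, axis=-1)

def delay_embed(x, depth=16):
    # Stack time-lagged copies of the channel into a (depth x T) matrix.
    T = x.shape[-1] - depth + 1
    return np.stack([x[..., k:k + T] for k in range(depth)], axis=-2)

def fit_projection(mi_trials, rest_trials, band=(8, 13)):
    # Generalized eigenvectors contrasting MI and rest covariances.
    def class_cov(trials):
        covs = []
        for x in trials:
            e = delay_embed(bandpass(x, *band))
            covs.append(e @ e.T / e.shape[-1])
        return np.mean(covs, axis=0)
    c_mi, c_rest = class_cov(mi_trials), class_cov(rest_trials)
    _, w = eigh(c_mi, c_mi + c_rest)  # solves c_mi w = lambda (c_mi + c_rest) w
    return w[:, [0, -1]]  # most rest-like and most MI-like directions

def features(trials, w, band=(8, 13)):
    # Log-variance of each trial after projection onto the learned directions.
    feats = []
    for x in trials:
        e = delay_embed(bandpass(x, *band))
        feats.append(np.log(np.var(w.T @ e, axis=-1)))
    return np.array(feats)

# Usage with synthetic placeholder trials of shape (n_trials, n_samples):
rng = np.random.default_rng(0)
mi, rest = rng.standard_normal((40, 3 * FS)), rng.standard_normal((40, 3 * FS))
w = fit_projection(mi[:20], rest[:20])
clf = LinearDiscriminantAnalysis().fit(
    np.vstack([features(mi[:20], w), features(rest[:20], w)]),
    np.r_[np.ones(20), np.zeros(20)],
)
print(clf.score(np.vstack([features(mi[20:], w), features(rest[20:], w)]),
                np.r_[np.ones(20), np.zeros(20)]))

The generalized eigendecomposition here stands in for the supervised projection step only in spirit; the thesis's eigenchannels, orthogonal completeness, and temporal-alignment properties are specific to SPLOC and are not reproduced by this sketch.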
