Publication Details
Arnau Dillen
 

Thesis

Abstract 

Since the inception of digital computers, providing intuitive ways to interact with devices has been a pivotal research area. A well-designed user interface allows users to achieve their objectives effectively and efficiently while ensuring a pleasant experience. These attributes define usability, which is significantly influenced by the deployment environment and the target user demographic. Ensuring a diverse range of interaction modalities is essential for inclusive device usability across various conditions.

Brain-computer interfaces (BCIs) have emerged as revolutionary technologies enabling interaction with devices through neural signals. BCIs hold promise for enhancing device interaction for individuals with paralysis and facilitating applications that support them in their daily lives. This could greatly enhance patients' autonomy and subsequently improve their quality of life.

This research project centers on developing a proof-of-concept software application using off-the-shelf hardware to control a robotic arm with a BCI, aimed at assisting individuals with paralysis in their daily activities. The BCI control system decodes user intentions from electroencephalogram (EEG) signals to execute device commands. The core research question addresses the optimal design of a BCI control system for practical applications involving human-robot collaboration.

To achieve this, the research established several key scientific objectives. The first objective (RO1) was to develop a real-time motor imagery (MI) decoding strategy that ensures fast decoding, minimal computational cost, and low calibration time. The second objective (RO2) involved designing and implementing a control system that addresses low MI decoding accuracy in real-time settings while enhancing the user experience.
Lastly, developing an evaluation procedure (RO3) was crucial to objectively and subjectively quantify system performance and inform design improvements.

Chapter 2 presents a literature review identifying critical issues such as the prevalence of offline decoding for performance assessment and the lack of standardized evaluation procedures for BCI prototypes. It also highlights the limitations of using deep learning for MI decoding from EEG data. These findings prompted us to focus our efforts on off-the-shelf machine learning methods for EEG decoding.

Initial development involved benchmarking various EEG decoding pipelines for lower-limb neuroprosthesis control, revealing that while customization for each user yielded optimal results, standard common spatial patterns (CSP) and linear discriminant analysis (LDA) pipelines were more practical (Chapter 3).

Chapter 4 investigates the possibility of reducing the number of sensors used for MI decoding. This was achieved by acquiring an MI dataset with a 64-channel EEG device and benchmarking the decoding performance when using only a subset of the available sensors. The results demonstrated that reliable MI decoding can be achieved with just eight appropriately placed sensors. This shows the feasibility of using low-density EEG devices with fewer than 32 electrodes, thereby reducing both the monetary and computational costs of an MI BCI system.

Chapter 5 outlines a comprehensive framework for evaluating BCI control systems through quantitative measures, ensuring iterative software improvements and adequate participant training.

In Chapter 6, an augmented reality (AR) control system design is described, integrating visual feedback with real-world overlays via a shared control approach.
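The CSP-plus-LDA approach described above can be sketched in a few lines. The following is a minimal illustrative implementation on synthetic two-class data, not the thesis code; the helper names (csp_filters, log_var_features) and all parameters are our own assumptions.

```python
# Minimal two-class CSP + LDA motor imagery decoding sketch.
# Illustrative only: synthetic EEG-like data, not the thesis pipeline.
import numpy as np
from scipy.linalg import eigh
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
n_epochs, n_channels, n_times = 100, 8, 250  # e.g. 8 sensors, 1 s at 250 Hz

# Synthetic data: each class has elevated variance on a different channel,
# loosely mimicking lateralized motor-imagery activity.
X = rng.standard_normal((n_epochs, n_channels, n_times))
y = np.repeat([0, 1], n_epochs // 2)
X[y == 0, 0, :] *= 3.0  # class 0: strong activity on channel 0
X[y == 1, 1, :] *= 3.0  # class 1: strong activity on channel 1

def csp_filters(X, y, n_components=4):
    """Spatial filters from the generalized eigenproblem C0 w = l (C0 + C1) w."""
    covs = []
    for c in (0, 1):
        Xc = X[y == c]
        C = np.mean([e @ e.T / np.trace(e @ e.T) for e in Xc], axis=0)
        covs.append(C)
    evals, evecs = eigh(covs[0], covs[0] + covs[1])
    order = np.argsort(evals)  # extremes are most discriminative
    pick = np.r_[order[:n_components // 2], order[-n_components // 2:]]
    return evecs[:, pick].T    # shape: (n_components, n_channels)

def log_var_features(X, W):
    """Log-variance of the CSP-projected signals, one row per epoch."""
    Z = np.einsum('kc,ect->ekt', W, X)
    var = Z.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True))

# Fit CSP on training epochs only, then classify log-variance features with LDA.
idx = rng.permutation(n_epochs)
train, test = idx[:70], idx[70:]
W = csp_filters(X[train], y[train])
clf = LinearDiscriminantAnalysis().fit(log_var_features(X[train], W), y[train])
acc = clf.score(log_var_features(X[test], W), y[test])
print(f"test accuracy: {acc:.2f}")
```

The same loop, run over different channel subsets of a 64-channel recording, is one plausible way to benchmark decoding performance against sensor count as done in Chapter 4.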
This system uses eye tracking for object selection and computer vision for spatial awareness, with users selecting actions through MI.

Chapter 7 details a user study evaluating the developed BCI control system, comparing it to an eye-tracking-only control system. While eye tracking outperformed the BCI system, the study affirmed the feasibility of the BCI design for real-world applications with potential enhancements.

Key findings include:

• Only eight well-placed EEG sensors are needed to achieve adequate decoding performance. A reduction from 64 to 8 sensors resulted in a non-significant decrease in decoding accuracy (p = 0.18), from 0.67 to 0.65. This demonstrates the potential to minimize computational and financial costs by using a small number of sensors for MI decoding.

• A shared control design informed by real-world context simplifies BCI decoding, and AR integration enhances the user interface. With our design, only two MI classes suffice to obtain a success rate of 0.83 on evaluation tasks.

• Although eye tracking outperforms current BCI systems, BCIs are feasible for real-world use. In particular, efficiency, measured by evaluation task completion time, is significantly higher (p < 0.001) when using the eye-tracking-based control system variant.

• Consumer-grade EEG devices are viable for EEG acquisition in BCI control systems when using our control system design. All participants who used the commercial EEG device for the user study were able to complete the evaluation tasks. This enables a further reduction in monetary and computational cost beyond what can be achieved by reducing the number of employed sensors.

Since the current control system uses basic EEG decoding methods, future research should focus on integrating advanced EEG decoding methods such as deep learning, transfer learning, and continual learning.
Gamifying the calibration procedure may yield better training data and make the control system more attractive to potential users. Additionally, closer hardware-software integration through embedded decoding and built-in sensors in AR headsets should lead to a consumer-ready BCI control system for diverse applications.
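The paired comparisons behind the p-values reported above can be illustrated as follows. The numbers here are fabricated placeholders, and the choice of tests (a paired t-test and a Wilcoxon signed-rank test) is our assumption, since the abstract does not name the statistical procedures used.

```python
# Sketch of paired per-participant comparisons like those reported in the
# key findings. All data below are illustrative placeholders, not thesis data.
import numpy as np
from scipy.stats import ttest_rel, wilcoxon

rng = np.random.default_rng(1)
n_participants = 12  # assumed sample size, for illustration only

# Simulated per-participant decoding accuracies for the 64-channel vs
# 8-channel setups, with a small mean difference (cf. the reported 0.67 vs 0.65).
acc_64 = np.clip(rng.normal(0.67, 0.06, n_participants), 0.0, 1.0)
acc_8 = np.clip(acc_64 - rng.normal(0.02, 0.04, n_participants), 0.0, 1.0)
t_stat, p_channels = ttest_rel(acc_64, acc_8)

# Simulated task completion times (s) for the eye-tracking vs BCI variants,
# where eye tracking is clearly faster (cf. the reported p < 0.001).
time_eye = rng.normal(15.0, 3.0, n_participants)
time_bci = time_eye + rng.normal(20.0, 5.0, n_participants)
w_stat, p_time = wilcoxon(time_eye, time_bci)

print(f"accuracy, 64 vs 8 channels: p = {p_channels:.3f}")
print(f"completion time, eye tracking vs BCI: p = {p_time:.4f}")
```

A paired design is the natural choice here because each participant is measured under both conditions, so between-subject variability cancels out of the comparison.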
