Files in this item

ECE499-Fa2015-choudhary.pdf (application/pdf, 1 MB) — Restricted to U of Illinois
Title: A Comparison of Speech, Touch, and SSVEP-Based BCI Inputs for Head-Mounted Displays
Author(s): Choudhary, Ojasvi
Contributor(s): Bretl, Timothy
Subject(s): head-mounted display; brain-computer interface; steady-state visual evoked potential; augmented reality
Abstract: We evaluated steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs) as an input mechanism for head-mounted displays (HMDs). Our evaluation compared the performance of three input mechanisms on a Google Glass (speech recognition, touch gestures, and SSVEP) with an SSVEP-based BCI on a desktop monitor. The results of this comparison study show that the SSVEP-based BCI on a desktop monitor can classify input commands with greater than 98% accuracy in an average of 1.23 seconds. Neither speech recognition nor touch gestures were found to be significantly faster. While the SSVEP-based BCI on a Google Glass was significantly slower than both the desktop SSVEP-based BCI and touch gestures, it still achieved greater than 94% accuracy after 2.2 seconds. Our results show that SSVEP-based BCIs may provide an attractive input mechanism for HMDs and, in particular, suggest that there may be conditions under which SSVEP-based BCIs are comparable in performance to existing HMD input mechanisms.
Issue Date: 2015-12
Date Available in IDEALS: 2016-02-18