Motor-imagery tasks are a popular input method for controlling brain-computer interfaces (BCIs), partially due to their similarities to naturally produced motor signals. The use of functional near-infrared spectroscopy (fNIRS) in BCIs is still emerging and has shown potential as a supplement or replacement for electroencephalography. However, studies often use only two or three motor-imagery tasks, limiting the number of available commands. In this work, we present the results of the first four-class motor-imagery-based online fNIRS-BCI for robot control. Thirteen participants utilized upper- and lower-limb motor-imagery tasks (left hand, right hand, left foot, and right foot) that were mapped to four high-level commands (turn left, turn right, move forward, and move backward) to control the navigation of a simulated or real robot. A significant improvement in classification accuracy was found between the virtual-robot-based BCI (control of a virtual robot) and the physical-robot BCI (control of the DARwIn-OP humanoid robot). Differences were also found in the oxygenated hemoglobin activation patterns of the four tasks between the first and second BCI. These results corroborate previous findings that motor imagery can be improved with feedback and imply that a four-class motor-imagery-based fNIRS-BCI could be feasible with sufficient subject training.