This paper presents a comprehensive investigation into the use of electroencephalography (EEG) signals for classifying music stimuli with an artificial neural network (ANN). Using the 16-channel OpenBCI CytonDaisy sensor, EEG data were recorded from participants while they listened to a variety of music tracks. The study examines the impact of time window length on classification accuracy, evaluates the network’s performance with different time- and frequency-domain features, analyzes the influence of diverse music on brain activity patterns, and shows how songs of different styles affect individual subjects differently. For the five subjects involved in the study, the model’s recognition rate ranged from 61% to 96%. The findings indicate that longer time windows, particularly 30 s, yield the highest classification accuracy. Despite these relatively high recognition rates, the study also highlights intra-individual variability: performance declines substantially when the model is tested on data recorded from the same subject on a different day, underscoring the challenges posed by inter-session variability.
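To make the described pipeline concrete, the following is a minimal sketch, not the authors’ implementation: it segments multi-channel EEG into fixed-length windows, extracts per-channel band powers as frequency-domain features, and trains a small neural-network classifier. The 125 Hz sampling rate (the nominal Cyton + Daisy rate), the band edges, the network size, and the synthetic data standing in for recorded EEG are all assumptions for illustration.

```python
import numpy as np
from scipy.signal import welch
from sklearn.neural_network import MLPClassifier

FS = 125            # Hz; assumed Cyton + Daisy sampling rate
N_CHANNELS = 16     # channel count of the CytonDaisy setup
WINDOW_S = 30       # the window length the study found most accurate

# Assumed standard EEG band edges (Hz)
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_power_features(window):
    """window: (n_channels, n_samples) -> flat vector of band powers."""
    feats = []
    for ch in window:
        f, pxx = welch(ch, fs=FS, nperseg=FS * 2)   # Welch PSD estimate
        for lo, hi in BANDS.values():
            mask = (f >= lo) & (f < hi)
            feats.append(np.trapz(pxx[mask], f[mask]))  # integrate PSD over band
    return np.asarray(feats)

# Synthetic stand-in for recorded EEG: 40 windows, 2 music-stimulus classes.
rng = np.random.default_rng(0)
X = np.stack([band_power_features(rng.standard_normal((N_CHANNELS, FS * WINDOW_S)))
              for _ in range(40)])
y = rng.integers(0, 2, size=40)

# A small ANN classifier; the hidden-layer size is arbitrary here.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```

In a real experiment, each window would be labeled with the music track (or style) playing while it was recorded, and evaluation would use held-out windows, ideally from a separate session, which is exactly where the inter-session variability noted above becomes a challenge.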