For human-computer interaction (HCI) to approach the quality of human-human interaction, an efficient approach to human emotion recognition is required. Emotion cues can be fused from several modalities, such as facial expression, hand gesture, acoustic data, and biophysiological data. In this paper, we address frame-based recognition of the universal human facial expressions (happiness, surprise, anger, disgust, fear, and sadness) using several geometrical features. Unlike many other geometry-based approaches, the frame-based method does not rely on prior knowledge of a person-specific neutral expression; such knowledge requires human intervention and is not available in real scenarios. Additionally, we provide a method to investigate the performance of geometry-based approaches under various facial point localization errors. From an evaluation on two public benchmark datasets, we find that using only eight facial points we can match the state-of-the-art recognition rate, even though the state-of-the-art geometry-based approach exploits features derived from 68 facial points and requires prior knowledge of the person-specific neutral expression. The expression recognition rate using geometrical features is adversely affected by errors in facial point localization, especially for expressions with subtle facial deformations.
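As a rough sketch of what frame-based geometric features might look like, the snippet below computes normalized pairwise distances among a small set of facial landmarks from a single frame. The specific landmark indices, the choice of pairwise distances, and the inter-ocular normalization are illustrative assumptions, not the paper's actual feature set; the point is that such features need no person-specific neutral-expression baseline.

```python
import numpy as np

def frame_features(points, eye_l=0, eye_r=1):
    """Per-frame geometric features from facial landmarks.

    points: (N, 2) array of (x, y) landmark coordinates for one frame.
    eye_l, eye_r: indices of the two eye landmarks (an assumption here).

    Distances are normalized by the inter-ocular distance, making the
    features scale-invariant, so no neutral-expression reference frame
    is needed.
    """
    points = np.asarray(points, dtype=float)
    # inter-ocular distance used as the normalization factor
    iod = np.linalg.norm(points[eye_l] - points[eye_r])
    # all pairwise Euclidean distances between landmarks
    diffs = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    # keep each unordered pair once (upper triangle, no diagonal)
    iu = np.triu_indices(len(points), k=1)
    return dists[iu] / iod

# Example: 8 hypothetical landmarks -> 8*7/2 = 28 features per frame
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0], [0.2, 0.8],
                [0.8, 0.8], [0.5, 0.5], [0.3, 1.2], [0.7, 1.2]])
feats = frame_features(pts)
print(feats.shape)  # (28,)
```

Such a per-frame feature vector could then be fed to any standard classifier; because the features are ratios of distances, they are invariant to the overall scale of the face in the image.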