Symbolic gestures are hand postures with conventionalized meanings. They are static gestures that can be performed without speech in complex environments containing variations in rotation and scale, and they may be produced under different illumination conditions or against occluding backgrounds. Any hand gesture recognition system must extract sufficiently discriminative features, such as hand-finger contextual information. In existing approaches, however, the depth information of hand fingers, which represents finger shape, is used only to a limited extent for extracting discriminative finger features. If we instead consider finger bending information (i.e., a finger overlapping the palm) extracted from the depth map and use it as a local feature, static gestures that differ only slightly become distinguishable. Our work corroborates this idea: we generate depth silhouettes with contrast variation to obtain more discriminative keypoints, which in turn improves recognition accuracy to 96.84%. We apply the Scale-Invariant Feature Transform (SIFT) algorithm, which takes the generated depth silhouettes as input and produces robust feature descriptors as output. These features, after conversion into fixed-dimensional feature vectors, are fed into a multiclass Support Vector Machine (SVM) classifier to measure accuracy. We tested our approach on a standard dataset containing 10 symbolic gestures representing the 10 numeric symbols (0-9), and then verified and compared the results across depth images, binary images, and images containing hand-finger edge information generated from the same dataset. The results show the highest accuracy when SIFT features are applied to depth images. Accurate recognition of numeric symbols performed through hand gestures has a significant impact on many Human-Computer Interaction (HCI) applications, including augmented reality, virtual reality, and other fields.
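The pipeline described above (SIFT descriptors converted into fixed-dimensional vectors, then classified by a multiclass SVM) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the abstract does not specify how variable-length SIFT descriptor sets are unified into one vector per image, so a common bag-of-visual-words aggregation (k-means vocabulary plus histogram pooling) is assumed here, and `fake_sift_descriptors` is a random placeholder standing in for real SIFT output on depth silhouettes.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Placeholder for per-image SIFT output: each image yields a variable-length
# set of 128-D descriptors. Real code would run SIFT on a depth silhouette;
# here the descriptors are synthetic, shifted by class label for illustration.
def fake_sift_descriptors(label, n_keypoints):
    return rng.normal(loc=label, scale=0.5, size=(n_keypoints, 128))

# 3 gesture classes, 10 images each (the paper uses 10 classes, 0-9)
images = [(g, fake_sift_descriptors(g, int(rng.integers(20, 40))))
          for g in range(3) for _ in range(10)]

# Build a visual vocabulary by clustering all descriptors
all_desc = np.vstack([d for _, d in images])
vocab = KMeans(n_clusters=8, n_init=5, random_state=0).fit(all_desc)

def to_fixed_vector(desc):
    # Histogram of visual-word assignments -> one fixed-length vector per image
    words = vocab.predict(desc)
    hist = np.bincount(words, minlength=8).astype(float)
    return hist / hist.sum()

X = np.array([to_fixed_vector(d) for _, d in images])
y = np.array([g for g, _ in images])

# Multiclass SVM (one-vs-rest decision function) on the unified vectors
clf = SVC(kernel="rbf", decision_function_shape="ovr").fit(X, y)
acc = clf.score(X, y)
```

In a real system, the placeholder descriptors would come from running SIFT (e.g., OpenCV's implementation) on the contrast-varied depth silhouettes, and accuracy would be measured on a held-out split rather than the training set.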