Recognizing human actions in videos is an active research topic with broad commercial potential. Most existing action recognition methods assume the same camera view during both training and testing, so the performance of these single-view approaches may be severely degraded by camera movement and viewpoint variation. In this paper, we address this problem by utilizing videos simultaneously recorded from multiple views. To this end, we propose a learning framework based on multitask random forests to exploit a discriminative mid-level representation for videos from multiple cameras. In the first step, subvolumes of continuous human-centered figures are extracted from the original videos. In the next step, spatiotemporal cuboids sampled from these subvolumes are characterized by multiple low-level descriptors. Then a set of multitask random forests is built upon multiview cuboids sampled at adjacent positions, constructing an integrated mid-level representation for the multiview subvolumes of one action. Finally, a random forest classifier is employed to predict the action category from the learned representation. Experiments conducted on the multiview IXMAS action dataset demonstrate that the proposed method can effectively recognize human actions depicted in multiview videos.
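The final step of the pipeline, classifying an action from its learned mid-level representation with a random forest, can be sketched as follows. This is a minimal illustration only: the feature matrix is synthetic, and the names (`n_subvolumes`, `n_features`, `n_actions`) and forest parameters are assumptions, not values from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Hypothetical dimensions: number of multiview subvolumes, length of the
# mid-level representation produced by the multitask random forests, and
# number of action categories.
n_subvolumes, n_features, n_actions = 120, 64, 4

# Synthetic stand-in for the integrated mid-level representations.
X = rng.normal(size=(n_subvolumes, n_features))
y = rng.integers(0, n_actions, size=n_subvolumes)

# Random forest classifier predicting the action category.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)

pred = clf.predict(X[:5])  # one predicted action label per subvolume
```

In practice `X` would be the concatenated histograms of leaf assignments (or similar statistics) produced by the multitask random forests over multiview cuboids, rather than random noise.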