The most significant barrier to success in human activity recognition is extracting and selecting the right features. In traditional methods, features are chosen by hand, which requires expert knowledge or a large amount of empirical study. Recently developed deep learning techniques can extract and select features automatically. Among the various deep learning methods, convolutional neural networks (CNNs) exploit local dependency and scale invariance and are well suited to temporal data such as accelerometer (ACC) signals. In this paper, we propose an efficient human activity recognition method, Iss2Image (Inertial sensor signal to Image): a novel encoding technique that transforms an inertial sensor signal into an image with minimal distortion, together with a CNN model for image-based activity classification. Iss2Image converts real-valued samples from the X, Y, and Z axes into three color channels to precisely infer correlations among successive sensor signal values in the three dimensions. We experimentally evaluated our method on several well-known datasets and on our own dataset collected from a smartphone and a smartwatch. The proposed method shows higher accuracy than other state-of-the-art approaches on the tested datasets.
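To make the signal-to-image idea concrete, the sketch below shows one simplified way a tri-axial accelerometer window could be mapped to an RGB image, with each axis assigned to one color channel. This is only an illustrative assumption for the general encoding concept, not the exact Iss2Image scheme described in the paper; the function name, window length, and per-channel min-max scaling are all hypothetical choices.

```python
import numpy as np


def signal_window_to_rgb(window, side=16):
    """Hypothetical sketch: encode a tri-axial sensor window as an RGB image.

    `window` has shape (side*side, 3); each axis (X, Y, Z) becomes one
    color channel, min-max scaled to 0..255. This simplified mapping is an
    assumption for illustration, not the exact Iss2Image encoding.
    """
    assert window.shape == (side * side, 3), "expected side*side samples x 3 axes"
    img = np.empty((side, side, 3), dtype=np.uint8)
    for ch in range(3):  # 0: X axis, 1: Y axis, 2: Z axis
        axis = window[:, ch].astype(np.float64)
        lo, hi = axis.min(), axis.max()
        scaled = np.zeros_like(axis) if hi == lo else (axis - lo) / (hi - lo)
        img[:, :, ch] = (scaled * 255).round().reshape(side, side).astype(np.uint8)
    return img


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Fake 256-sample accelerometer window (e.g. ~2.56 s at 100 Hz), 3 axes.
    window = rng.normal(0.0, 1.0, size=(256, 3))
    image = signal_window_to_rgb(window, side=16)
    print(image.shape, image.dtype)  # (16, 16, 3) uint8
```

The resulting fixed-size image can then be fed to an ordinary image CNN for activity classification, which is the overall pipeline the abstract describes.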