Current Issue: October–December | Volume: 2020 | Issue Number: 4 | Articles: 5
In recent years, streaming music platforms have become very popular, mainly due to the huge number of songs these systems make available to users. This enormous availability means that recommendation mechanisms that help users select the music they like need to be incorporated. However, developing reliable recommender systems in the music field involves dealing with many problems, some of which are generic and widely studied in the literature, while others are specific to this application domain and are therefore less well known. This work focuses on two important issues that have not received much attention: managing gray-sheep users and obtaining implicit ratings. The first is usually addressed by resorting to content information that is often difficult to obtain. The second is related to the sparsity problem that arises when there are obstacles to gathering explicit ratings. In this work, these shortcomings are addressed by means of a recommendation approach based on the users' streaming sessions. The method is aimed at managing the well-known power-law probability distribution representing the listening behavior of users. This proposal improves the recommendation reliability of collaborative filtering methods while reducing the complexity of the procedures used so far to deal with the gray-sheep problem.
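The abstract above mentions deriving implicit ratings from streaming sessions whose play counts follow a power-law distribution. A minimal illustrative sketch of that idea (the function name, the log-compression scheme, and the 1-to-5 rating scale are assumptions for illustration, not the paper's actual formulation):

```python
import math

def implicit_rating(play_count, max_count, scale=5):
    """Map a session play count to an implicit rating on a 1..scale range.

    Under a power-law listening distribution, a few tracks are played many
    times while most are played rarely, so a log transform compresses the
    heavy tail before normalizing. Illustrative sketch only.
    """
    if play_count <= 0:
        return 0  # no listening evidence -> no implicit rating
    return 1 + (scale - 1) * math.log1p(play_count) / math.log1p(max_count)

# A track played as often as the most-played track maps to the top rating;
# a single play maps near the bottom of the scale.
top = implicit_rating(100, 100)   # 5.0
low = implicit_rating(1, 100)     # ~1.6
```

The log compression is one common way to keep a handful of heavily replayed tracks from dominating the rating scale; the paper may normalize differently.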
Sign language encompasses the movement of the arms and hands as a means of communication for people with hearing disabilities. An automated sign recognition system requires two main courses of action: the detection of particular features and the categorization of particular input data. In the past, many approaches for classifying and detecting sign languages have been put forward to improve system performance. However, recent progress in the computer vision field has geared us towards further exploration of hand sign/gesture recognition with the aid of deep neural networks. Arabic sign language has witnessed unprecedented research activity on recognizing hand signs and gestures using deep learning models. This paper proposes a vision-based system that applies a CNN to recognize Arabic hand-sign-based letters and translate them into Arabic speech. The proposed system automatically detects hand-sign letters and speaks out the result in Arabic using a deep learning model. The system recognizes Arabic hand-sign-based letters with 90% accuracy, which makes it a highly dependable system. The accuracy can be further improved by using more advanced hand-gesture recognition devices such as Leap Motion or Xbox Kinect. After the Arabic hand-sign-based letters are recognized, the outcome is fed into a text-to-speech engine, which produces Arabic audio as output.
Mapping and masking are two important deep-learning-based speech enhancement methods that aim to recover the original clean speech from corrupted speech. In practice, excessively large recovery errors severely restrict the improvement in speech quality. In our preliminary experiment, we demonstrated that the mapping and masking methods have different conversion mechanisms and thus assumed that their recovery errors are highly likely to be complementary; this complementarity was then validated. Based on the principle of error minimization, we propose fusing mapping and masking for speech dereverberation. Specifically, we take the weighted mean of the amplitudes recovered by the two methods as the estimated amplitude of the fusion method. Experiments verify that the recovery error of the fusion method is further controlled. Compared with the existing geometric mean method, the proposed weighted mean method achieves better results. Speech dereverberation experiments show that the weighted mean method improves PESQ and SNR by 5.8% and 25.0%, respectively, compared with the traditional masking method.
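The fusion step described above reduces to a weighted arithmetic mean of the two magnitude estimates, contrasted with the geometric-mean baseline the abstract mentions. A minimal sketch (function names and the default weight are illustrative; the paper presumably tunes or learns the weight):

```python
import numpy as np

def fuse_amplitudes(a_map, a_mask, w=0.5):
    """Weighted arithmetic mean of the magnitude spectra estimated by the
    mapping and masking methods. The weight w is a hypothetical parameter."""
    a_map = np.asarray(a_map, dtype=float)
    a_mask = np.asarray(a_mask, dtype=float)
    return w * a_map + (1.0 - w) * a_mask

def geometric_mean_fusion(a_map, a_mask):
    """Geometric-mean baseline mentioned in the abstract, for comparison."""
    return np.sqrt(np.asarray(a_map, dtype=float) *
                   np.asarray(a_mask, dtype=float))
```

If the two methods' errors are complementary (one overestimates where the other underestimates), averaging their estimates cancels part of each error, which is the error-minimization intuition behind the fusion.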
In this paper, we propose a system for estimating user density in a closed space using high-frequency audio and the speakers and microphones of smart devices. High-frequency signals are emitted into the closed space by the server speaker of the density estimation system, and smart devices located in the space detect them via their microphones. The smart devices that detect the high frequencies send data to the server system, which aggregates the data received from the devices. To evaluate the performance of the proposed system, we conducted experiments with the density estimation system and 20 smart devices. According to the test results, the proposed system achieved 96.5% accuracy, confirming that it is very useful for density estimation. Therefore, this system can precisely estimate user density in a closed space, and it could be a useful technology for estimating the density of space users and measuring indoor space usage.
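The server-side aggregation described above amounts to counting how many devices reported detecting the high-frequency tone. A hypothetical sketch of that step (the report format, the `tone_detected` field, and the per-area normalization are assumptions, not the paper's protocol):

```python
def estimate_density(reports, room_area_m2):
    """Count devices that reported detecting the inaudible tone and
    normalize by floor area to get a density in devices per square meter.

    `reports` is assumed to be a list of dicts, one per responding device.
    """
    detected = sum(1 for r in reports if r.get("tone_detected"))
    return detected, detected / room_area_m2

# Example: 18 of 20 devices hear the tone in a 36 m^2 room.
reports = ([{"tone_detected": True}] * 18 +
           [{"tone_detected": False}] * 2)
count, density = estimate_density(reports, 36.0)  # (18, 0.5)
```

Treating one device as one user is itself an assumption; in practice some users carry no device or more than one, which is presumably part of what the 96.5% accuracy figure measures.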
In recent years there has been an increasing percentage of cochlear implant (CI) users who have usable residual hearing in the contralateral, nonimplanted ear, typically aided by acoustic amplification. This raises the issue of the extent to which the signal presented through the cochlear implant may influence how listeners process information in the acoustically stimulated ear. This multicenter retrospective study examined pre- to postoperative changes in speech perception in the nonimplanted ear, the implanted ear, and both together. Results in the latter two conditions showed the expected increases, but speech perception in the nonimplanted ear showed a modest yet meaningful decrease that could not be completely explained by changes in unaided thresholds, hearing aid malfunction, or several other demographic variables. Decreases in speech perception in the nonimplanted ear were more likely in individuals who had better levels of speech perception in the implanted ear, and in those who had better speech perception in the implanted than in the nonimplanted ear. This raises the possibility that, in some cases, bimodal listeners may rely on the higher-quality signal provided by the implant and may disregard or even neglect the input provided by the nonimplanted ear.