Current Issue: April - June | Volume: 2019 | Issue Number: 2 | Articles: 5
Recent evidence suggests the existence of shared neural resources for rhythm processing in language and music. Such overlaps could be the basis of the facilitating effect of regular musical rhythm on spoken word processing previously reported for typical children and adults, as well as adults with Parkinson's disease and children with developmental language disorders. The present study builds upon these previous findings by examining whether non-linguistic rhythmic priming also influences visual word processing, and the extent to which such a cross-modal priming effect of rhythm is related to individual differences in musical aptitude and reading skills. An electroencephalogram (EEG) was recorded while participants listened to a rhythmic tone prime, followed by a visual target word with a stress pattern that either matched or mismatched the rhythmic structure of the auditory prime. Participants were also administered standardized assessments of musical aptitude and reading achievement. Event-related potentials (ERPs) elicited by target words with a mismatching stress pattern showed an increased fronto-central negativity. Additionally, the size of the negative effect correlated with individual differences in musical rhythm aptitude and reading comprehension skills. Results support the existence of shared neurocognitive resources for linguistic and musical rhythm processing, and have important implications for the use of rhythm-based activities in reading interventions.
Competition in smartphone-related speech recognition technology is now in full swing with the widespread adoption of Internet of Things (IoT) devices. For robust speech recognition, it is necessary to detect speech signals in various acoustic environments. Speech/music classification, which facilitates optimized signal processing based on classification results, has been extensively adopted as an essential part of various electronics applications, such as multi-rate audio codecs, automatic speech recognition, and multimedia document indexing. In this paper, we propose a new technique to improve the robustness of the speech/music classifier in the enhanced voice service (EVS) codec, adopted as the voice-over-LTE (VoLTE) speech codec, using long short-term memory (LSTM). For effective speech/music classification, the feature vectors fed to the LSTM are chosen from the features of the EVS codec. To cover the diversity of music data, a large-scale dataset is used for training. Experiments show that LSTM-based speech/music classification outperforms the conventional EVS speech/music classification algorithm across various conditions and types of speech/music data, especially at low signal-to-noise ratios (SNRs).
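The core idea of the abstract above — running per-frame feature vectors through an LSTM and reading a speech/music decision off the final hidden state — can be sketched with a minimal, untrained LSTM forward pass. This is an illustrative toy, not the EVS feature set or the paper's trained model: the feature dimension, hidden size, and random weights are all assumptions for demonstration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class TinyLSTMClassifier:
    """Minimal LSTM forward pass over per-frame feature vectors, ending in a
    binary speech/music probability. Weights are random for illustration; a
    real classifier would be trained on a large labelled corpus, as the paper
    describes, and would use codec-derived features rather than toy inputs."""

    def __init__(self, n_features, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        # One stacked weight matrix covering the input, forget, cell, and
        # output gates, applied to [frame features ; previous hidden state].
        self.W = rng.normal(0.0, 0.1, size=(4 * n_hidden, n_features + n_hidden))
        self.b = np.zeros(4 * n_hidden)
        self.w_out = rng.normal(0.0, 0.1, size=n_hidden)
        self.n_hidden = n_hidden

    def forward(self, frames):
        H = self.n_hidden
        h = np.zeros(H)            # hidden state
        c = np.zeros(H)            # cell state
        for x in frames:           # one feature vector per audio frame
            z = self.W @ np.concatenate([x, h]) + self.b
            i = sigmoid(z[0:H])          # input gate
            f = sigmoid(z[H:2 * H])      # forget gate
            g = np.tanh(z[2 * H:3 * H])  # candidate cell update
            o = sigmoid(z[3 * H:4 * H])  # output gate
            c = f * c + i * g
            h = o * np.tanh(c)
        # Squash the final hidden state to a speech-vs-music probability.
        return sigmoid(self.w_out @ h)

# 20 frames of 8-dimensional toy features standing in for codec features.
rng = np.random.default_rng(1)
clf = TinyLSTMClassifier(n_features=8, n_hidden=16)
p_speech = clf.forward(rng.normal(size=(20, 8)))
print(f"P(speech) = {p_speech:.3f}")
```

In practice the gate weights would be learned by backpropagation through time, and the decision would typically be smoothed across successive frames before switching codec modes.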
The performance of many speech processing algorithms depends on modeling speech signals using appropriate probability distributions. Various distributions, such as the Gamma, Gaussian, Generalized Gaussian, and Laplace distributions, as well as multivariate Gaussian and Laplace distributions, have been proposed in the literature to model speech segments of different lengths, typically below 200 ms, in different domains. In this paper, we attempted to fit Laplace and Gaussian distributions to obtain a statistical model of speech short-time Fourier transform coefficients with high spectral resolution (segment length >500 ms) and low spectral resolution (segment length <10 ms). Distribution fitting of the Laplace and Gaussian distributions was performed using maximum-likelihood estimation. It was found that speech short-time Fourier transform coefficients with high spectral resolution can be modeled using the Laplace distribution. For low spectral resolution, neither the Laplace nor the Gaussian distribution provided a good fit. Spectral-domain modeling of speech with different depths of spectral resolution is useful in understanding the perceptual stability of hearing, which is necessary for the design of digital hearing aids.
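The fitting procedure described above has closed-form maximum-likelihood estimators: the Laplace MLE is the sample median and mean absolute deviation, while the Gaussian MLE is the sample mean and standard deviation, and the two fits can be compared by log-likelihood. The sketch below illustrates this on synthetic heavy-tailed data standing in for real STFT coefficients (which the abstract reports are Laplace-like at high spectral resolution); the toy data and sample size are assumptions, not the paper's corpus.

```python
import numpy as np

def laplace_mle(x):
    # Laplace MLE: location = sample median, scale = mean absolute deviation.
    mu = np.median(x)
    b = np.mean(np.abs(x - mu))
    return mu, b

def gaussian_mle(x):
    # Gaussian MLE: sample mean and (biased) standard deviation.
    return np.mean(x), np.std(x)

def mean_loglik_laplace(x, mu, b):
    # Average log-density of Laplace(mu, b) over the sample.
    return np.mean(-np.log(2.0 * b) - np.abs(x - mu) / b)

def mean_loglik_gaussian(x, mu, s):
    # Average log-density of N(mu, s^2) over the sample.
    return np.mean(-0.5 * np.log(2.0 * np.pi * s**2) - (x - mu)**2 / (2.0 * s**2))

# Toy heavy-tailed samples standing in for the real parts of STFT
# coefficients taken from a long (>500 ms) analysis window.
rng = np.random.default_rng(0)
x = rng.laplace(loc=0.0, scale=1.0, size=50_000)

mu_l, b = laplace_mle(x)
mu_g, s = gaussian_mle(x)
ll_l = mean_loglik_laplace(x, mu_l, b)
ll_g = mean_loglik_gaussian(x, mu_g, s)
print(f"Laplace mean log-likelihood:  {ll_l:.3f}")
print(f"Gaussian mean log-likelihood: {ll_g:.3f}")
# On heavy-tailed data the Laplace fit attains the higher likelihood,
# mirroring the paper's finding for high-resolution coefficients.
```

A goodness-of-fit test (e.g. Kolmogorov-Smirnov) on the fitted parameters would make the comparison formal; the log-likelihood comparison above is the simplest diagnostic.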
One of the most significant enclosures in worship spaces is that of the choir. Historically, the choir is a semi-enclosed and privileged area reserved for the clergy, whose position and configuration give it a private character. Regarding the generation and transformation of ecclesial interior spaces, the choir plays a role of the first magnitude. Its shape and location produce, on occasions, major modifications that significantly affect the acoustics of these indoor spaces. In the case of Spanish cathedrals, whose design responds to the so-called "Spanish type", the central position of the choir, enclosed by high stonework walls on three of its sides and with numerous wooden stalls inside, breaks up the space in the main nave, thereby generating other new spaces, such as the trascoro. The aim of this work was to analyse the acoustic evolution of the choir as one of the main elements that configure the sound space of Spanish cathedrals. By means of in situ measurements and simulation models, the main acoustic parameters were evaluated, both in their current state and in their original configurations that have since disappeared. This analysis enabled the various acoustic conditions existing between the choir itself and the area of the faithful to be verified, and the significant improvement of the acoustic quality in the choir space to become apparent. The effect on the acoustic parameters is highly significant, with slight differences in the choir, where the values are appropriate for Gregorian chants and sung text is suitably intelligible. High values are also obtained in the area of the faithful, which lacked specific acoustic requirements at the time of construction.