Current Issue: January-March 2023 · Volume: 2023 · Issue: 1 · Articles: 5
To improve the effect of English semantic analysis, this paper, with the support of natural language processing, analyzes English syntactic analysis and a neutral-set word sense disambiguation strategy, and it solves the model parameters through data training so as to obtain the probability distribution of the maximum entropy model at each order. Moreover, by comparing the model's predicted probabilities for each judgment mode with the experimental data, it is found that the first-order maximum entropy model (the independent model) deviates considerably from the data. Therefore, when making semantic judgments about English data, we cannot consider only second-order correlations but should also account for higher-order correlations. The simulation results show that the English syntactic analysis and neutral-set word sense disambiguation strategy proposed in this paper from the perspective of natural language processing are highly effective....
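As an illustration of the kind of maximum entropy model this abstract refers to, the sketch below trains a multinomial logistic regression (a standard maximum entropy classifier) to disambiguate a word sense from its context. The example contexts, sense labels, and feature choices are placeholders for illustration, not the dataset or feature set used in the paper.

```python
# Minimal sketch of a maximum-entropy (multinomial logistic regression)
# word sense disambiguation classifier. Data and features are toy examples.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each training item is the context window around an ambiguous word ("bank"),
# paired with the sense label to be predicted.
contexts = [
    "deposit money at the bank downtown",
    "the bank raised its interest rates",
    "we walked along the river bank",
    "fishing from the muddy bank of the stream",
]
senses = ["FINANCE", "FINANCE", "RIVER", "RIVER"]

# Bag-of-words context features feed the max-ent classifier; bigrams act as a
# simple stand-in for the higher-order correlations discussed above.
model = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),   # unigram + bigram context features
    LogisticRegression(max_iter=1000),     # multinomial maximum entropy model
)
model.fit(contexts, senses)

print(model.predict(["she sat on the bank of the lake"]))  # expected: RIVER
```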
With the development of art education and information technology, it is increasingly necessary to use computer and multimedia technology to assist teaching in today's music classes, so as to cultivate students' independent inquiry ability and practice ability. This paper studies the design of an interactive music teaching intelligence system based on artificial intelligence and proposes a music learning model based on the RBF algorithm, which helps to enhance students' inquiry ability while preserving the guiding role of teachers. Through peer teaching, students become the main subject of teaching and learning, which stimulates their enthusiasm for and awareness of music learning....
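For readers unfamiliar with RBF models, the following is a minimal sketch of a radial basis function network: Gaussian hidden units centred by k-means and a linear output layer fit by least squares. The toy learner features and target are synthetic assumptions for illustration only; the article's actual model and data are not reproduced here.

```python
# Illustrative RBF-network sketch, not the system described in the article.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(200, 2))        # e.g. practice time, prior score (assumed)
y = np.sin(3 * X[:, 0]) + 0.5 * X[:, 1]     # synthetic learning-outcome target

# 1. Choose RBF centres with k-means.
centers = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X).cluster_centers_
width = 0.3                                 # shared Gaussian width (assumed)

def rbf_features(X):
    # Gaussian activation of every sample against every centre.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * width ** 2))

# 2. Fit the linear output weights by least squares.
Phi = rbf_features(X)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)

# 3. Predict for a new learner profile.
print(rbf_features(np.array([[0.5, 0.8]])) @ w)
```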
Music information retrieval is indispensable, and music is divided into multiple genres; assigning music to set genre categories is an essential function of intelligent music recommendation systems. To improve music genre classification and model construction, this paper combines a multihead attention mechanism with a music genre classification algorithm to build a classification model, and it analyzes the key technologies of music beamforming. It gives a detailed description and derivation of the array antenna model, the principle of music beamforming, and the performance evaluation criteria for adaptive music beamforming. The second half studies in detail the non-blind classical LMS algorithm, the RLS algorithm, and the variable-step-size LMS algorithm for adaptive beamforming, and a music genre classification model based on the multihead attention mechanism is constructed. The experiments show that the proposed multihead-attention-based music genre classification algorithm has clear advantages over traditional algorithms and contributes to music genre classification....
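To make the central idea concrete, here is a hedged PyTorch sketch of a genre classifier built around self-attention over audio feature frames. The input shape, feature dimension, number of heads, and number of genres are assumptions chosen for illustration; the paper's actual architecture and its beamforming front end are not reproduced.

```python
# Hedged sketch of a genre classifier using nn.MultiheadAttention.
import torch
import torch.nn as nn

class GenreClassifier(nn.Module):
    def __init__(self, feat_dim=64, n_heads=4, n_genres=10):
        super().__init__()
        self.attn = nn.MultiheadAttention(feat_dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(feat_dim)
        self.head = nn.Linear(feat_dim, n_genres)

    def forward(self, x):                 # x: (batch, frames, feat_dim)
        attn_out, _ = self.attn(x, x, x)  # self-attention over time frames
        h = self.norm(x + attn_out)       # residual connection + layer norm
        return self.head(h.mean(dim=1))   # pool over frames, project to genres

# Dummy batch of 8 clips, each with 100 spectral feature frames.
logits = GenreClassifier()(torch.randn(8, 100, 64))
print(logits.shape)                       # torch.Size([8, 10])
```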
To address the problem that the functionality and performance of traditional English machine translation systems cannot meet the needs of intelligent applications, the author proposes an English vocabulary and speech corpus recognition system based on computer image processing. After designing the overall structure of the system, the hardware is designed around the server and the translator. In the software design, the semantic features of English sentences input during human-computer interaction are analyzed with an enhancement algorithm, a decoding algorithm is designed according to the analysis results, and an English machine translation model is constructed. Experimental results show that as the number of translated sentences increases from 100 to 1000, the BLEU indicator rises steadily from 7 to 10, indicating that the proposed English vocabulary and speech corpus recognition system based on computer image processing is more efficient....
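As a reference point for the BLEU figures quoted above, the snippet below shows how a corpus-level BLEU score is typically computed with the sacrebleu package. The hypothesis and reference sentences are placeholders; this is a generic evaluation sketch, not the paper's evaluation pipeline.

```python
# Illustrative BLEU computation with sacrebleu; sentences are placeholders.
import sacrebleu

hypotheses = [
    "the system translates english sentences into chinese",
    "speech features are analysed before decoding",
]
references = [
    "the system translates english sentences into chinese",
    "speech features are analyzed before the decoding step",
]

# corpus_bleu takes a list of hypothesis strings and a list of reference lists.
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"BLEU = {bleu.score:.2f}")
```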
Understanding sentiment and emotion in speech is a challenging task in human multimodal language research. In certain cases, however, such as telephone calls, only audio data are available. In this study, we independently evaluated sentiment analysis and emotion recognition from speech using recent self-supervised learning models, specifically universal speech representations with speaker-aware pre-training. Three model sizes were evaluated on three sentiment tasks and one emotion task. The evaluation revealed that the best results were obtained on two-class sentiment analysis, in terms of both weighted and unweighted accuracy (81% and 73%). This binary classification with unimodal acoustic features also performed competitively with previous methods that used multimodal fusion. The models failed to make accurate predictions in the emotion recognition task and in the sentiment analysis tasks with larger numbers of classes. The unbalanced nature of the datasets may also have contributed to the performance degradation observed in the six-class emotion, three-class sentiment, and seven-class sentiment tasks....
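The sketch below illustrates the general recipe of using a speaker-aware self-supervised speech encoder (UniSpeech-SAT in the Hugging Face transformers library) as a frozen feature extractor for binary speech sentiment classification. The checkpoint name, the mean-pooling choice, and the untrained linear head are assumptions made for illustration; they are not the exact setup evaluated in the study.

```python
# Sketch: frozen UniSpeech-SAT encoder + linear head for two-class sentiment.
import torch
import torch.nn as nn
from transformers import AutoFeatureExtractor, UniSpeechSatModel

ckpt = "microsoft/unispeech-sat-base"          # assumed checkpoint name
extractor = AutoFeatureExtractor.from_pretrained(ckpt)
encoder = UniSpeechSatModel.from_pretrained(ckpt).eval()

classifier = nn.Linear(encoder.config.hidden_size, 2)   # positive / negative

def predict_sentiment(waveform, sampling_rate=16000):
    inputs = extractor(waveform, sampling_rate=sampling_rate, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state     # (1, frames, hidden)
    pooled = hidden.mean(dim=1)                          # average over time
    return classifier(pooled).softmax(dim=-1)

# One second of silence as a stand-in for a real utterance.
print(predict_sentiment([0.0] * 16000))
```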