Current Issue: April – June, Volume 2021, Issue 2 (5 Articles)
To improve the accuracy of multiple channel integration quality (MCIQ) evaluation, this paper proposes a comprehensive evaluation method using the nonlinear autoregressive exogenous (NARX) model and constructs an index system. First, the entropy method is used to determine the objective weight of each indicator. The indicators used in this paper are process consistency, information consistency, emotional value, procedural value, service structure transparency, online result value, business relevance, and online purchase intention. Second, an improved gray relational analysis (GRA) algorithm is used to obtain the comprehensive gray relational degree between the standard samples and the tested samples for the above eight indicators. The dataset preprocessed with the GRA algorithm is then used to train the NARX model, and the trained model is used to evaluate multiple channel integration quality comprehensively. Finally, standardized methods are used to quantify the evaluation results, providing new ideas and theoretical guidance for helping traditional retailers use the advantages of multiple channels to expand their online business. In the experimental part, 50,000 consecutive samples of a product collected over three months are used as the dataset. Through the GRA method and the NARX model, the comprehensive gray relational degree between the test samples and the ideal sample is obtained, and the results are quantified. Experiments show that, compared with the GRA method alone, the proposed method achieves a higher degree of fit between the output value and the target value.
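The abstract does not reproduce the formulas, so the following is a minimal, hypothetical sketch of the two preprocessing steps it names, entropy weighting and gray relational analysis; the toy data, the larger-is-better normalization, and the distinguishing coefficient rho = 0.5 are assumptions, and the NARX training stage is omitted.

```python
import numpy as np

def entropy_weights(X):
    """Objective indicator weights via the entropy method.
    X: (n_samples, n_indicators), all values positive, larger is better."""
    P = X / X.sum(axis=0)                         # each sample's share per indicator
    P = np.where(P <= 0, 1e-12, P)                # guard against log(0)
    e = -(P * np.log(P)).sum(axis=0) / np.log(len(X))  # entropy of each indicator
    d = 1.0 - e                                   # degree of divergence
    return d / d.sum()                            # normalized weights

def gray_relational_degree(X, ref, w, rho=0.5):
    """Weighted gray relational degree of each tested sample against the
    reference (ideal) sequence; rho is the distinguishing coefficient."""
    delta = np.abs(X - ref)                       # absolute difference series
    xi = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
    return xi @ w                                 # comprehensive relational degree

# toy data: 5 tested samples x 8 indicators, already normalized to [0, 1]
rng = np.random.default_rng(0)
X = rng.uniform(0.1, 1.0, size=(5, 8))
w = entropy_weights(X)
r = gray_relational_degree(X, ref=X.max(axis=0), w=w)
print("weights:", w.round(3))
print("gray relational degrees:", r.round(3))
```

The relational degrees against the ideal (per-indicator maximum) sample would then serve as the preprocessed inputs for training the NARX model.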
Owing to improvements in the quality of industrial products, zero-failure data often arise during reliability life tests or in the service environment, and such problems cannot be handled with traditional reliability estimation methods. For the processing and analysis of zero-failure data, several researchers have proposed confidence limit assessment methods. Building on existing research, a confidence limit method set (CLMS) is established under the Weibull distribution for the reliability estimation of zero-failure data. The method set includes the unilateral confidence limit method and the optimal confidence limit method, so that almost all existing grouping types of zero-failure data can be evaluated quickly, and multiple methods can be applied in parallel to the same problem. The effectiveness and efficiency of the CLMS are verified with numerical simulation examples, and the possibility of analyzing multiple groups of zero-failure data with a confidence limit method originally suited to a single group of zero-failure data is expanded. Finally, the practical effect of the method set is verified on a single group of zero-failure data from rolling bearings and multiple groups of zero-failure data from torque motors. The evaluation results show that the CLMS has obvious advantages in practical engineering applications.
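The CLMS itself is not given in the abstract; as a minimal illustration of the single-group case it targets, the sketch below implements the classical one-sided lower confidence limit for zero-failure Weibull data with a known shape parameter (the bearing test numbers are invented).

```python
import math

def weibull_zero_failure_lcl(n, t0, m, conf=0.90):
    """One-sided lower confidence limit for zero-failure Weibull data.
    n    : number of units tested, all surviving to time t0 (zero failures)
    m    : Weibull shape parameter, assumed known
    conf : confidence level
    Returns (eta_L, R_L): the lower limit on the scale parameter and the
    corresponding reliability lower limit at t0. Based on the classical
    'success run' bound:  exp(-n * (t0 / eta_L)**m) = 1 - conf.
    """
    eta_L = t0 * (n / (-math.log(1.0 - conf))) ** (1.0 / m)
    R_L = math.exp(-(t0 / eta_L) ** m)    # equals (1 - conf)**(1/n)
    return eta_L, R_L

# hypothetical example: 20 rolling bearings, all surviving 5000 h,
# shape m = 2, 90% confidence
eta_L, R_L = weibull_zero_failure_lcl(n=20, t0=5000.0, m=2.0, conf=0.90)
print(f"eta_L = {eta_L:.0f} h, R_L(5000 h) = {R_L:.4f}")
```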
The stiffness degradation of a servo turret inevitably reduces the accuracy of the cutter head and of tool changes. To account for the stiffness degradation process of the servo turret, cumulative stiffness damage theory is introduced into the vibration differential equation and combined with the stochastic finite element method and reliability theory to establish a mathematical model of the reliability and reliability sensitivity of a vibration transmission path system with random parameters. Taking a typical power servo turret as an example, the reliability and the reliability sensitivity to each random parameter at its mean value were obtained as functions of excitation frequency and time. The results show that the shift of the reliability and the reliability sensitivity to random parameters over time is caused by the stiffness degradation, that the peak value of the reliability sensitivity fluctuates with time, and that the peak value in the frequency domain at the initial time is not necessarily the maximum value in the time domain. The accuracy of the proposed method is further confirmed by the Monte Carlo method. Optimizing the sensitive parameters can enhance system stability and effectively prevent resonance failure caused by changes in the resonance region.
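As a loose illustration of the Monte Carlo verification step the abstract mentions, the sketch below estimates the reliability of a single-degree-of-freedom vibration path whose stiffness degrades linearly with service time; every distribution, the degradation law, and the amplitude limit are invented stand-ins, and reliability sensitivities could be approximated by finite differences of this estimator with respect to the parameter means.

```python
import numpy as np

def reliability_mc(t, omega, n_mc=200_000, seed=1):
    """Monte Carlo reliability of a 1-DOF vibration path with degrading
    stiffness: failure occurs when the steady-state amplitude under a
    harmonic force exceeds an allowable limit."""
    rng = np.random.default_rng(seed)
    m  = rng.normal(10.0, 0.5, n_mc)        # mass [kg]
    c  = rng.normal(40.0, 4.0, n_mc)        # damping [N*s/m]
    k0 = rng.normal(4.0e5, 2.0e4, n_mc)     # initial stiffness [N/m]
    k  = k0 * (1.0 - 1.0e-5 * t)            # assumed cumulative stiffness damage
    F  = 100.0                              # force amplitude [N]
    amp = F / np.sqrt((k - m * omega**2) ** 2 + (c * omega) ** 2)
    return np.mean(amp < 1.5e-3)            # P(amplitude < 1.5 mm)

# as stiffness degrades, the natural frequency drifts toward the 180 rad/s
# excitation and the estimated reliability drops (resonance failure)
for t in (0.0, 1.0e4, 2.0e4):               # service time [h]
    print(f"t = {t:8.0f} h  ->  R = {reliability_mc(t, omega=180.0):.3f}")
```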
In the nervous system, information is conveyed by sequences of action potentials, called spike trains. As MacKay and McCulloch suggested, spike trains can be represented as bit sequences coming from Information Sources (IS). Previously, we studied the relations between spike trains' Information Transmission Rates (ITR), their correlations, and their frequencies. Here, I concentrate on the problem of how spike fluctuations affect the ITR. The IS are typically modeled as stationary stochastic processes, which I consider here as two-state Markov processes. As the spike trains' fluctuation measure, I take the standard deviation σ, which measures the average fluctuation of spikes around the average spike frequency. I found that the character of the relation between the ITR and signal fluctuations strongly depends on the parameter s, defined as a sum of transition probabilities from the no-spike state to the spike state. The estimate of the ITR was obtained through expressions depending on the signal fluctuations and the parameter s. It turned out that for s < 1, the quotient ITR/σ has a maximum and can tend to zero depending on the transition probabilities, while for s > 1, ITR/σ is separated from 0. Additionally, it was shown that the quotient of the ITR by the variance, ITR/σ², behaves in a completely different way. Similar behavior was observed when the classical Shannon entropy terms in the Markov entropy formula were replaced by their polynomial approximations. My results suggest that in a noisier environment (s > 1), to achieve appropriate reliability and efficiency of transmission, an IS with a higher tendency to transition from the no-spike to the spike state should be applied. Such a selection of appropriate parameters plays an important role in designing learning mechanisms to obtain networks with higher performance.
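For concreteness, here is a minimal sketch of the quantities discussed for a two-state (no-spike/spike) Markov chain with transition probabilities p01 = P(spike | no spike) and p11 = P(spike | spike): the ITR is taken as the Markov entropy rate and σ as the standard deviation of the stationary binary marginal; reading s as p01 + p11 is my assumption, since the abstract's definition is terse.

```python
import numpy as np

def itr_and_sigma(p01, p11):
    """Entropy rate (bits/step) and stationary spike-count standard
    deviation of a two-state Markov spike/no-spike process."""
    h2 = lambda p: 0.0 if p in (0.0, 1.0) else \
        -(p * np.log2(p) + (1 - p) * np.log2(1 - p))   # binary entropy
    pi1 = p01 / (p01 + 1.0 - p11)                # stationary P(spike)
    itr = (1 - pi1) * h2(p01) + pi1 * h2(p11)    # Markov entropy rate
    sigma = np.sqrt(pi1 * (1.0 - pi1))           # fluctuation around mean rate
    return itr, sigma

# compare a low-s and a high-s source (assumed reading: s = p01 + p11)
for p01, p11 in [(0.2, 0.3), (0.7, 0.8)]:
    itr, sigma = itr_and_sigma(p01, p11)
    print(f"s = {p01 + p11:.1f}: ITR = {itr:.3f} bits, "
          f"sigma = {sigma:.3f}, ITR/sigma = {itr / sigma:.3f}")
```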
In recent years, water quality has been threatened by various pollutants. Therefore, modeling and predicting water quality have become very important for controlling water pollution. In this work, advanced artificial intelligence (AI) algorithms are developed to predict the water quality index (WQI) and the water quality classification (WQC). For WQI prediction, artificial neural network models, namely a nonlinear autoregressive neural network (NARNET) and a long short-term memory (LSTM) deep learning algorithm, were developed. In addition, three machine learning algorithms, namely support vector machine (SVM), K-nearest neighbor (K-NN), and Naive Bayes, were used for WQC forecasting. The dataset used has seven significant parameters, and the developed models were evaluated with several statistical measures. The results revealed that the proposed models can predict the WQI accurately and classify the water quality with superior robustness. The prediction results showed that the NARNET model performed slightly better than the LSTM for predicting WQI values, and the SVM algorithm achieved the highest accuracy (97.01%) for WQC prediction. Furthermore, the NARNET and LSTM models achieved similar accuracy in the testing phase, with a slight difference in the regression coefficient (R_NARNET = 96.17% and R_LSTM = 94.21%). This kind of promising research can contribute significantly to water management.
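As a hypothetical sketch of the WQC step, the snippet below trains an SVM classifier in scikit-learn; the synthetic seven-parameter features, the WQI scoring rule, and the four-class quantile split are invented stand-ins for the paper's dataset.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# synthetic stand-in for the seven-parameter dataset; the real features,
# WQC labels, and classification rule are not given in the abstract
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 7))                  # 7 water-quality parameters
wqi = X @ rng.uniform(0.5, 1.5, 7)              # hypothetical WQI score
y = np.digitize(wqi, np.quantile(wqi, [0.25, 0.5, 0.75]))  # 4 WQC classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
print(f"WQC test accuracy: {clf.score(X_te, y_te):.2%}")
```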