Current Issue : April - June | Volume : 2016 | Issue Number : 2 | Articles : 4
Many problem domains utilize discriminant analysis, for example, classification, prediction, and diagnosis, by applying artificial intelligence and machine learning. However, the results are rarely perfect, and errors can cause significant losses. Hence, end users are best served when they have performance information relevant to their needs. Starting with the most basic questions, this study considers eight summary statistics often seen in the literature and evaluates their end-user efficacy. The results lead to proposed criteria necessary for end-user-efficacious summary statistics. Testing the same eight summary statistics shows that none satisfy all of the criteria. Hence, two criteria-compliant summary statistics are introduced. To show how end users can benefit, the utility of the measures is demonstrated on two problems. A key finding of this study is that researchers can make their test outcomes more relevant to end users with minor changes in their analyses and presentation....
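The abstract does not list the eight summary statistics it evaluates, but a minimal sketch of the kind of confusion-matrix summaries commonly reported for discriminant tests may clarify what "summary statistics" means here. The choice and names of metrics below are assumptions for illustration, not the paper's set.

```python
def summary_stats(tp, fp, fn, tn):
    """Common summary statistics for a binary discriminant test,
    computed from the confusion-matrix counts (illustrative set only)."""
    total = tp + fp + fn + tn
    sens = tp / (tp + fn)            # sensitivity (recall)
    spec = tn / (tn + fp)            # specificity
    ppv = tp / (tp + fp)             # precision / positive predictive value
    npv = tn / (tn + fn)             # negative predictive value
    acc = (tp + tn) / total          # accuracy
    f1 = 2 * ppv * sens / (ppv + sens)
    return {"sensitivity": sens, "specificity": spec, "ppv": ppv,
            "npv": npv, "accuracy": acc, "f1": f1}

# Example: 100 cases, 40 true positives, 10 false positives,
# 5 false negatives, 45 true negatives.
stats = summary_stats(tp=40, fp=10, fn=5, tn=45)
```

End users typically care about different entries of this dictionary depending on their problem (e.g., a diagnostician may weight sensitivity over accuracy), which is the kind of relevance question the study raises.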
This paper presents the speed control of a separately excited DC motor using a Neural Network (NN) controller in the field-weakening region. For armature control, a speed controller is used in the outer loop and a current controller in the inner loop. The function of the NN is to predict the field current that realizes the field weakening needed to drive the motor above rated speed. The parameters of the NN are optimized by the Social Spider Optimization (SSO) algorithm. The system has been implemented in MATLAB/SIMULINK. The simulation results show that the proposed method gives good performance and is feasible to apply in place of other conventional combined control methods....
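For readers unfamiliar with field weakening: a conventional baseline (not the paper's NN predictor) reduces the field current inversely with speed above the rated speed so that back-EMF stays bounded. A minimal sketch of that classic inverse law, which the NN controller effectively learns to replace, is below; function and parameter names are assumptions.

```python
def field_current_ref(speed, rated_speed, rated_field_current):
    """Classic inverse field-weakening law (conventional baseline).

    Below rated speed (constant-torque region) the field current is held
    at its rated value; above rated speed it is reduced in inverse
    proportion to speed so the back-EMF does not exceed its rated value.
    """
    if speed <= rated_speed:
        return rated_field_current
    return rated_field_current * rated_speed / speed

# Example: a motor rated at 1500 rpm driven at 3000 rpm
# gets half the rated field current.
i_f = field_current_ref(speed=3000, rated_speed=1500, rated_field_current=1.0)
```

The paper's NN replaces this fixed law with a learned mapping tuned by SSO, which can account for saturation and load effects the inverse law ignores.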
The probabilistic neural network has successfully solved all kinds of engineering problems in various fields since it was proposed. In a probabilistic neural network, the Spread parameter has a great influence on performance, and the network will produce poor predictions if Spread is improperly selected; selecting the optimal Spread manually is difficult. In this article, a variant of the probabilistic neural network with a self-adaptive strategy, called the self-adaptive probabilistic neural network, is proposed. In the self-adaptive probabilistic neural network, Spread is self-adaptively adjusted and selected, and the best selected Spread then guides the training and testing of the network. In addition, two simplification strategies are incorporated into the proposed self-adaptive probabilistic neural network with the aim of further improving its performance, yielding two simplified versions (simplified self-adaptive probabilistic neural networks 1 and 2). The variants of the self-adaptive probabilistic neural network are then applied to the transformer fault diagnosis problem. Comparisons with the basic probabilistic neural network, traditional back propagation, the extreme learning machine, the general regression neural network, and the self-adaptive extreme learning machine show experimentally that the self-adaptive probabilistic neural networks achieve more accurate prediction and better generalization performance on the transformer fault diagnosis problem....
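To make the role of Spread concrete: a probabilistic neural network is essentially a Parzen-window classifier with Gaussian kernels, and Spread is the kernel width. A minimal sketch follows, with Spread selection shown as a leave-one-out grid search; the paper's self-adaptive strategy is more elaborate, so this selection procedure is an assumption for illustration.

```python
import numpy as np

def pnn_predict(X_train, y_train, X_test, spread):
    """Minimal probabilistic neural network (Parzen-window classifier).

    Each test point is assigned the class whose training samples give
    the largest average Gaussian-kernel response; `spread` is the
    kernel width that the paper's method selects self-adaptively.
    """
    classes = np.unique(y_train)
    preds = []
    for x in X_test:
        d2 = np.sum((X_train - x) ** 2, axis=1)
        k = np.exp(-d2 / (2.0 * spread ** 2))
        scores = [k[y_train == c].mean() for c in classes]
        preds.append(classes[int(np.argmax(scores))])
    return np.array(preds)

def select_spread(X, y, candidates):
    """Pick the Spread with the best leave-one-out accuracy
    (a simple stand-in for the paper's self-adaptive selection)."""
    best, best_acc = candidates[0], -1.0
    for s in candidates:
        correct = 0
        for i in range(len(X)):
            mask = np.arange(len(X)) != i
            p = pnn_predict(X[mask], y[mask], X[i:i + 1], s)
            correct += int(p[0] == y[i])
        acc = correct / len(X)
        if acc > best_acc:
            best, best_acc = s, acc
    return best
```

The selected Spread is then reused for all subsequent predictions, mirroring the abstract's description of the best selected Spread guiding training and testing.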
The ability to predict the activities of users is important for recommender systems and analyses of social media. User activities can be represented in terms of relationships involving three or more things (e.g., when a user tags items on a webpage or tweets about a location he or she visited). Such relationships can be represented as a tensor, and tensor factorization is becoming an increasingly important means of predicting users' possible activities. However, the prediction accuracy of factorization is poor for ambiguous and/or sparsely observed objects. Our solution, Semantic Sensitive Tensor Factorization (SSTF), incorporates the semantics expressed by an object vocabulary or taxonomy into the tensor factorization. SSTF first links objects to classes in the vocabulary (taxonomy) and resolves the ambiguities of objects that may have several meanings. Next, it lifts sparsely observed objects to their classes to create augmented tensors. Then, it factorizes the original tensor and the augmented tensors simultaneously. Since it shares semantic knowledge during the factorization, it can resolve the sparsity problem. Furthermore, as a result of the natural use of semantic information in tensor factorization, SSTF can combine heterogeneous and unbalanced datasets from different Linked Open Data sources. We implemented SSTF in the Bayesian probabilistic tensor factorization framework. Experiments on publicly available large-scale datasets using vocabularies from Linked Open Data and a taxonomy from WordNet show that SSTF has up to 12% higher accuracy than state-of-the-art tensor factorization methods....
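For context on what SSTF builds on: the underlying operation is factorizing a three-way tensor (e.g., user × item × tag) into low-rank factor matrices. A minimal CP-style factorization by gradient descent is sketched below as a simplified stand-in; SSTF itself uses the Bayesian probabilistic framework and jointly factorizes augmented tensors, neither of which is shown here.

```python
import numpy as np

def cp_factorize(T, rank, steps=3000, lr=0.02, seed=0):
    """Plain CP tensor factorization by gradient descent (illustrative
    stand-in for the Bayesian probabilistic factorization used by SSTF).

    Approximates T[i,j,k] ~ sum_r U[i,r] * V[j,r] * W[k,r] by minimizing
    the squared reconstruction error.
    """
    rng = np.random.default_rng(seed)
    I, J, K = T.shape
    U = rng.normal(scale=0.5, size=(I, rank))
    V = rng.normal(scale=0.5, size=(J, rank))
    W = rng.normal(scale=0.5, size=(K, rank))
    for _ in range(steps):
        R = np.einsum('ir,jr,kr->ijk', U, V, W) - T  # residual tensor
        U -= lr * np.einsum('ijk,jr,kr->ir', R, V, W)
        V -= lr * np.einsum('ijk,ir,kr->jr', R, U, W)
        W -= lr * np.einsum('ijk,ir,jr->kr', R, U, V)
    return U, V, W
```

In a recommender setting, the inner product of the learned factor rows scores unobserved (user, item, tag) triples; SSTF's contribution is sharing class-level semantic knowledge across such factorizations to handle sparse and ambiguous objects.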