Current Issue: January - March, Volume: 2012, Issue Number: 1, Articles: 7
One of the key issues in robust parameter design is to configure the controllable factors to minimize the variance due to noise variables. However, it can sometimes happen that the number of control variables is greater than the number of noise variables. When this occurs, two important situations arise. One is that the variance due to noise variables can be brought down to zero. The second is that multiple optimal control variable settings become available to the experimenter. This situation requires a confidence region for the multiple-solution factor levels that provides proper simultaneous coverage, a requirement that has not been previously recognized in the literature. A simultaneous confidence region for such a locus of points not only provides a region of uncertainty about the solution, but also provides a statistical test of whether or not the points lie within the region of experimentation or a feasible region of operation. For the case where the number of control variables is greater than the number of noise variables, we show how to construct the critical values needed to maintain the simultaneous coverage rate. Two examples are provided to demonstrate the practical need to adjust the critical values for simultaneous coverage....
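For context, a common response-surface formulation for robust parameter design (a standard textbook model, not necessarily the exact model used in the paper) makes the multiple-solution phenomenon explicit:

    y(x, z) = \beta_0 + x'\beta + x' B x + z'\gamma + x'\Delta z + \varepsilon,
    \operatorname{Var}_z[y(x, z)] = (\gamma + \Delta' x)' \Sigma_z (\gamma + \Delta' x) + \sigma^2.

Setting \gamma + \Delta' x = 0 drives the noise-transmitted variance to zero; when the number of control variables dim(x) exceeds the number of noise variables dim(z), this linear system is underdetermined, so its solutions form a locus of control settings rather than a single point, which is why the confidence region must cover the whole locus simultaneously.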
This paper deals with the Bayes prediction of future failures of a deteriorating repairable mechanical system subject to minimal repairs and periodic overhauls. To model the effect of overhauls on the reliability of the system, a proportional age reduction model is assumed, and the two-parameter Engelhardt-Bain process (2-EBP) is used to model the failure process between two successive overhauls. The 2-EBP has an advantage over Power Law Process (PLP) models: it is found that the failure intensity of deteriorating repairable systems attains a finite bound when repeated minimal repair actions are combined with some overhauls. If such data are analyzed with models having an unbounded increasing failure intensity, such as the PLP, then pessimistic estimates of the system reliability arise and an incorrect preventive maintenance policy may be defined. On the basis of the observed data and of a number of suitable prior densities reflecting varied degrees of belief in the failure/repair process and the effectiveness of overhauls, the future failure times and the number of failures in a future time interval are predicted. Finally, a numerical application illustrates the advantages of overhauls, and a sensitivity analysis of the improvement parameter is carried out....
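For reference, the unbounded intensity against which the abstract contrasts the 2-EBP is the Power Law Process intensity (a standard form; the 2-EBP and proportional age reduction details follow the paper):

    \lambda_{\mathrm{PLP}}(t) = (\beta/\theta)(t/\theta)^{\beta - 1}, \qquad \beta > 1,

which increases without bound as t grows, whereas the 2-EBP failure intensity increases toward a finite asymptote, avoiding the pessimistic long-run reliability estimates mentioned above.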
The objective of this paper is to quantify the effect of autocorrelation coefficients, shift magnitude, types of control charts, types of controllers, and types of monitored signals on a control system. Statistical process control (SPC) and automatic process control (APC) were studied under non-stationary stochastic disturbances characterized by the integrated moving average model ARIMA(0,1,1). A process model was simulated to obtain two responses, mean squared error (MSE) and average run length (ARL), and a factorial design experiment was conducted to analyze the simulated results. The results revealed that not only the shift magnitude and the level of the autocorrelation coefficient, but also the interaction between these two factors, affected the performance of the integrated system. It was also found that the most appropriate combination of SPC and APC is the minimum mean squared error (MMSE) controller with the Shewhart moving range (MR) chart, while monitoring the control signal (X) from the controller. Integrating SPC and APC can therefore improve process manufacturing, but the performance of the integrated system is significantly affected by process autocorrelation. If the performance of the integrated system under non-stationary disturbances is correctly characterized, practitioners will have guidelines for achieving the highest possible performance when integrating SPC and APC....
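As a rough illustration of the kind of simulation involved (a minimal sketch under assumed parameter values, not the authors' simulator), the following generates an ARIMA(0,1,1) disturbance with a step shift, applies the MMSE (EWMA-forecast) adjustment, and monitors the control signal with a Shewhart moving range chart:

    import numpy as np

    def simulate_spc_apc(n=500, theta=0.6, shift=2.0, shift_time=250, seed=0):
        # IMA(1,1) disturbance: d_t = d_{t-1} + eps_t - theta * eps_{t-1}
        rng = np.random.default_rng(seed)
        eps = rng.normal(size=n)
        d = np.zeros(n)
        for t in range(1, n):
            d[t] = d[t - 1] + eps[t] - theta * eps[t - 1]
        d[shift_time:] += shift              # step shift in the process mean

        lam = 1.0 - theta                    # EWMA weight giving the MMSE forecast
        forecast, x, e = 0.0, np.zeros(n), np.zeros(n)
        for t in range(n):
            x[t] = -forecast                 # MMSE adjustment cancels the forecast
            e[t] = d[t] + x[t]               # output deviation from target
            forecast += lam * e[t]           # update the one-step-ahead forecast

        mse = np.mean(e ** 2)                # squared-error performance of the APC loop
        mr = np.abs(np.diff(x))              # moving ranges of the control signal X
        ucl = 3.267 * np.mean(mr)            # Shewhart MR chart limit (D4 * MR-bar)
        first_signal = int(np.argmax(mr > ucl)) + 1 if np.any(mr > ucl) else None
        return mse, first_signal

    print(simulate_spc_apc())

Repeating such runs over a grid of theta and shift values is the kind of factorial study the abstract describes.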
Since Lindley and Smith introduced the idea of a hierarchical prior distribution, a number of results have been obtained on hierarchical Bayesian methods for lifetime data. However, all of these results involve complicated integral computations; although computing methods such as Markov chain Monte Carlo (MCMC) are available, the integration remains inconvenient for practical problems. This paper introduces a new method, named the E-Bayesian estimation method, to estimate the failure probability. In the case of one hyperparameter, the definition of the E-Bayesian estimate of the failure probability is provided; moreover, the formulas for the E-Bayesian and hierarchical Bayesian estimates and a property of the E-Bayesian estimate of the failure probability are also provided. Finally, calculations on practical problems show that the proposed method is feasible and easy to perform....
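To make the idea concrete, here is a minimal sketch of an E-Bayesian estimate of a failure probability under an assumed Beta(1, b) conjugate prior with the hyperparameter b uniform on (1, c); the specific priors used in the paper may differ:

    from scipy.integrate import quad

    def e_bayes_failure_prob(r, n, c=4.0):
        # Bayes estimate (posterior mean) of p under a Beta(1, b) prior,
        # given r failures observed in n trials
        bayes = lambda b: (r + 1.0) / (n + 1.0 + b)
        # E-Bayesian estimate: expectation of the Bayes estimate over b ~ Uniform(1, c)
        value, _ = quad(lambda b: bayes(b) / (c - 1.0), 1.0, c)
        return value

    print(e_bayes_failure_prob(r=0, n=10))   # example with zero observed failures

The one-dimensional integral replaces the nested integration a full hierarchical Bayes analysis would require.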
The reliability function for a parallel system of two identical components is derived from a stress-strength model in which the failure of one component increases the stress on the surviving component of the system. The Maximum Likelihood Estimators of the parameters and their asymptotic distribution are obtained. Further, the Maximum Likelihood Estimator and the Bayes Estimator of the reliability function are obtained using data from a life-testing experiment. Computation of the estimators is illustrated through a simulation study....
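The reliability of such a load-sharing parallel system can also be checked by simulation; the sketch below assumes exponential strengths and stress and a stress-increase factor k, which are illustrative choices rather than the paper's model:

    import numpy as np

    def parallel_reliability_mc(lam=1.0, mu=0.5, k=2.0, n=100_000, seed=1):
        rng = np.random.default_rng(seed)
        s = rng.exponential(1.0 / lam, size=(n, 2))   # component strengths
        y = rng.exponential(1.0 / mu, size=n)         # common applied stress
        fail = s < y[:, None]                         # failures under the initial stress
        both_ok = ~fail.any(axis=1)
        one_fail = fail.sum(axis=1) == 1
        survivor = np.where(fail[:, 0], s[:, 1], s[:, 0])
        works = both_ok | (one_fail & (survivor > k * y))   # survivor carries increased stress
        return works.mean()                           # Monte Carlo estimate of reliability

    print(parallel_reliability_mc())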
This paper considers the estimation problem for the Fréchet distribution under progressive Type II censoring with random removals, where the number of units removed at each failure time has a binomial distribution. We use the maximum likelihood method to obtain estimators of the parameters, derive the sampling distributions of the estimators, and construct confidence intervals for the parameters and the percentiles of the failure time distribution....
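A sketch of the likelihood maximization under progressive Type II censoring is given below; the Fréchet parameterization F(x) = exp(-(sigma/x)^alpha) is standard, but the optimizer, starting values, and variable names are illustrative:

    import numpy as np
    from scipy.optimize import minimize

    def frechet_progressive_mle(x, r):
        # x: ordered observed failure times, r: numbers of units removed at each failure
        x, r = np.asarray(x, float), np.asarray(r, float)

        def negloglik(theta):
            a, s = np.exp(theta)                        # keep alpha, sigma positive
            z = (s / x) ** a
            logf = np.log(a) - np.log(s) + (a + 1) * np.log(s / x) - z   # log density
            logS = np.log1p(-np.exp(-z))                # log survival, log(1 - F(x))
            return -np.sum(logf + r * logS)             # progressive Type II log-likelihood

        res = minimize(negloglik, x0=np.zeros(2), method="Nelder-Mead")
        return np.exp(res.x)                            # (alpha_hat, sigma_hat)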
Although a failure reporting, analysis, and corrective action system (FRACAS) has two management perspectives, its tasks and the related information, previous research and applications have mainly focused on data management. This study develops a process-oriented FRACAS that supports the operation of failure-related activities. The development procedure is (1) to define the reporting and analysis tasks, (2) to define the information to be used at each task, and (3) to design a computerized business process model and set attributes such as durations, rules, and document types. The computerized FRACAS process can be activated in a business process management system (BPMS), which uses its enactment functions to deliver tasks to the proper workers, provide the necessary information, and signal abnormal task status (delay, incorrect delivery, cancellation). Through implementation of a prototype system, improvements are found in task automation, prevention of mis-operation, and real-time activity monitoring....
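As an illustration of what "attributes such as durations, rules, and document types" might look like when the FRACAS process is encoded for a BPMS, here is a hypothetical task schema; all names and fields are assumptions, not the study's actual model:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class FracasTask:
        name: str
        role: str                       # worker or role the BPMS routes the task to
        duration_days: int              # allowed duration before a delay alarm
        documents: List[str] = field(default_factory=list)
        rule: str = ""                  # routing or escalation rule

    failure_process = [
        FracasTask("Report failure", "Operator", 1, ["Failure report"]),
        FracasTask("Analyze cause", "Reliability engineer", 5, ["Analysis sheet"], "escalate if repeated"),
        FracasTask("Define corrective action", "Design engineer", 7, ["Corrective action form"]),
        FracasTask("Verify corrective action", "QA", 3, ["Verification record"], "close or reopen"),
    ]

    def delayed_tasks(tasks, elapsed_days):
        # flag tasks whose allowed duration is exceeded (the BPMS delay alarm)
        return [t.name for t, d in zip(tasks, elapsed_days) if d > t.duration_days]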