Frequency: Quarterly | E-ISSN: 2277-632X | P-ISSN: Awaited | Abstracted/Indexed in: Ulrich's International Periodical Directory, Google Scholar, SCIRUS, Genamics JournalSeek, EBSCO Information Services
Quality and reliability are characteristics sought in virtually every system, and statistics serves as a tool for assessing both. "Inventi Impact: Quality, Statistics & Reliability" has a mandate to provide publishing space for advancements in achieving quality, ensuring reliability, and developing statistical tools, across all disciplines. The journal is therefore multidisciplinary and invites research and review articles from academics as well as practicing professionals in all fields.
The Pareto distribution is a heavy-tailed distribution with many applications in the real world. The tail of the distribution is important, but the threshold of the distribution is difficult to determine in some situations. In this paper we consider two real-world examples with heavy-tailed observations, which lead us to propose a mixture truncated Pareto distribution (MTPD) and study its properties. We construct a cluster truncated Pareto distribution (CTPD) by using a two-point slope technique to estimate the MTPD from a random sample. We apply the MTPD and CTPD to the two examples and compare the proposed method with existing estimation methods. The results of log-log plots and goodness-of-fit tests show that the MTPD and the cluster estimation method produce very good fitting distributions with real-world data....
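As a minimal sketch of the kind of tail analysis this abstract describes (the paper's two-point slope technique and MTPD are not reproduced here), the following draws a Pareto sample by inverse-CDF sampling and estimates the tail index with the classical Hill estimator. All parameter values and names are illustrative assumptions.

```python
import math
import random

def pareto_sample(alpha, xm, n, rng):
    # Inverse-CDF sampling for Pareto(alpha, xm): X = xm / U**(1/alpha)
    return [xm / rng.random() ** (1.0 / alpha) for _ in range(n)]

def hill_estimator(data, k):
    # Tail-index estimate based on the k largest observations.
    xs = sorted(data, reverse=True)
    logs = [math.log(xs[i] / xs[k]) for i in range(k)]
    return k / sum(logs)

rng = random.Random(42)
sample = pareto_sample(alpha=1.5, xm=1.0, n=20000, rng=rng)
estimate = hill_estimator(sample, k=2000)  # should be near the true alpha of 1.5
```

On a log-log plot of the empirical survival function, a Pareto tail appears as a straight line whose slope is minus the tail index, which is what makes slope-based estimates like the one above natural for this family.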
One of the key issues in robust parameter design is to configure the controllable factors to minimize the variance due to noise variables. However, it can sometimes happen that the number of control variables is greater than the number of noise variables. When this occurs, two important situations arise. One is that the variance due to noise variables can be brought down to zero. The second is that multiple optimal control variable settings become available to the experimenter. A simultaneous confidence region for such a locus of points not only provides a region of uncertainty about such a solution, but also provides a statistical test of whether or not such points lie within the region of experimentation or a feasible region of operation. However, this situation requires a confidence region for the multiple-solution factor levels that provides proper simultaneous coverage. This requirement has not been previously recognized in the literature. In the case where the number of control variables is greater than the number of noise variables, we show how to construct critical values needed to maintain the simultaneous coverage rate. Two examples are provided as a demonstration of the practical need to adjust the critical values for simultaneous coverage....
Good estimates of the reliability of a system make use of test data and expert knowledge at all available levels. Furthermore, by integrating all these information sources, one can determine how best to allocate scarce testing resources to reduce uncertainty. Both of these goals are facilitated by modern Bayesian computational methods. We demonstrate these tools using examples that were previously solvable only through the use of ingenious approximations, and employ genetic algorithms to guide resource allocation....
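The Bayesian machinery mentioned in this abstract can be illustrated, in its simplest conjugate form, by a beta-binomial update of a component's pass/fail reliability. This is a generic sketch, not the paper's hierarchical model or its genetic-algorithm allocation scheme; the prior and test counts are assumed for illustration.

```python
def posterior_beta(a, b, successes, failures):
    # Conjugate update: a Beta(a, b) prior on reliability combined
    # with binomial pass/fail test data yields another Beta posterior.
    return a + successes, b + failures

def posterior_mean(a, b):
    # Mean of a Beta(a, b) distribution.
    return a / (a + b)

# A component passes 18 of 20 tests; start from a flat Beta(1, 1) prior.
a_post, b_post = posterior_beta(1, 1, successes=18, failures=2)
reliability_estimate = posterior_mean(a_post, b_post)  # 19/22
```

In a full system-level analysis, posteriors like this one at the component level are propagated up through the system structure, which is where the modern computational methods the abstract refers to become necessary.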
A four-parameter family of Weibull distributions is introduced, as an example of a more general class created along the lines of Marshall and Olkin (1997). Various properties of the distribution are explored and its usefulness in modelling real data is demonstrated using maximum likelihood estimates....
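The abstract does not give the four-parameter family itself, but the Marshall-Olkin (1997) construction it builds on adds a "tilt" parameter to an existing survival function. A hedged sketch of that basic transform applied to a two-parameter Weibull (three parameters in total, so one short of the paper's family) looks like this:

```python
import math

def weibull_sf(x, shape, scale):
    # Two-parameter Weibull survival function.
    return math.exp(-((x / scale) ** shape))

def mo_weibull_sf(x, shape, scale, tilt):
    # Marshall-Olkin transform of the Weibull survival function:
    # G(x) = tilt * S(x) / (1 - (1 - tilt) * S(x)), tilt > 0.
    # tilt == 1 recovers the ordinary Weibull.
    s = weibull_sf(x, shape, scale)
    return tilt * s / (1.0 - (1.0 - tilt) * s)
```

The transform preserves the requirements of a survival function (it equals 1 at zero and decreases to 0) for any tilt > 0, which is what makes it a convenient device for generating new distribution families.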
We recall and study some properties of a known functional operating on the set of n-copulas and determine conditions under which this functional is well defined on the set of n-quasi-copulas. As a consequence, new families of copulas and quasi-copulas are defined, and we illustrate our results with several examples....
This work proposes a new methodology for the management of event tree information used in the quantitative risk assessment of complex systems. The size of event trees increases exponentially with the number of system components and the number of states that each component can be found in. Their reduction to a manageable set of events can facilitate risk quantification and safety optimization tasks. The proposed method launches a deductive exploitation of the event space to generate reduced event trees for large multistate systems. The approach consists of the simultaneous treatment of large subsets of the tree, rather than focusing on the given single components of the system and getting trapped into guesses on their structural arrangement....
A new load-share reliability model of systems under a changeable load is proposed in the paper. It is assumed that the load is a piecewise smooth function, which can be regarded as an extension of the piecewise constant and continuous functions. The condition of residual lifetime conservation, which means continuity of the cumulative distribution function of time to failure, is accepted in the proposed model. A general algorithm for computing reliability measures is provided. Simple expressions for determining the survivor functions under the assumption of a Weibull probability distribution of time to failure are given. Various numerical examples illustrate the proposed model with different forms of the system load and different probability distributions of time to failure....
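The paper's exact survivor-function expressions are not reproduced in the abstract, but the residual-lifetime-conservation condition it names can be sketched for the simplest case: a single load change under a Weibull lifetime, where the second phase restarts at a virtual age chosen so the survivor function stays continuous. The functional form and parameter values below are assumptions for illustration, not the paper's model.

```python
import math

def weibull_sf(t, shape, scale):
    # Weibull survivor (reliability) function.
    return math.exp(-((t / scale) ** shape))

def piecewise_load_sf(t, shape, scale1, scale2, tau):
    # Survivor function when the Weibull scale changes from scale1 to
    # scale2 at time tau.  Continuity (residual lifetime conservation)
    # is kept by restarting the second phase at the virtual age v that
    # satisfies S2(v) == S1(tau), i.e. v = tau * scale2 / scale1.
    if t <= tau:
        return weibull_sf(t, shape, scale1)
    v = tau * scale2 / scale1
    return weibull_sf(t - tau + v, shape, scale2)
```

At the switch time the two branches agree exactly, so the implied cumulative distribution function of time to failure has no jump, which is the continuity condition the model requires.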
Effective control of rehabilitation robots is of paramount importance and requires increased attention to achieve a fully reliable, automated system for practical applications. As the domain of robotic rehabilitation progresses rapidly, the imperative for precise and dependable control mechanisms grows. In this study, we present an innovative control scheme integrating state-of-the-art machine learning algorithms with traditional control techniques. Our approach offers enhanced adaptability to patient-specific needs while ensuring safety and effectiveness. We introduce a model-free feedback linearization control method underpinned by deep neural networks and online observation. Although our controller is model-free and system dynamics are learned during training, we employ an online observer to robustly estimate uncertainties that the system may face in real time, beyond its training. The proposed technique was tested through different simulations with varying initial conditions and step references, demonstrating the controller's robustness and adaptability. These simulations, combined with Lyapunov stability verification, validate the efficacy of our proposed scheme in effectively controlling the system under diverse conditions....
The multistage queue model was developed for a situation where parallel and unrelated queues exist at the first stage only. These queues merge into a single queue at each of the remaining stages. The parallel queues offer services that differ from one another, and customers arrive to join the queue that offers the service they need. The mathematical model was developed assuming an M/M/1 queue system, and the measures of effectiveness were derived. The model was applied to solve the problem of customer congestion in a restaurant in the city of Ibadan, Nigeria, that serves three different local delicacies. The three local delicacies constitute three different queues at the first stage. The second stage consists of only one queue, for the purchase of drinks, and the third and last stage is for payment. Every customer in the restaurant passes through the three stages. Utilization factors for the five queues were determined and found to range from 70% to 97%. The average time spent by customers in the system was found to be 543.04 minutes. A simulation study using what-if scenario analysis was performed to determine the optimum service configuration for the system. The optimum configuration reduced the average time customers spend in the system from 543.04 minutes to 13.47 minutes without hiring new servers....
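For readers unfamiliar with the M/M/1 measures of effectiveness this abstract relies on, the standard steady-state formulas can be sketched as follows. The arrival and service rates used here are illustrative, not the restaurant's actual data.

```python
def mm1_measures(lam, mu):
    # Standard M/M/1 steady-state measures; requires lam < mu.
    rho = lam / mu                   # utilization factor
    L = rho / (1.0 - rho)            # mean number in the system
    Lq = rho ** 2 / (1.0 - rho)      # mean number waiting in queue
    W = 1.0 / (mu - lam)             # mean time in the system
    Wq = rho / (mu - lam)            # mean waiting time in queue
    return {"rho": rho, "L": L, "Lq": Lq, "W": W, "Wq": Wq}

# Example: 9 arrivals/hour served at 10/hour gives rho = 0.9,
# so the queue is busy 90% of the time.
measures = mm1_measures(9.0, 10.0)
```

Note how sharply the time in system grows as utilization approaches 1 (here W = 1 hour at rho = 0.9); this nonlinearity is why the what-if reconfiguration in the study can cut waiting time so dramatically without adding servers.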
Although many algorithms have been proposed to mitigate air turbulence in optical videos, there do not seem to be consistent blind video quality assessment metrics that can reliably assess the different approaches. Blind video quality assessment metrics are necessary because many videos containing air turbulence do not have ground truth. In this paper, a simple and intuitive blind video quality assessment metric is proposed. This metric can reliably and consistently assess various turbulence mitigation algorithms for optical videos. Experimental results using more than 10 videos from the literature show that the proposed metric correlates well with human subjective evaluations. Compared with an existing blind video metric and two other blind image quality metrics, the proposed metric performed consistently better....