Frequency: Quarterly E-ISSN: 2277-8268 P-ISSN: Awaited Abstracted/Indexed in: CNKI Scholar (China National Knowledge Infrastructure), Ulrich's International Periodical Directory, Google Scholar, SCIRUS, getCITED, Genamics JournalSeek
"Inventi Rapid/Impact: Software Engineering" is a peer-reviewed journal under Engineering & Technology. It invites articles from academicians, practicing engineers, and self-taught developers of the new generation, including college dropouts. The journal aims to process manuscripts without regard to the academic credentials or affiliation of the author.
This study presents a technological approach for producing small manufacturing series of highly precise hyperboloid gears with a small tooth module and compact gear-mechanism dimensions. It is based on the application of the authors' mathematical models, algorithms, and computer programs for synthesis upon a pitch contact point and upon a mesh region. A special feature of the established approach is the application of 3D software prototyping and 3D printing of the designed transmissions. The presented models of transmissions with crossed axes and face-mated gears are intended for implementation in the drives of two types of robots: a bio-robot hand and a walking robot with four insect-type legs…
In our previous work, we proposed wavelet shrinkage estimation (WSE) for nonhomogeneous Poisson process (NHPP)-based software reliability models (SRMs), where WSE is a data-transform-based nonparametric estimation method. Among the many variance-stabilizing data transformations, the Anscombe transform and the Fisz transform were employed. Through numerical experiments with real software-fault count data, we showed that WSE can provide higher goodness-of-fit performance than conventional maximum likelihood estimation (MLE) and least squares estimation (LSE) in many cases, despite its nonparametric nature. With the aim of improving the estimation accuracy of WSE, in this paper we introduce three other data transformations to preprocess the software-fault count data and investigate the influence of the different data transformations on the estimation accuracy of WSE through goodness-of-fit tests…
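As an illustration of the preprocessing step this abstract refers to, here is a minimal sketch of the Anscombe variance-stabilizing transform applied to fault counts. The function names and sample data are our own, not taken from the paper, and the simple algebraic inverse shown is only one of several choices:

```python
import numpy as np

def anscombe(x):
    """Anscombe variance-stabilizing transform for Poisson counts.

    Maps Poisson-distributed counts to approximately unit-variance
    Gaussian data, the preprocessing step before wavelet shrinkage.
    """
    return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

def inverse_anscombe(y):
    """Direct algebraic inverse (slightly biased; unbiased inverses exist)."""
    return (np.asarray(y, dtype=float) / 2.0) ** 2 - 3.0 / 8.0

# Hypothetical software-fault counts per testing interval:
faults = np.array([5, 8, 3, 12, 7, 0, 2])
stabilized = anscombe(faults)       # shrinkage would be applied here
recovered = inverse_anscombe(stabilized)
```

After shrinking the wavelet coefficients of `stabilized`, the inverse transform maps the denoised signal back to the count scale.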
The advent of technology has opened unprecedented opportunities in the health care delivery system, as the demand for intelligent and knowledge-based systems has increased and modern medical practices have become more knowledge-intensive. As a result, there is a greater need to investigate the pervasiveness of software faults in safety-critical medical systems for proper diagnosis. The sheer volume of code in these systems creates significant concerns about the quality of the software. The rate of untimely deaths nowadays is alarming, partly due to the medical devices used to carry out the diagnosis process. A safety-critical medical (SCM) system is a complex system in which the malfunctioning of software could result in death or injury of the patient, or damage to the environment. The malfunctioning of the software could result from inadequate software testing due to test-suite problems or the oracle problem. Testing an SCM system poses great challenges to software testers. One of these challenges is the need to generate a limited number of test cases from a given regression test suite in a manner that does not compromise its defect detection ability. This paper presents a novel five-stage fault-based testing procedure for SCM systems, a model-based approach to generate test cases for the differential diagnosis of tuberculosis. We used Prime Path Coverage and Edge-Pair Coverage as coverage criteria to ensure maximum coverage and to identify feasible paths. We analyzed the proposed testing procedure with the help of three metrics: Fault Detection Density, Fault Detection Effectiveness, and Mutation Adequacy Score. We evaluated the effectiveness of our testing procedure by running the suggested test cases on sample historical data of tuberculosis patients.
The experimental results show that our testing procedure has advantages such as creating mutant graphs and a Fuzzy Cognitive Map engine, while resolving the problem of eliminating infeasible test cases for effective decision making…
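For readers unfamiliar with two of the metrics named above, the standard textbook definitions can be sketched as follows. These are the conventional formulas, not necessarily the exact variants used in the paper, and the sample numbers are invented:

```python
def mutation_adequacy_score(killed, total, equivalent=0):
    """Conventional mutation score: killed / (total - equivalent) mutants.

    Equivalent mutants behave identically to the original program and
    are excluded, since no test case can ever kill them.
    """
    live = total - equivalent
    if live <= 0:
        raise ValueError("no non-equivalent mutants to score against")
    return killed / live

def fault_detection_effectiveness(detected, seeded):
    """Fraction of the seeded (known) faults that the test suite detects."""
    return detected / seeded

# Hypothetical run: 50 mutants generated, 2 judged equivalent, 42 killed.
score = mutation_adequacy_score(killed=42, total=50, equivalent=2)  # 0.875
effectiveness = fault_detection_effectiveness(detected=3, seeded=4)  # 0.75
```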
Implementing artificial neural networks is commonly achieved via high-level programming languages such as Python and easy-to-use deep learning libraries such as Keras. These software libraries come preloaded with a variety of network architectures, provide autodifferentiation, and support GPUs for fast and efficient computation. As a result, a deep learning practitioner will favor training a neural network model in Python, where these tools are readily available. However, many large-scale scientific computation projects are written in Fortran, making it difficult to integrate them with modern deep learning methods. To alleviate this problem, we introduce a software library, the Fortran-Keras Bridge (FKB). This two-way bridge connects environments where deep learning resources are plentiful with those where they are scarce. The paper describes several unique features offered by FKB, such as customizable layers, loss functions, and network ensembles. The paper concludes with a case study that applies FKB to address open questions about the robustness of an experimental approach to global climate simulation, in which subgrid physics are outsourced to deep neural network emulators. In this context, FKB enables a hyperparameter search across more than one hundred candidate models of subgrid cloud and radiation physics, initially implemented in Keras, to be transferred to and used in Fortran. Such a process allows the model's emergent behavior to be assessed, i.e., when fit imperfections are coupled to explicit planetary-scale fluid dynamics. The results reveal a previously unrecognized strong relationship between offline validation error and online performance, in which the choice of optimizer proves unexpectedly critical. This in turn reveals many new neural network architectures that produce considerable improvements in climate model stability, including some with reduced error, for an especially challenging training dataset…
This paper presents a bottom-up approach for a multiview measurement of statechart size, topological properties, and internal structural complexity, for understandability prediction and assurance purposes. It tackles the problem at different conceptual depths, or equivalently at several abstraction levels. The main idea is to study and evaluate a statechart at different levels of granulation corresponding to different conceptual depth levels or levels of detail. The highest level corresponds to a flat process view diagram (depth = 0); the appropriate upper depth limit is determined by the modelers according to the inherent complexity of the problem under study and the level of detail required for the situation at hand (it corresponds to the all-states view). For measurement purposes, we proceed with a bottom-up strategy: starting with the all-states view diagram, we identify and measure its deepest composite-state constituent parts, then gradually collapse them to obtain the next intermediate view (decrementing the depth) while aggregating measures incrementally, until reaching the flat process view diagram. To this end, we first identify, define, and derive a relevant metrics suite useful for predicting the level of understandability and other quality aspects of a statechart, and then we propose a fuzzy rule-based system prototype for understandability prediction, assurance, and validation purposes…
This paper describes the development of an application for mobile devices on the iOS platform, with the objective of monitoring patients with alterations or conditions arising from cardiac pathologies. The software tool developed for mobile devices gives a patient and a specialist doctor the ability to manage and treat the disease remotely, with monitoring through the technique of non-contact photoplethysmography (PPG). The mobile application works by processing red, green, and blue (RGB) color video images of a specific region of the face, obtaining the intensity of the pixels in the green channel. The results are then processed using mathematical algorithms and the Fourier transform, moving from the time domain to the frequency domain to ensure proper interpretation and to obtain the pulses per minute (PPM). The results are favorable: a comparison was made against a medical-grade pulse oximeter, and an error rate of 3% was obtained, indicating the acceptable performance of our application. The present technological development provides an application tool with significant potential in the area of health…
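The time-to-frequency step described in this abstract can be illustrated with a short sketch: extract the dominant frequency of a green-channel trace with an FFT and convert it to pulses per minute. The band limits, function name, and synthetic signal are our own illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def ppm_from_green_channel(green, fps):
    """Estimate pulses per minute from a mean-green-channel trace.

    green: 1-D array, mean green intensity of the face ROI per frame.
    fps:   camera frame rate in frames per second.
    """
    g = np.asarray(green, dtype=float)
    g = g - g.mean()                      # remove the DC component
    spectrum = np.abs(np.fft.rfft(g))
    freqs = np.fft.rfftfreq(g.size, d=1.0 / fps)
    # Keep only a plausible heart-rate band (0.7-4 Hz, i.e. 42-240 BPM).
    band = (freqs >= 0.7) & (freqs <= 4.0)
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak * 60.0                    # Hz -> pulses per minute

# Synthetic 30 fps trace carrying a 1.2 Hz (72 PPM) pulse:
fps = 30.0
t = np.arange(0, 10, 1.0 / fps)
signal = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 100.0
print(round(ppm_from_green_channel(signal, fps)))  # prints 72
```

A real pipeline would also need face-ROI tracking and detrending; the FFT step itself is this small.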
With the rapid development and wide application of multimedia technology, the demand for the actual development of multimedia software in many industries is increasing. How to measure and improve the quality of multimedia software is an important problem that urgently needs to be solved. To handle the complexity and fuzziness of software quality, this paper introduces a software quality evaluation model based on the fuzzy matter element, using the method known as fuzzy matter-element analysis combined with the TOPSIS method and the close degree. Compared with existing typical software measurement methods, the results are basically consistent with the typical software measurement results. Then, the Pearson simple correlation coefficient was used to analyse the correlation between the four existing measurement methods and the metric of practical experience; the results show that software quality measures based on the fuzzy matter element are more in accordance with practical experience. Meanwhile, the results of this method are much more precise than those of the other measurement methods…
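The TOPSIS step the abstract combines with fuzzy matter-element analysis ranks alternatives by closeness to an ideal solution. A minimal standalone sketch of plain TOPSIS follows; the criteria, weights, and scores are invented for illustration and the fuzzy matter-element part is not shown:

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives by TOPSIS closeness to the ideal solution.

    matrix:  alternatives x criteria score matrix.
    weights: criterion weights (summing to 1).
    benefit: True for benefit criteria, False for cost criteria.
    """
    m = np.asarray(matrix, dtype=float)
    norm = m / np.linalg.norm(m, axis=0)          # vector normalization
    v = norm * np.asarray(weights, dtype=float)   # weighted normalized matrix
    benefit = np.asarray(benefit)
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)     # distance to ideal
    d_neg = np.linalg.norm(v - anti, axis=1)      # distance to anti-ideal
    return d_neg / (d_pos + d_neg)                # closeness in [0, 1]

# Hypothetical scores for three software products on
# (functionality, reliability, defect density) - the last is a cost.
scores = [[8, 7, 0.2], [6, 9, 0.1], [9, 5, 0.4]]
closeness = topsis(scores, [0.4, 0.4, 0.2], [True, True, False])
```

The alternative with the largest closeness value ranks best.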
Software trustworthiness is an important research field in software engineering. In order to evaluate it appropriately, several different measurement approaches have been proposed, which have important guiding significance for improving software trustworthiness. Recently, we have investigated attribute-based approaches: that is, how to maximize the trustworthy degree of software satisfying a given threshold by adjusting every attribute value such that the cost is minimal, i.e., the sum of all attribute values is as small as possible. This work helps improve software quality at the same cost. This paper continues that work and considers a reallocation approach to dealing with the problem in which the threshold and the minimal constraints on every attribute value dynamically increase. In this process, the costs of trustworthiness improvement should be kept minimal. For this purpose, we first define a reallocation model by mathematical programming. Then we introduce the notion of a growth function. Based on this, a polynomial reallocation algorithm is designed to solve the reallocation model. Finally, we verify our work on spacecraft software, and the results show that the approach is valid…
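To make the optimization flavor of this abstract concrete, here is a toy version of the cost-minimal allocation idea: minimize the sum of attribute values subject to a weighted trustworthiness threshold and per-attribute lower bounds. This is a simplified linear model of our own devising, not the paper's reallocation model or its polynomial algorithm:

```python
def minimal_cost_allocation(weights, lower, threshold):
    """Minimize sum(x) subject to sum(w_i * x_i) >= threshold, x_i >= lower_i.

    With positive weights, the cheapest unit of trustworthiness comes from
    the attribute with the largest weight, so raising only that attribute
    above its lower bound is optimal for this linear toy model.
    """
    x = list(lower)
    achieved = sum(w * v for w, v in zip(weights, x))
    if achieved < threshold:
        j = max(range(len(weights)), key=lambda i: weights[i])
        x[j] += (threshold - achieved) / weights[j]
    return x

# Hypothetical attribute weights, lower bounds, and raised threshold:
allocation = minimal_cost_allocation([0.5, 0.3, 0.2], [1.0, 1.0, 1.0],
                                     threshold=2.0)
```

When the threshold later increases again, rerunning the function with the previous allocation as the new lower bounds mimics the reallocation setting.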
This paper describes recent developments in the performance of the Siebog agent middleware. This middleware supports both server-side and client-side agents. Server-side agents exist as EJB session beans on a JavaEE application server, while client-side agents exist as JavaScript Worker objects in the browser. Siebog employs enterprise technologies on the server side to provide automatic agent load-balancing and fault-tolerance. On the client side, this distributed architecture relies on HTML5 and related standards to support smooth running on a wide variety of hardware and software platforms. Such an architecture supports rather easy, reliable, and efficient communication, interaction, and coexistence between numerous agents. With automatic clustering and state persistence, Siebog can support thousands of server-side agents, as well as thousands of external devices hosting tens of client-side agents. The experiments performed and presented here show promising results for real-life applications of our architecture…
With the increasing application of advanced video coding (H.264/AVC) in the multimedia field, research on video watermarking based on this video compression standard has gained great significance. We propose herein a semifragile video watermarking algorithm that can simultaneously implement frame-attack and video-tamper detection. In this paper, the frame number is selected as the watermark information, and the relationship among the discrete cosine transform (DCT) nonzero coefficients is used as the authentication code. The 4 × 4 subblocks whose DCT nonzero coefficients are sufficiently complex are selected to embed the watermark. The parities of these nonzero coefficients in the medium frequencies are modulated to embed the watermarks. The experimental results show that the visual quality of the watermarked video is virtually unaffected and that the algorithm exhibits good robustness. Furthermore, the algorithm can correctly implement frame-attack and video-tamper detection…
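The parity-modulation idea in this abstract can be sketched generically: force the parity of a mid-frequency quantized DCT coefficient to carry one watermark bit, then read the bit back from that parity. The coefficient position, block values, and nudging rule here are illustrative assumptions, not the paper's exact embedding scheme:

```python
import numpy as np

def embed_bit(coeffs, bit, pos=(1, 2)):
    """Embed one watermark bit in a 4x4 block of quantized DCT coefficients
    by forcing the parity of one mid-frequency coefficient.

    coeffs: 4x4 integer array of quantized DCT coefficients.
    bit:    0 or 1; encoded as |coefficient| mod 2.
    pos:    which mid-frequency coefficient to modulate (illustrative choice).
    """
    block = np.asarray(coeffs).copy()
    c = int(block[pos])
    if (abs(c) % 2) != bit:
        # Nudge the magnitude by one so the parity matches the bit,
        # moving away from zero so the coefficient stays nonzero.
        c += 1 if c > 0 else -1
        block[pos] = c
    return block

def extract_bit(coeffs, pos=(1, 2)):
    """Recover the watermark bit as the coefficient's parity."""
    return abs(int(np.asarray(coeffs)[pos])) % 2

# Hypothetical quantized 4x4 DCT block:
block = np.array([[16, 9, 4, 0],
                  [8, 5, -2, 0],
                  [3, -1, 0, 0],
                  [1, 0, 0, 0]])
marked = embed_bit(block, 1)
```

The real algorithm additionally selects sufficiently complex blocks and pairs the parities with an authentication code; this sketch shows only the bit-level mechanism.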