Current Issue : April - June | Volume : 2011 | Issue Number : 2 | Articles : 4
The Deuteronomy system supports efficient and scalable ACID transactions in the cloud by decomposing the functions of a database storage engine kernel into: (a) a transactional component (TC) that manages transactions and their "logical" concurrency control and undo/redo recovery, but knows nothing about physical data location, and (b) a data component (DC) that maintains a data cache and uses access methods to support a record-oriented interface with atomic operations, but knows nothing about transactions. The Deuteronomy TC can be applied to data anywhere (in the cloud, local, etc.), with a variety of deployments for both the TC and DC. In this paper, we describe the architecture of our TC and the considerations that led to it. Preliminary experiments using an adapted TPC-W workload show good performance supporting ACID transactions across a wide range of DC latencies....
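The TC/DC split the abstract describes can be sketched roughly as below. The class and method names here are hypothetical illustrations of the architectural idea (logical logging in the TC, transaction-unaware atomic record operations in the DC), not Deuteronomy's actual interfaces:

```python
# Illustrative sketch of a TC/DC decomposition; names are hypothetical,
# not Deuteronomy's real API.

class DataComponent:
    """Maintains a data cache and applies atomic record operations.
    Knows nothing about transactions or logging."""
    def __init__(self):
        self.store = {}

    def read(self, key):
        return self.store.get(key)

    def write(self, key, value):
        self.store[key] = value  # atomic single-record update


class TransactionalComponent:
    """Performs logical concurrency control and undo/redo logging.
    Knows nothing about where the DC physically keeps the data."""
    def __init__(self, dc):
        self.dc = dc
        self.redo_log = []  # logical redo records

    def begin(self):
        return {"id": len(self.redo_log), "undo": []}

    def write(self, txn, key, value):
        txn["undo"].append((key, self.dc.read(key)))        # before-image for undo
        self.redo_log.append(("write", txn["id"], key, value))  # logical redo record
        self.dc.write(key, value)

    def abort(self, txn):
        for key, old in reversed(txn["undo"]):  # undo in reverse order
            self.dc.write(key, old)
```

Because the TC speaks to the DC only through the record-oriented `read`/`write` interface, the DC could sit locally or behind a network hop, which is the deployment flexibility the abstract refers to.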
Remote rural areas are constrained by the lack of a reliable power supply, which is essential for setting up advanced IT infrastructure such as servers or storage; cloud computing built on an Infrastructure-as-a-Service (IaaS) layer is therefore well suited to providing such IT infrastructure in remote rural areas. Additional cloud layers of Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS) can be added above IaaS. A cluster-based IaaS cloud can be set up using the open-source middleware Eucalyptus in the data centres of NIC. Data centres of the central and state governments can be integrated with the State Wide Area Networks and NICNET to form the e-governance grid of India. Web service repositories at the centre, state, and district levels can be built over this national e-governance grid. Using the Globus Toolkit, we can achieve stateful web services with speed and security. Adding a cloud layer over the e-governance grid will make a grid-cloud environment possible through Globus Nimbus. Service delivery can take the form of web services delivered through heterogeneous client devices. Data mining using Weka4WS and the DataMiningGrid can produce meaningful knowledge discovery from data. In this paper, a plan of action is provided for the implementation of the proposed architecture....
Background:
Chromatin immunoprecipitation (ChIP), coupled with massively parallel short-read sequencing (seq), is used to probe chromatin dynamics. Although there are many algorithms to call peaks from ChIP-seq datasets, most are tuned either to handle punctate sites, such as transcription factor binding sites, or broad regions, such as histone modification marks; few can do both. Other algorithms are limited in their configurability, performance on large data sets, and ability to distinguish closely-spaced peaks.
Results:
In this paper, we introduce PeakRanger, a peak-caller software package that works equally well on punctate and broad sites, can resolve closely-spaced peaks, has excellent performance, and is easily customized. In addition, PeakRanger can be run in a parallel cloud computing environment to obtain extremely high performance on very large data sets. We present a series of benchmarks evaluating PeakRanger against 10 other peak callers, and demonstrate the performance of PeakRanger on both real and synthetic data sets. We also present real-world usages of PeakRanger, including peak calling in the modENCODE project.
Conclusions:
Compared to the other peak callers tested, PeakRanger offers improved resolution in distinguishing extremely closely-spaced peaks. PeakRanger has above-average spatial accuracy in identifying the precise location of binding events. PeakRanger also has excellent sensitivity and specificity in all benchmarks evaluated. In addition, PeakRanger offers significant improvements in run time when running on a single-processor system, and very marked improvements when allowed to take advantage of the MapReduce parallel environment offered by a cloud computing resource. PeakRanger can be downloaded at the official site of the modENCODE project: http://www.modencode.org/software/ranger/...
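As a generic illustration of the peak-calling task the abstract describes, a minimal threshold-and-summit caller over a read-coverage track might look like the sketch below. This is a toy example of the general technique, not PeakRanger's actual algorithm:

```python
# Toy peak caller: find contiguous regions of a coverage track that exceed
# a threshold, and report each region's summit (position of maximum coverage).
# This illustrates the generic task only; it is NOT PeakRanger's algorithm.

def call_peaks(coverage, threshold):
    """Return a list of (start, end, summit) tuples, where [start, end) is a
    contiguous run with coverage > threshold and summit is the position of
    the maximum coverage within that run."""
    peaks = []
    start = None
    for i, c in enumerate(coverage + [0]):  # trailing sentinel closes an open peak
        if c > threshold and start is None:
            start = i                       # a new above-threshold region begins
        elif c <= threshold and start is not None:
            region = coverage[start:i]
            summit = start + region.index(max(region))
            peaks.append((start, i, summit))
            start = None
    return peaks
```

Distinguishing closely-spaced peaks, which the abstract highlights as PeakRanger's strength, is exactly where such a naive threshold scheme fails: two summits joined by above-threshold coverage are merged into one region.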
Background
Advanced technical systems and analytic methods promise to provide policy makers with information that helps them recognize the consequences of alternative courses of action during pandemics. Evaluations still show that response programs are insufficiently supported by information systems. This paper sets out to derive a protocol for the implementation of integrated information infrastructures supporting regional and local pandemic response programs at the stage when the outbreak can no longer be contained at its source.
Methods
Nominal group methods for reaching consensus on complex problems were used to transform requirements data obtained from international experts into an implementation protocol. The analysis was performed in a cyclical process in which the experts first individually provided input to working documents and then discussed them in conference calls. Argument-based representation in design patterns was used to define the protocol at the technical, system, and pandemic evidence levels.
Results
The Protocol for a Standardized information infrastructure for Pandemic and Emerging infectious disease Response (PROSPER) outlines the implementation of an information infrastructure aligned with pandemic response programs. The protocol covers analyses of the community at risk, the response processes, and the response impacts. For each of these, the protocol outlines the implementation of a supporting information infrastructure in hierarchical patterns ranging from technical components and system functions to pandemic evidence production.
Conclusions
The PROSPER protocol provides guidelines for the implementation of an information infrastructure for pandemic response programs, both in settings where sophisticated health information systems are already in use and in developing communities with limited access to financial and technical resources. The protocol is based on a generic health service model, and its functions are adjusted for community-level analyses of outbreak detection and progress, and of response program effectiveness. Scientifically grounded reporting principles need to be established for the interpretation of information derived from outbreak detection algorithms and predictive modeling....