Frequency: Quarterly E-ISSN: 2230-813X P-ISSN: 2249-1309 Abstracted/Indexed in: Ulrich's International Periodical Directory, Google Scholar, SCIRUS, Genamics JournalSeek, EBSCO Information Services
Published quarterly in print and online, "Inventi Impact: Cloud Computing" publishes high-quality unpublished as well as high-impact pre-published research and reviews catering to the needs of researchers and professionals. The journal covers advances across the growing field of cloud computing. Articles pertaining to the following areas are particularly welcome: cloud architectures, infrastructures and workflows; cloud storage and data distribution; cloud messaging and database systems; virtual containers and portable applications; cloud security, trust and governance; migrating between grids and clouds; service-oriented cloud architectures; evaluation of private, public and hybrid clouds and cloud bursting; cloud ecosystems; etc.
Courseware must work. In this work, we verify the synthesis of 128-bit architectures, which embodies the intuitive principles of hardware and architecture. Our objective here is to set the record straight. In order to realize this objective, we disprove not only that expert systems and evolutionary programming can agree to answer this riddle, but that the same is true for superpages [1]....
Increasingly, infrastructure providers are supplying the cloud marketplace with storage and on-demand compute resources to host cloud applications. From an application user's point of view, it is desirable to identify the most appropriate set of available resources on which to execute an application. Resource choice can be complex and may involve comparing available hardware specifications, operating systems, value-added services (such as network configuration or data replication) and operating costs (such as hosting cost and data throughput). Providers' cost models often change, and new commodity cost models (such as spot pricing) can offer significant savings. In this paper, a software abstraction layer is used to discover the most appropriate infrastructure resources for a given application by applying a two-phase constraints-based approach to a multi-provider cloud environment. In the first phase, a set of possible infrastructure resources is identified for the application. In the second phase, a suitable heuristic is used to select the most appropriate resources from the initial set. For some applications a cost-based heuristic may be most appropriate; for others a performance-based heuristic may be of greater relevance. A financial services application and a high performance computing application are used to illustrate the execution of the proposed resource discovery mechanism. The experimental results show that the proposed model can dynamically select appropriate resources for an application's requirements....
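A minimal Python sketch of the two-phase idea described in this abstract, not the authors' implementation: phase 1 filters candidate offers against hard constraints, phase 2 ranks the survivors with a pluggable heuristic (cost-based or performance-based). All field names and sample offers are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Offer:
    provider: str
    cores: int
    ram_gb: int
    cost_per_hour: float
    benchmark_score: float  # higher means faster

def discover(offers: List[Offer],
             min_cores: int, min_ram_gb: int,
             rank: Callable[[Offer], float]) -> List[Offer]:
    # Phase 1: keep only offers that satisfy the application's constraints.
    feasible = [o for o in offers if o.cores >= min_cores and o.ram_gb >= min_ram_gb]
    # Phase 2: order the feasible set by the chosen heuristic (lower key is better).
    return sorted(feasible, key=rank)

offers = [
    Offer("A", 8, 32, 0.40, 95.0),
    Offer("B", 16, 64, 0.90, 180.0),
    Offer("C", 4, 16, 0.20, 50.0),
]
cheapest = discover(offers, min_cores=8, min_ram_gb=32, rank=lambda o: o.cost_per_hour)
fastest = discover(offers, min_cores=8, min_ram_gb=32, rank=lambda o: -o.benchmark_score)
print(cheapest[0].provider, fastest[0].provider)
```

Swapping the rank callable is what lets the same discovery mechanism serve a cost-sensitive financial application and a performance-sensitive HPC application.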
Aiming at problems such as slow training speed, poor prediction performance, and unstable detection results of traditional anomaly detection algorithms, a data mining method for anomaly detection based on a deep variational dimensionality reduction model and MapReduce (DMAD-DVDMR) in a cloud computing environment is proposed. First, the data are preprocessed by a dimensionality reduction model based on deep variational learning; while preserving as much of the information in the data as possible, the dimensionality of the data is reduced and the computational burden is lowered. Second, the data set stored on the Hadoop Distributed File System (HDFS) is logically divided into several data blocks, and the blocks are processed in parallel following the MapReduce principle, so the k-distance and LOF value of each data point need only be calculated within its block. Third, based on stochastic gradient descent, the concept of the k-neighboring distance is redefined, thus avoiding the situation where a data set containing k or more repeated points yields infinite local density. Finally, compared with the CNN, DeepAnt, and SVM-IDS algorithms, the accuracy of the scheme is increased by 10.3%, 18.0%, and 17.2%, respectively. The experimental data set verifies the effectiveness and scalability of the proposed DMAD-DVDMR algorithm....
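The following Python sketch illustrates the block-parallel LOF idea only, under simplifying assumptions: the data set is split into blocks (standing in for HDFS splits), k-distance and LOF are computed within each block, and zero-distance duplicates are skipped so the local density stays finite. The data, the value of k, and the block count are made up; the deep variational dimensionality reduction step is omitted.

```python
import numpy as np

def lof_scores(block: np.ndarray, k: int = 3) -> np.ndarray:
    # pairwise distances within this block only (the "map" task's local view)
    d = np.linalg.norm(block[:, None, :] - block[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    n = len(block)
    kdist = np.empty(n)
    knn = []
    for i in range(n):
        # skip zero-distance duplicates so the local density cannot become infinite
        cand = np.where((d[i] > 0) & np.isfinite(d[i]))[0]
        order = cand[np.argsort(d[i][cand])][:k]
        knn.append(order)
        kdist[i] = d[i][order[-1]]                        # k-distance of point i
    lrd = np.empty(n)
    for i in range(n):
        reach = np.maximum(kdist[knn[i]], d[i][knn[i]])   # reachability distances
        lrd[i] = 1.0 / (reach.mean() + 1e-12)             # local reachability density
    return np.array([lrd[knn[i]].mean() / lrd[i] for i in range(n)])

rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0.0, 1.0, (200, 2)), [[8.0, 8.0]]])  # one obvious outlier
blocks = np.array_split(data, 4)                                   # stand-in for HDFS splits
scores = np.concatenate([lof_scores(b) for b in blocks])           # per-block "map" work
print("largest LOF score:", round(float(scores.max()), 2))
```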
Considering the widespread use of mobile devices and the increased performance requirements of mobile users, shifting the complex\ncomputing and storage requirements of mobile terminals to the cloud is an effective way to solve the limitation of mobile terminals,\nwhich has led to the rapid development of mobile cloud computing. How to reduce and balance the energy consumption of mobile\nterminals and clouds in data transmission, as well as improve energy efficiency and user experience, is one of the problems that green\ncloud computing needs to solve. This paper focuses on energy optimization in the data transmission process of mobile cloud\ncomputing. Considering that the data generation rate is variable, because of the instability of the wireless connection, combined with\nthe transmission delay requirement, a strategy based on the optimal stopping theory to minimize the average transmission energy of\nthe unit data is proposed. By constructing a data transmission queue model with multiple applications, an admission rule that is\nsuperior to the top candidates is proposed by using secretary problem of selecting candidates with the lowest average absolute\nranking. Then, it is proved that the rule has the best candidate. Finally, experimental results show that the proposed optimization\nstrategy has lower average energy per unit of data, higher energy efficiency, and better average scheduling period....
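As a rough illustration of the optimal-stopping flavour of this approach, the sketch below uses the classic secretary (1/e) cutoff as a stand-in for the paper's admission rule: observe the per-unit transmission energy of an initial batch of slots, then transmit in the first later slot that beats everything seen so far, falling back to the last slot when the delay deadline is reached. The energy values and slot counts are hypothetical.

```python
import math
import random

def choose_slot(energies):
    n = len(energies)
    cutoff = max(1, int(n / math.e))          # observation phase: never accept here
    best_seen = min(energies[:cutoff])
    for t in range(cutoff, n):
        if energies[t] < best_seen:           # first slot better than all observed
            return t
    return n - 1                              # delay requirement forces the last slot

random.seed(1)
energies = [random.uniform(0.5, 2.0) for _ in range(20)]   # J per unit data in each slot
t = choose_slot(energies)
print(f"transmit in slot {t}, energy {energies[t]:.2f} J/unit (best was {min(energies):.2f})")
```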
Cloud computing gives users much freedom over where they host their computation and storage. However, the CO2 emission of a job depends on the location and the energy efficiency of the data centers where it is run. We developed a decision framework that determines whether to move computation, with its accompanying data, from a local to a greener remote data center for lower CO2 emissions. The model underlying the framework accounts for the energy consumption at the local and remote sites, as well as of the networks between them. We showed that the type of network connecting the two sites has a significant impact on the total CO2 emission. Furthermore, the task's complexity is a factor in deciding when and where to move computation....
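A back-of-the-envelope sketch of the comparison such a framework makes: the CO2 of running locally versus running remotely plus moving the data over the network, whose energy cost depends on the link type. All energy figures and carbon intensities below are illustrative placeholders, not values from the paper.

```python
def co2_local(compute_kwh, local_gco2_per_kwh):
    return compute_kwh * local_gco2_per_kwh

def co2_remote(compute_kwh, remote_gco2_per_kwh,
               data_gb, network_kwh_per_gb, network_gco2_per_kwh):
    transfer_kwh = data_gb * network_kwh_per_gb   # network energy depends on the link type
    return compute_kwh * remote_gco2_per_kwh + transfer_kwh * network_gco2_per_kwh

local = co2_local(compute_kwh=12.0, local_gco2_per_kwh=600)
remote = co2_remote(compute_kwh=12.0, remote_gco2_per_kwh=150,
                    data_gb=100, network_kwh_per_gb=0.06, network_gco2_per_kwh=450)
print("move the job" if remote < local else "stay local",
      "| local:", local, "gCO2 | remote:", round(remote, 1), "gCO2")
```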
Mobile edge computing (MEC) has produced incredible outcomes in the context of computationally intensive mobile applications by offloading computation to a neighboring server to limit the energy usage of user equipment (UE). However, choosing which pool of application components to offload, given the volume of data transfer and the communication latency, is an intricate issue. In this article, we introduce a novel energy-efficient offloading scheme based on deep neural networks. The proposed scheme trains an intelligent decision-making model that picks a robust pool of application components. The selection is based on factors such as the remaining UE battery power, network conditions, the volume of data transfer, the energy required by the application components, communication delays, and computational load. We design a cost function that takes all the mentioned factors into account, compute the cost for all conceivable combinations of component offloading decisions, pick the robust decisions over an extensive dataset, and train a deep neural network as a substitute for the associated exhaustive computations. Model outcomes illustrate that our proposed scheme is proficient in terms of accuracy, root mean square error (RMSE), mean absolute error (MAE), and energy usage of UE....
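A minimal sketch of the labelling step only: enumerate every offload/keep decision for a handful of components, score each with a cost that mixes local energy, transfer energy, and latency, and keep the cheapest decision as the label a neural network would later learn to predict. The component figures, cost weights, and network energy rate are hypothetical, and the DNN training itself is not shown.

```python
from itertools import product

# (name, local_energy_J, data_to_send_MB, extra_latency_ms if offloaded)
components = [
    ("ui",      0.5,  1.0,  20),
    ("filter",  3.0,  8.0,  60),
    ("train",  12.0, 25.0, 150),
]

def cost(decision, net_j_per_mb=0.4, w_energy=1.0, w_latency=0.01):
    energy = latency = 0.0
    for offload, (_, local_j, data_mb, lat_ms) in zip(decision, components):
        if offload:
            energy += data_mb * net_j_per_mb   # pay for the transfer instead of local compute
            latency += lat_ms
        else:
            energy += local_j
    return w_energy * energy + w_latency * latency

# exhaustive search over every offload/keep combination, as used to build training labels
best = min(product([0, 1], repeat=len(components)), key=cost)
print("offload flags per component:", best, "cost:", round(cost(best), 2))
```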
This paper examines the challenges to collaboration among responding entities and proposes a technology-enabled self-synchronization framework for sharing information using a distributed, highly scalable web application based on cloud computing technologies. The proposed design applies the unique benefits of cloud computing to the disaster response domain. This notional design facilitates communication among a broad range of public and private groups without requiring these organizations to compromise competitive advantage. During disaster response, key resources are supplied through a variety of channels, including federal, state and local governments; charity and nongovernmental organizations; and commercial businesses. Disaster relief efforts require that response teams work together in a cohesive manner before, during, and after a disaster. During the first 48 to 72 hours, when both survivors and responders are likely to be disoriented, response efforts are less effective....
This paper presents a systematic review of the possible design space for cloud-hosted applications that may have changing resource requirements that need to be supported through dynamic service level agreements (SLAs). The fundamental SLA functions are reviewed: Admission Control, Monitoring, SLA Evaluation, and SLA Enforcement, a classic autonomic control cycle. This is followed by an investigation into possible application requirements and SLA enforcement mechanisms. We then identify five basic Load Types that a dynamic SLA system must manage: Best Effort, Throttled, Load Migration, Preemption and Spare Capacity. The key to meeting application SLA requirements under changing surge conditions is to also manage the spare surge capacity. The use of this surge capacity could be managed by one of several identified load migration policies. A more detailed SLA architecture is presented that discusses specific SLA components. This is done in the context of OpenStack, since it is open source with a known architecture. Based on this SLA architecture, a research and development plan is presented wherein fundamental issues are identified that need to be resolved through research and experimentation. Based on successful outcomes, further developments are considered in the plan to produce a complete, end-to-end dynamic SLA capability. Executing on this plan will take significant resources and organization. The NSF Center for Cloud and Autonomic Computing is one possible avenue for pursuing these efforts. Given the growing importance of cloud performance management in the wider marketplace, the cloud community would be well served to coordinate cloud SLA development across organizations such as the IEEE, Open Grid Forum, and the TeleManagement Forum....
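One way to picture the five Load Types is as a policy table consulted by the SLA Enforcement step once SLA Evaluation has detected a surge. The mapping from load type to action below is illustrative only; it is not taken from the paper or from OpenStack.

```python
from enum import Enum, auto

class LoadType(Enum):
    BEST_EFFORT = auto()
    THROTTLED = auto()
    LOAD_MIGRATION = auto()
    PREEMPTION = auto()
    SPARE_CAPACITY = auto()

def enforce(load_type: LoadType, surge: bool) -> str:
    # Enforcement step of the autonomic cycle; Monitoring and Evaluation run upstream.
    if not surge:
        return "no action"
    actions = {
        LoadType.BEST_EFFORT:    "degrade freely, nothing is guaranteed",
        LoadType.THROTTLED:      "cap the resource share at the contracted rate",
        LoadType.LOAD_MIGRATION: "migrate VMs to a host with spare capacity",
        LoadType.PREEMPTION:     "suspend lower-priority workloads",
        LoadType.SPARE_CAPACITY: "release reserved headroom to the surging tenant",
    }
    return actions[load_type]

for lt in LoadType:
    print(lt.name, "->", enforce(lt, surge=True))
```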
Market-oriented reverse auction is an efficient and cost-effective method for resource allocation in cloud workflow systems, since it can dynamically allocate resources depending on the supply-demand relationship of the cloud market. However, during the auction the price of a cloud resource is usually fixed, and current resource allocation mechanisms cannot adapt to the changeable market properly, which results in low efficiency of resource utilization. To address this problem, a dynamic pricing reverse auction-based resource allocation mechanism is proposed. During the auction, resource providers can change prices according to the trading situation, so that our novel mechanism can increase the chances of making a deal and improve the efficiency of resource utilization. In addition, resource providers can improve their competitiveness in the market by lowering prices, and thus users can obtain cheaper resources in a shorter time, which decreases the monetary cost and completion time of workflow execution. Experiments with different situations and problem sizes are conducted for the dynamic pricing-based allocation mechanism (DPAM) on resource utilization and the measurement of Time-Cost (TC). The results show that DPAM can outperform its representative counterpart in resource utilization, monetary cost, and completion time, and can also obtain the optimal price reduction rates....
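A toy sketch of a dynamic-pricing reverse auction round, under assumptions of my own: each provider quotes a price for a task, the user takes the cheapest quote, and providers that lost lower their quotes by a fixed reduction rate before the next round. The provider names, prices, and reduction rate are hypothetical and not the DPAM pricing rule itself.

```python
def run_auction(quotes, rounds=3, reduction_rate=0.1):
    history = []
    for _ in range(rounds):
        winner = min(quotes, key=quotes.get)            # user accepts the cheapest offer
        history.append((winner, round(quotes[winner], 2)))
        # providers that lost adjust their prices downward to stay competitive
        quotes = {p: (q if p == winner else q * (1 - reduction_rate))
                  for p, q in quotes.items()}
    return history

print(run_auction({"P1": 10.0, "P2": 9.0, "P3": 12.0}))
```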
One of the major benefits of cloud computing is the ability for users to access resources on a pay-as-you-go basis, thereby potentially reducing their costs and enabling them to scale applications rapidly. However, this approach does not necessarily benefit the provider. Providers have the responsibility of ensuring that they have the physical infrastructure to meet their users' demand and that their performance meets agreed service level agreements. Without an accurate view of future demand, planning for variable costs such as staff, replacement servers or coolers, and electricity supplies can be very difficult, and optimising the distribution of virtual machines presents a major challenge. Here, we explore an extension of an approach first proposed in a theoretical study by Wu, Zhang, & Huberman, which we refer to as the WZH model. The WZH model utilises a third-party intermediary, the Coordinator, who uses a variety of cloud assets to deliver resources to clients at a reduced price, while making a profit and assisting the provider(s) in resource forecasting. The Coordinator acts as a broker. Users purchase resources in advance from the broker using a form of financial derivative contract called an option. The broker uses the uptake of these options contracts to decide whether it should invest in buying resource access for an extended period; the resources can then subsequently be provided to clients who demand them. We implement an extension of the WZH model in an agent-based simulation, using asset classes and price levels directly modelled on currently available real-world data from markets relevant to cloud computing, for both service providers' provisioning and customers' demand patterns. We show that the broker profits in all market conditions simulated, and can increase her profit by up to 36% by considering past performance when deciding to invest in reserved instances. Furthermore, we show that the broker can increase profits by up to 33% by investing in 36-month instances rather than 12-month ones. By considering past performance and investing in longer-term reserved instances, the broker can increase her profit by up to 44% for the same market conditions....
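A highly simplified sketch in the spirit of the broker's reservation decision described above, not the WZH model or the authors' simulation: if the demand implied by option uptake, weighted by how often options were exercised in the past, makes a reserved instance cheaper than serving the same hours on demand, the broker buys the reservation. All prices, rates, and the capacity figure are invented for illustration.

```python
def should_reserve(expected_utilisation, on_demand_price,
                   reserved_upfront, reserved_hourly, term_hours):
    # cost of serving the expected resold hours from the on-demand market...
    on_demand_cost = expected_utilisation * term_hours * on_demand_price
    # ...versus paying a reservation up front plus its lower hourly rate
    reserved_cost = reserved_upfront + expected_utilisation * term_hours * reserved_hourly
    return reserved_cost < on_demand_cost

options_sold, capacity, past_exercise_rate = 80, 100, 0.7
expected_utilisation = min(1.0, options_sold / capacity) * past_exercise_rate
term_hours = 36 * 730                       # a 36-month reservation, ~730 hours per month
decision = should_reserve(expected_utilisation, on_demand_price=0.10,
                          reserved_upfront=1000.0, reserved_hourly=0.03,
                          term_hours=term_hours)
print("invest in the 36-month reserved instance" if decision else "stay on demand")
```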