Abstract

This special issue of Concurrency and Computation: Practice and Experience contains selected high-quality papers from the 2011 International Conference on Cloud and Green Computing (CGC2011), held on December 11–13, 2011 in Sydney, Australia [1]. The CGC conference series aims to provide an international forum for the presentation and discussion of research and development trends in cloud and green computing. CGC2011 attracted many international attendees, enabling in-depth discussion and exchange of ideas and results on ongoing research. Many research and development efforts have been made in the field of cloud and green computing [2-11], and researchers from different areas are increasingly applying techniques from their respective fields to tackle challenging issues such as resource scheduling, security and privacy, service provision, power-aware computation and storage, and data service querying.

This special issue gathers papers from a range of perspectives and areas to offer different views on, and directions for, cloud and green computing research. It contains eight papers [12-19] based on work presented at CGC2011. Each paper analyzes its research problem systematically and, where a specific approach or model is proposed, evaluates it to demonstrate its feasibility and advantages. The papers were selected on this basis and were thoroughly peer reviewed. They are summarized below.

Paper [12] develops an adaptive service selection method for cross-cloud service composition that dynamically selects suitable services with near-optimal performance to adapt to changes over time. A case study demonstrates its performance. Paper [13] seeks to identify the role of contextual properties of enterprise systems architecture in service migration to cloud computing, pointing out that cloud computing requires consumers to relinquish their ownership of and control over most architectural elements to cloud providers. A simulation is conducted to evaluate the feasibility of the proposed approach. Paper [14] proposes an economic and energy-aware cloud cost model. The model supports decision making in business cases and enables cloud consumers and providers to define their own business strategies and analyze the respective impact on their business. Paper [15] focuses on latency in global cloud service provision, investigating whether latency in the form of simple ping measurements can be used as an indicator for other QoS parameters such as jitter and throughput. Corresponding experiments are conducted to assess this. Paper [16] presents a set of policies for multi-use clusters in which computers are shared between interactive users and high-throughput computing. The policies are evaluated through trace-driven simulations to determine their effect on the power consumed by the high-throughput workload and their impact on high-throughput users. The results demonstrate significant power savings under the proposed policies. Paper [17] designs an efficient co-scheduling strategy that schedules datasets and tasks together. Simulations conducted on the well-known Tianhe supercomputer platform demonstrate that the proposed strategy effectively improves workflow performance while reducing the total volume of data transferred across data centers. Paper [18] addresses the monitoring of resources with QoS in cloud environments, designing a heuristic QoS measurement built on a domain-based information model; experiments are conducted in an implemented portal, and details are presented in the paper. Paper [19] addresses privacy preservation for big data on the cloud, proposing a scalable, cost-effective framework for efficient privacy preservation. The motivation is that existing approaches do not consider large-scale cloud environments with distributed big data processing and therefore do not scale to such processing, which is a natural feature of big data. Corresponding details and an evaluation are presented.
