Abstract

Large-scale task processing for big data based on cloud computing has become a research hotspot. Many traditional single-domain task processing approaches based on cloud computing have been presented, but they are limited by the type, price, and storage location of the substrate resources. Motivated by this, a large-scale task processing approach for big data across multiple domains is proposed in this work. Because task processing across multiple domains still suffers from serious computation and data-transmission overheads, a virtual network mapping algorithm for multi-domain environments based on multi-objective particle swarm optimization is proposed. Building on Pareto dominance theory, a fast non-dominated selection method produces the set of optimal virtual network mapping schemes, and a crowding-degree comparison selects the final mapping scheme, which promotes load balancing and minimizes the bandwidth cost of data transmission. A Cauchy mutation is introduced to accelerate the convergence of the algorithm, so that large-scale tasks are processed efficiently. Experimental results show that the proposed approach effectively reduces the additional consumption of computing and bandwidth resources and greatly decreases the task processing time.
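
The selection step described above can be illustrated with a short sketch. The Python snippet below is a minimal, illustrative example and not the authors' implementation: it assumes each candidate virtual network mapping scheme has already been scored on two objectives to be minimized, taken here to be total bandwidth cost and a load-imbalance measure, and shows Pareto-dominance-based non-dominated selection followed by a crowding-degree comparison to pick the final scheme. All names and numbers are assumptions for illustration.

```python
# Minimal sketch (assumed encoding, not the paper's implementation):
# each candidate mapping scheme carries an objective vector
# (bandwidth_cost, load_imbalance), both to be minimized.

def dominates(a, b):
    """Pareto dominance: a is no worse than b in every objective and
    strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(schemes):
    """Keep only the schemes whose objective vectors are not dominated by
    any other candidate, i.e. the optimal mapping scheme set."""
    return [s for i, s in enumerate(schemes)
            if not any(dominates(o[1], s[1])
                       for j, o in enumerate(schemes) if j != i)]

def crowding_distance(front):
    """Crowding degree of each scheme in the front; a larger value means
    the scheme lies in a sparser region of objective space."""
    n, n_obj = len(front), len(front[0][1])
    dist = [0.0] * n
    for m in range(n_obj):
        order = sorted(range(n), key=lambda i: front[i][1][m])
        dist[order[0]] = dist[order[-1]] = float("inf")
        span = front[order[-1]][1][m] - front[order[0]][1][m] or 1.0
        for k in range(1, n - 1):
            dist[order[k]] += (front[order[k + 1]][1][m]
                               - front[order[k - 1]][1][m]) / span
    return dist

# Toy usage with hypothetical (bandwidth cost, load imbalance) scores.
candidates = [("map_a", (12.0, 0.40)), ("map_b", (9.0, 0.55)),
              ("map_c", (15.0, 0.70)), ("map_d", (9.5, 0.38)),
              ("map_e", (8.5, 0.60))]
front = non_dominated(candidates)          # Pareto-optimal scheme set
dists = crowding_distance(front)
final = front[dists.index(max(dists))]     # least crowded scheme is preferred
print(front, final)
```

In this toy run, dominated schemes such as map_a and map_c are discarded, and the crowding-degree comparison then favors a scheme in a sparse region of the objective space.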

Highlights

  • The rapid development of big data[1,2] has brought many opportunities and challenges in recent years.

  • We present a novel method for quickly obtaining the optimal virtual network mapping scheme set, built on a fast and effective non-dominated selection method based on Pareto dominance theory.[11]

  • In this article, aiming to process large-scale tasks effectively, we present a virtual network mapping algorithm for multi-domain environments based on multi-objective particle swarm optimization (see the sketch after this list).
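
As a companion to the highlight above, the following minimal Python sketch shows one particle swarm optimization update step with the Cauchy mutation mentioned in the abstract. The continuous [0, 1] encoding of a mapping and all parameter values (inertia weight, learning factors, mutation rate, scale) are illustrative assumptions, not taken from the paper.

```python
import math
import random

def pso_step(position, velocity, pbest, gbest,
             w=0.7, c1=1.5, c2=1.5, mutation_rate=0.2, scale=0.1):
    """One particle update for an assumed continuous encoding of a virtual
    network mapping: standard PSO velocity/position update followed by a
    heavy-tailed Cauchy mutation that helps the swarm escape local optima
    and converge faster. All parameter values are illustrative."""
    new_x, new_v = [], []
    for x, v, pb, gb in zip(position, velocity, pbest, gbest):
        v = w * v + c1 * random.random() * (pb - x) + c2 * random.random() * (gb - x)
        x = x + v
        if random.random() < mutation_rate:
            # Standard Cauchy sample via the inverse CDF: tan(pi * (u - 0.5))
            x += scale * math.tan(math.pi * (random.random() - 0.5))
        new_v.append(v)
        new_x.append(min(1.0, max(0.0, x)))  # keep the encoding in [0, 1]
    return new_x, new_v

# Toy usage with a 4-dimensional particle.
x, v = [0.3, 0.7, 0.1, 0.5], [0.0] * 4
pbest, gbest = [0.4, 0.6, 0.2, 0.5], [0.5, 0.5, 0.3, 0.4]
print(pso_step(x, v, pbest, gbest))
```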


Introduction

The rapid development of big data[1,2] has brought many opportunities and challenges in recent years. Analyzing and handling large-scale tasks has become one of the central problems in big data, and large-scale task processing approaches based on the cloud computing environment[3,4] are an important way to solve it. A fast and efficient task processing approach enables the reasonable allocation of tasks and the effective utilization of resources, ensuring that the task processing nodes in the cloud computing environment work cooperatively. Under normal circumstances, load imbalance is likely to appear during task processing because of the limitations of certain traditional approaches and the performance variation of individual resources in a heterogeneous distributed system environment[5], which seriously affects the overall performance. A fast and efficient task processing approach is therefore crucial to solving the large-scale task processing problem.
