Abstract

The ongoing COVID-19 pandemic has caused loss of life and severe economic damage, and social distancing remains the primary means of protection. To sustain the economy under these conditions, a large number of industries and businesses worldwide, including education, shipping, and training, have shifted their operations to the cloud. Cloud services must therefore provide reliable and secure service so that these businesses can continue to operate, and as a result the load on existing cloud infrastructure has increased drastically. It is the responsibility of the cloud provider to manage this load in order to maintain reliability and deliver high-quality service to users. Task allocation is one of the key mechanisms for optimizing the performance of cloud infrastructure. In this work, we propose a prediction-based technique that uses a pre-trained neural network to find a reliable resource for each task, drawing on prior training and on the history of the cloud and its performance, so as to optimize behavior in both overloaded and underloaded situations. The main aim of this work is to reduce faults and deliver high performance by reducing scheduling time, execution time, average start time, average finish time, and network load. The proposed model uses the Big Bang–Big Crunch (BB–BC) algorithm to generate large datasets for training the neural model, and the resulting BB–BC ANN model achieves 98% accuracy.
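To make the pipeline described above concrete, the sketch below shows one possible way such a prediction-based scheduler could be trained: an optimizer produces task-to-VM assignments that serve as labels, and a neural classifier learns to reproduce those decisions for fast online placement. This is a minimal illustration, not the paper's implementation; the `bbbc_schedule` stand-in, the feature layout (task length, VM speed, VM load), and the network sizes are all assumptions introduced here for demonstration.

```python
# Minimal sketch, assuming a stand-in labeler in place of the real BB-BC optimizer
# and a hypothetical feature layout (task length, VM MIPS, per-VM queue delay).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
N_TASKS, N_VMS = 5000, 4

def bbbc_schedule(task_len, vm_mips, vm_load):
    """Stand-in for the Big Bang-Big Crunch optimizer: label each task
    with the VM that minimizes its estimated finish time."""
    finish = vm_load + task_len[:, None] / vm_mips[None, :]
    return finish.argmin(axis=1)

# Generate a synthetic dataset of (task, cloud-state) -> chosen-VM pairs.
task_len = rng.uniform(1e3, 1e5, size=N_TASKS)        # task length (million instructions)
vm_mips  = rng.uniform(500, 2000, size=N_VMS)          # VM processing speed (MIPS)
vm_load  = rng.uniform(0, 50, size=(N_TASKS, N_VMS))   # current queue delay per VM (s)
X = np.hstack([task_len[:, None], np.tile(vm_mips, (N_TASKS, 1)), vm_load])
y = bbbc_schedule(task_len, vm_mips, vm_load)

# Train the neural model offline, then evaluate how well it imitates the optimizer.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
model.fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))
```

At serving time, such a model can assign an incoming task with a single forward pass instead of re-running the optimizer, which is the latency advantage the abstract attributes to the prediction-based approach.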
