AROF: adaptive resource optimization framework for Kubernetes clusters using workload forecasting
Traditional Kubernetes autoscaling struggles with dynamic workloads, causing SLA violations and inefficiency. We propose AROF: an Adaptive Resource Optimization Framework integrating hybrid workload classification (clustering+tagging), multi-horizon LSTM forecasting, and cost-aware autoscaling with tunable cost-SLA trade-offs. AROF formulates VM provisioning as a constrained optimization problem with quadratic SLA penalties, enabling fine-grained resource management. Extensive evaluation using Alibaba Cloud 2022 traces demonstrates AROF reduces SLA violations by 81% and improves cost efficiency by 22.4% compared to standard Kubernetes autoscalers, while outperforming recent proactive baselines. The framework provides a scalable, interpretable solution for intelligent resource optimization in production Kubernetes environments.
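The abstract states that AROF formulates VM provisioning as a constrained optimization with quadratic SLA penalties but does not give the exact objective. The snippet below is only a minimal sketch of that idea under stated assumptions: a single VM type, a hypothetical per-step `vm_cost`, an `sla_weight` penalty coefficient, and a fixed `capacity_per_vm`; none of these names or values come from the paper.

```python
# Illustrative sketch (not the AROF objective): cost-aware VM provisioning with
# a quadratic penalty on unmet demand. All parameter names/values are invented.
import numpy as np

def provisioning_cost(num_vms, forecast_demand, capacity_per_vm=100.0,
                      vm_cost=1.0, sla_weight=5.0):
    """Total cost = VM rental cost over the horizon + quadratic penalty on shortfall."""
    capacity = num_vms * capacity_per_vm
    shortfall = np.maximum(forecast_demand - capacity, 0.0)  # unmet demand per step
    return vm_cost * num_vms * len(forecast_demand) + sla_weight * np.sum(shortfall ** 2)

# Pick the cheapest VM count over a forecast horizon by simple enumeration.
forecast = np.array([320.0, 410.0, 560.0, 480.0])  # e.g. requests/sec from a forecaster
best = min(range(1, 11), key=lambda n: provisioning_cost(n, forecast))
print("suggested VM count:", best)
```

Raising `sla_weight` relative to `vm_cost` shifts the trade-off toward fewer SLA violations at higher cost, which mirrors the tunable cost-SLA trade-off the abstract describes.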
- Conference Article
4
- 10.1109/csnt.2015.156
- Apr 1, 2015
Increased resource utilization by several clients in a smart computing environment poses a key challenge for allocating optimal, energy-efficient resources at the data center. These resources should be allocated so as to reduce the data center's energy consumption while avoiding service level agreement (SLA) violations. This paper develops an energy-efficient algorithm for optimal resource allocation at the data center using a hybrid approach combining Dynamic Voltage and Frequency Scaling (DVFS), a Genetic Algorithm (GA), and bin packing techniques. The performance of the proposed hybrid approach is compared with the Genetic Algorithm alone, DVFS with bin packing, and DVFS without bin packing. Experimental results demonstrate that the proposed energy-efficient algorithm consumes 22.4% less energy than DVFS with bin packing over a specified workload, with 0% SLA violations.
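The abstract does not specify which bin-packing heuristic is used; as a self-contained illustration of the bin-packing component, here is a first-fit-decreasing placement of VMs onto hosts by CPU demand. Host capacity and VM sizes are made up.

```python
# First-fit-decreasing bin packing, a common baseline for consolidating VMs onto
# as few hosts as possible. Values are illustrative, not from the paper.
def first_fit_decreasing(vm_demands, host_capacity):
    hosts = []       # remaining capacity of each opened host
    placement = {}
    for vm, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
        for i, free in enumerate(hosts):
            if demand <= free:
                hosts[i] -= demand
                placement[vm] = i
                break
        else:  # no existing host fits: open a new one
            hosts.append(host_capacity - demand)
            placement[vm] = len(hosts) - 1
    return placement, len(hosts)

demands = {"vm1": 30, "vm2": 55, "vm3": 20, "vm4": 45, "vm5": 10}
placement, used_hosts = first_fit_decreasing(demands, host_capacity=100)
print(placement, "hosts used:", used_hosts)
```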
- Conference Article
15
- 10.1109/icscn.2015.7219897
- Mar 1, 2015
Increased resource utilization by clients in a smart computing environment poses a significant challenge for allocating optimal, energy-efficient resources at the data center. These resources should be allocated so as to save data center energy while avoiding service level agreement (SLA) violations. This paper presents the design of an energy-efficient algorithm for optimized resource allocation at the data center using a combined Dynamic Voltage and Frequency Scaling (DVFS) and Genetic Algorithm (GA) approach. The performance of the proposed energy-efficient algorithm is compared with DVFS alone. Experimental results demonstrate that the proposed algorithm consumes 22.4% less energy over a specified workload, with 0% SLA violations.
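As background on why DVFS saves energy (not a detail from this paper), dynamic CMOS power is commonly modelled as P ≈ C·V²·f, so lowering voltage and frequency together reduces power super-linearly. The sketch below evaluates this textbook model at a few invented operating points.

```python
# Textbook dynamic-power model behind DVFS: P_dyn ~ C * V^2 * f.
# The frequency/voltage operating points and capacitance are illustrative only.
def dynamic_power(capacitance, voltage, frequency_hz):
    return capacitance * voltage ** 2 * frequency_hz

operating_points = [(2.4e9, 1.20), (1.8e9, 1.05), (1.2e9, 0.90)]  # (f, V)
for f, v in operating_points:
    p = dynamic_power(capacitance=1e-9, voltage=v, frequency_hz=f)
    print(f"{f/1e9:.1f} GHz @ {v:.2f} V -> {p:.2f} W")
```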
- Conference Article
3
- 10.1109/icssit46314.2019.8987760
- Nov 1, 2019
Cloud computing has become an established paradigm for delivering services over the internet. A cloud data center needs a set of virtualized resources to provide infrastructure services based on user demand. Optimal resource management is therefore a vital issue in cloud data centers for reducing resource wastage and Service Level Agreement (SLA) violations. Metaheuristic algorithms are well suited to this problem, which is NP-hard. This paper proposes a Multi-Objective Hybrid Fruit Fly Optimization (MOHFO) scheme for SLA-aware dynamic resource management in cloud data centers. Bald Eagle Search (BES) behaviour is adopted to enhance the search ability of the fruit fly optimization algorithm. The proposed scheme follows a dynamic virtual machine (VM) deployment and consolidation strategy to obtain a trade-off between SLA violations and resource wastage. The scheme is simulated in CloudSim with different data center configurations, and experimental results are evaluated against QoS metrics. Moreover, the proposed MOHFO scheme is compared with other optimization schemes with respect to resource wastage, energy consumption, communication cost, and number of migrations, in order to provide QoS-aware optimal resource provisioning in cloud computing.
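The MOHFO/BES details are not given in the abstract; the sketch below is only a generic example of the kind of weighted multi-objective fitness such a metaheuristic might minimise when scoring a candidate VM placement, combining resource wastage and an SLA-violation term. The weights and loads are hypothetical.

```python
# Not the MOHFO/BES algorithm: a generic multi-objective fitness for scoring a
# VM placement, mixing resource wastage and overload (SLA-risk) terms.
def placement_fitness(host_loads, host_capacity, sla_threshold=0.9,
                      w_wastage=1.0, w_sla=10.0):
    wastage = sum(max(host_capacity - load, 0) for load in host_loads if load > 0)
    overloads = sum(1 for load in host_loads if load > sla_threshold * host_capacity)
    return w_wastage * wastage + w_sla * overloads

# Two candidate placements of the same VMs (loads per host):
print(placement_fitness([80, 75, 0], host_capacity=100))   # consolidated, no overload
print(placement_fitness([95, 60, 0], host_capacity=100))   # one host near overload
```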
- Conference Article
- 10.2118/229638-ms
- Nov 3, 2025
As the energy industry transitions toward sustainable operations, batch and pad drilling have emerged as key strategies to minimize environmental impact while maintaining efficiency. This paper examines how these techniques significantly reduce water usage, chemical consumption, and CO₂ emissions—aligning with The operating company's sustainability and operational optimization objectives. By drilling multiple wells from a single location, this method decreases the need for repeated mobilization and demobilization of equipment, leading to lower fuel consumption and greenhouse gas emissions. Additionally, centralized water and chemical management in pad drilling reduces waste and optimizes resource utilization. Advanced technologies such as closed-loop drilling systems and real-time monitoring further enhance sustainability by minimizing fluid loss and improving drilling accuracy. The integration of digital technologies and advanced planning supports The operating company's broader strategy of decarbonization and resource optimization. Case studies from UAE unconventional reservoirs demonstrate that pad drilling not only supports Net Zero 2050 targets but also improves cost efficiency through reduced downtime and streamlined logistics. Drilling fluid management systems enable 30–50% less freshwater consumption through recycling and closed-loop fluids, while batch chemical handling minimizes waste and improves treatment efficiency. These practices reduce fuel consumption and greenhouse gas emissions (e.g. CO₂) by up to ~40%, lowering the overall carbon footprint of drilling activities. The adoption of advanced technologies—such as automated rigs, real-time monitoring, and AI-driven optimization—further enhances these benefits, ensuring precise drilling with lower resource intensity. This integrated approach not only supports environmental stewardship but also improves cost-effectiveness and operational consistency, marking a critical step toward greener energy production and resource optimization. Through the adoption of batch/pad drilling, The operating company not only enhances operational performance and cost-effectiveness but also underscores its leadership in responsible energy production. This practice supports The operating company's framework, highlights how sustainable drilling can balance economic and environmental performance, and reinforces its role in driving sustainable development within the UAE. It sets a benchmark for the industry and a precedent for environmentally conscious upstream operations across the region.
- Conference Article
7
- 10.2118/174039-ms
- Apr 27, 2015
Production optimization can play a major role in increasing recovery and decreasing operating cost. In many oilfields, the geology, production operations, and their related constraints are very complex. These complexities can complicate the formulation and solution of the pertinent optimization problems and increase the computational cost of finding a solution. Although full reservoir simulation provides detailed analysis and prediction of reservoir performance, the significant uncertainty and complexity of reservoir models can make the simulation results and their interpretations questionable. Moreover, in some cases, a reservoir model may not even be available to perform full simulation for performance optimization. The cost and complexity of developing full-scale simulation models, together with the considerable computational overhead associated with production optimization (especially under geologic uncertainty), call for the development of fast proxy models for production optimization. To this end, various reduced-order and surrogate models have been designed to approximate the production behavior of a reservoir at a fraction of the computation required for full simulation. We present an efficient production optimization scheme by integrating constrained optimization with fast decline curve analysis for predicting well production performance. The proposed approach is formulated as a constrained optimization problem by defining a desired objective function and a set of existing field/facility constraints. An efficient gradient-based optimization algorithm is then adopted to solve the resulting optimization problem for a single timestep. The optimization is then coupled with decline curve analysis to predict future production rates, and the process is performed recursively in time for a specified duration. The predictions from the decline curve analysis are reasonable so long as the operating conditions remain unchanged. Using field data, we demonstrate that the proposed formulation can provide fast solutions to large-scale production optimization problems. The results in this paper suggest that the developed technique can be applied to improve production performance and operating efficiency with minimal computational cost when compared to production optimization with full-scale reservoir simulation. It also offers the flexibility to adjust the problem formulation under various field conditions and is particularly useful when a full-scale reservoir model does not exist to simulate the reservoir response for production optimization.
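The paper couples optimization with decline curve analysis; as a self-contained illustration of the decline-curve side only, here is the standard Arps hyperbolic decline rate, q(t) = qᵢ / (1 + b·Dᵢ·t)^(1/b), evaluated with made-up parameters (the paper's own fitting procedure and parameters are not given in the abstract).

```python
# Standard Arps hyperbolic decline curve used in decline curve analysis:
# q(t) = q_i / (1 + b * D_i * t)^(1/b). Parameters below are illustrative.
def arps_rate(t_days, qi=1000.0, di=0.002, b=0.8):
    """Production rate (e.g. STB/day) at time t under hyperbolic decline."""
    return qi / (1.0 + b * di * t_days) ** (1.0 / b)

for t in (0, 180, 365, 730):
    print(f"day {t:4d}: {arps_rate(t):7.1f} STB/day")
```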
- Conference Article
6
- 10.2118/180061-ms
- Apr 20, 2016
When analyzing well performance in carbonate reservoirs, the traditional approach usually requires best practices from pre- and post-stimulation analysis. Most techniques require an understanding of production performance, which can be divided into two categories. The first relates to reservoir performance away from the wellbore, i.e. permeability, fracture network, reservoir pressure, and boundaries; the second to the near-wellbore region and zonal contribution, i.e. permeability-thickness, skin, and oil and water influx from individual producing zones. In order to develop a full picture of how these two categories contribute to production performance, a detailed analysis should be conducted to understand their interaction. Low-permeability carbonates and chalk fields often require long multi-stage frac'ed horizontal wells, which further complicates the analysis due to a lack of measured data in each stage. The Ekofisk field development is a mature waterflood that includes both deviated and horizontal wells. Deviated wells are placed in the more crestal locations, while horizontal wells are generally placed towards the flanks, where reservoir properties are of lower quality compared to the field's crest. Production performance and optimization depend largely on efficient zonal stimulation and on well and reservoir management. Understanding the distribution of fluid phases along the well, especially the water influx, may enable timely water shut-offs to mitigate water breakthrough. The traditional technique for understanding where and how much oil and water are being produced requires well intervention through production logging (PLTs). Well interventions are often difficult to execute due to limited access to platforms, the high cost of wells, and production deferments. All of these factors limit efficient production optimization due to the inability to collect data in a timely manner for analysis. Furthermore, experience from the Ekofisk field indicates that PLT data often gives inconclusive results due to the known challenges of interpreting PLT data from horizontal wells. An intervention-free and cost-efficient approach using inflow tracers has been piloted to acquire early-time data, in addition to building well and reservoir understanding throughout the well's life. This approach was successfully developed and tested in a newly drilled horizontal Ekofisk field producer. The well was equipped with inflow tracers permanently installed in the completion string to identify each zone's production contribution, including the split between oil, gas and water. In addition, unique intra-well tracers were injected into each zone during stimulation to gain knowledge of the stimulation efficiency. During the well start-up, clean-out, transient and post-transient production periods, extensive sampling programs were executed. As a result, sufficient data was acquired to complete reservoir characterization analysis together with traditional Pressure Transient Analysis (PTA), followed by production optimization. The acquired tracer data and its interpretation were compared with a conventional PLT interpretation for verification. This is the first integrated application using permanently installed inflow tracers, injected intra-well tracers and pressure data interpretation for reservoir characterization and production optimization.
- Research Article
40
- 10.1016/j.jnca.2016.09.016
- Nov 19, 2016
- Journal of Network and Computer Applications
Novel fuzzy multi objective DVFS-aware consolidation heuristics for energy and SLA efficient resource management in cloud data centers
- Conference Article
23
- 10.1109/ic2e.2014.8
- Mar 1, 2014
Today, the volume of data in the world has increased tremendously. Large-scale and diverse data sets raise new challenges in storage, processing, and querying. Tiered storage architectures combining solid-state drives (SSDs) with hard disk drives (HDDs) have become attractive in enterprise data centers for achieving high performance and large capacity simultaneously. However, how best to use these storage resources and efficiently manage massive data for providing high quality of service (QoS) remains a core and difficult problem. In this paper, we present LMsT, a new approach for automated data movement in multi-tiered storage systems, which live-migrates data across tiers with the aim of supporting multiple service level agreements (SLAs) for applications with dynamic workloads at minimal cost. Trace-driven simulations show that, compared to a no-migration policy, LMsT significantly improves average I/O response times, I/O violation ratios and I/O violation times, with only slight degradation (e.g., up to a 6% increase in SLA violation ratio) in the performance of high-priority applications.
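The LMsT policy itself is not described in the abstract; the following is only a generic sketch of temperature-based tiering, promoting the most-accessed data extents to the SSD tier until its capacity is filled and demoting the rest. Extent names, counts, and capacity are invented.

```python
# Generic hot/cold tiering sketch (not the LMsT algorithm): promote the
# most-frequently-accessed extents to the SSD tier, demote the others.
def plan_migrations(access_counts, on_ssd, ssd_capacity_extents):
    ranked = sorted(access_counts, key=access_counts.get, reverse=True)
    target_ssd = set(ranked[:ssd_capacity_extents])
    promote = target_ssd - on_ssd          # move HDD -> SSD
    demote = on_ssd - target_ssd           # move SSD -> HDD
    return promote, demote

counts = {"e1": 950, "e2": 40, "e3": 620, "e4": 15}
promote, demote = plan_migrations(counts, on_ssd={"e2"}, ssd_capacity_extents=2)
print("promote:", promote, "demote:", demote)
```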
- Research Article
51
- 10.1007/s11227-015-1603-9
- Dec 31, 2015
- The Journal of Supercomputing
Increasing demand for a diverse range of services has led to the establishment of huge, energy-hungry cloud data centers around the world. Cloud providers face major pressure to reduce their energy consumption while ensuring high quality of service based on the Service Level Agreement (SLA). Consolidation is one of the most effective techniques for online energy saving in cloud environments with dynamic workloads. This paper proposes novel proactive online resource management policies to jointly optimize energy, SLA compliance, and the number of migrations in cloud data centers. More precisely, it proposes a new prediction algorithm for detecting overloaded hosts, as well as novel multi-criteria decision-making techniques for selecting virtual machines to migrate. Simulations using the CloudSim simulator show up to a 98.11% reduction in a combined metric representing energy consumption, SLA violations, and number of migrations, compared with the state of the art.
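The paper's prediction algorithm is not specified in the abstract; a common way to detect imminent host overload is to fit a linear trend to recent CPU utilisation and flag the host if the extrapolated value crosses a threshold, as sketched below with hypothetical thresholds and history.

```python
# Illustrative host-overload prediction (not the paper's exact algorithm):
# extrapolate recent CPU utilisation with a least-squares trend line.
import numpy as np

def host_will_overload(cpu_history, steps_ahead=3, threshold=0.9):
    t = np.arange(len(cpu_history))
    slope, intercept = np.polyfit(t, cpu_history, deg=1)
    predicted = intercept + slope * (len(cpu_history) - 1 + steps_ahead)
    return predicted > threshold

print(host_will_overload([0.55, 0.62, 0.70, 0.78, 0.84]))  # rising trend -> True
print(host_will_overload([0.50, 0.48, 0.52, 0.49, 0.51]))  # flat trend   -> False
```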
- Research Article
17
- 10.1016/j.sysarc.2021.102064
- Feb 20, 2021
- Journal of Systems Architecture
ML-driven classification scheme for dynamic interference-aware resource scheduling in cloud infrastructures
- Conference Article
1
- 10.1109/icnsc.2018.8361345
- Mar 1, 2018
Cloud computing provides a promising approach for efficiently managing server performance via advanced resource management, and has recently become an important topic in high-performance computing. Existing performance management solutions for cloud servers are often inefficient when dealing with dynamic and bursty web workloads. In this paper, we propose an autonomic performance management approach for cloud servers that adopts the linear quadratic Gaussian with stochastic method (LQGwS). In the face of dynamic and bursty web workloads, it balances the load between different web applications by adaptively adjusting the resources allocated to each virtual machine. Furthermore, to handle unknown disturbances in the web system, LQGwS first describes the system as a coupled multiple-input multiple-output system using an autoregressive moving-average model with exogenous inputs (ARMAX), and then constructs the optimal resource allocation scheme by minimizing an average cost function over a set of models generated according to a Gaussian distribution. Experiments with real network load on a Xen-based platform show that the proposed control strategy outperforms existing solutions under dynamic workloads in terms of control accuracy and stability.
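The abstract names an ARMAX model but gives no identification details; as a minimal sketch, the snippet below fits a simpler ARX model (autoregressive with an exogenous input, no moving-average term) by least squares on synthetic data and recovers its coefficients. This is illustrative only and is not the paper's LQGwS controller.

```python
# Minimal ARX identification by least squares, a simplified stand-in for the
# ARMAX modelling step; synthetic data, not the paper's controller.
import numpy as np

def fit_arx(y, u, na=2, nb=1):
    """Fit y[t] ~ sum_i a_i*y[t-i] + sum_j b_j*u[t-j]; returns [a_1..a_na, b_1..b_nb]."""
    rows, targets = [], []
    start = max(na, nb)
    for t in range(start, len(y)):
        rows.append(np.concatenate([y[t - na:t][::-1], u[t - nb:t][::-1]]))
        targets.append(y[t])
    theta, *_ = np.linalg.lstsq(np.array(rows), np.array(targets), rcond=None)
    return theta

# Synthetic response y driven by an input u (e.g. allocated CPU share).
rng = np.random.default_rng(0)
u = rng.uniform(0.2, 1.0, size=200)
y = np.zeros(200)
for t in range(2, 200):
    y[t] = 0.6 * y[t - 1] - 0.1 * y[t - 2] + 0.8 * u[t - 1] + rng.normal(0, 0.01)

theta = fit_arx(y, u)
print("estimated [a1, a2, b1]:", np.round(theta, 3))  # should be near [0.6, -0.1, 0.8]
```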
- Conference Article
2
- 10.2118/187467-ms
- May 9, 2017
Developing Oil & Gas assets requires planning production on multiple horizons: (1) the long-term production plan includes strategic decisions for technology and recovery strategies to maximize the Net Present Value (NPV) of the project, (2) the mid-term horizon includes the drilling program and reservoir depletion/injection rates, and (3) the short-term optimization (real-time production optimization, or RTPO) aims to maximize the usage of the existing facilities. In the case of RTPO, both subsurface and surface systems are important, the goal being to maximize daily production while honoring all operational constraints. RTPO requires a comprehensive integrated model covering the entire production system and an accurate mathematical formulation of the problem. This implies finding an appropriate optimization strategy and solver to find an optimal solution within a reasonable time. Sustainable production optimization solutions also assume continuous model updates, maintenance and improvement, as the production system behavior changes over time. In this paper, we develop an integrated model for a complex multi-field asset. The production system includes 12 gas wells, 24 gas-lifted oil wells, 4 gas-injection wells, 4 CO2-injection wells, subsea manifolds, gas pipelines, offshore process facilities and CO2 removal units. Gas production from each field is gathered in a single gas pipeline system connected to a gas processing facility located onshore. Control variables include wellhead pressures, routing of wells, gas lift rates, flaring and re-injection rates. Many capacity, pressure and compositional constraints are considered throughout the whole production system. The production optimization model, which includes binary variables and non-smooth non-linear functions, is rather challenging to solve. Each part of the integrated model is approximated with multidimensional piecewise-linear functions to a desired degree of accuracy. The resulting Mixed Integer Linear Program (MILP) can be solved efficiently with existing commercial solvers. The optimization solution is used to address different types of challenges: (1) platform start-up, (2) unexpected failure of a gas compressor, (3) maintenance on a group of wells and (4) changing reservoir conditions. Production increase driven by RTPO ranges from 1 to 5% with no additional CAPEX. The implementation of the production optimization solution is also discussed, and the importance of usability, user training and solution maintenance is highlighted.
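The MILP rests on piecewise-linear approximations of non-linear well and facility curves; the snippet below only illustrates that approximation step (no solver involved): sample a hypothetical non-linear well-performance curve at breakpoints and interpolate linearly between them, which is the data the linearized formulation would consume. The curve and breakpoints are made up.

```python
# Sketch of the piecewise-linear approximation idea behind the MILP: sample a
# non-linear well-performance curve at breakpoints and interpolate linearly.
import numpy as np

def nonlinear_rate(p_wellhead):
    """Hypothetical non-linear production rate vs. wellhead pressure."""
    return 500.0 * np.sqrt(np.maximum(60.0 - p_wellhead, 0.0))

breakpoints = np.linspace(10.0, 60.0, 6)            # pressure grid
rate_at_breakpoints = nonlinear_rate(breakpoints)   # values the MILP would use

def piecewise_linear(p):
    return np.interp(p, breakpoints, rate_at_breakpoints)

for p in (12.0, 27.5, 48.0):
    print(f"p={p:5.1f}: exact={nonlinear_rate(p):7.1f}, PWL={piecewise_linear(p):7.1f}")
```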
- Research Article
1
- 10.1080/16258312.2025.2456451
- Feb 11, 2025
- Supply Chain Forum: An International Journal
This study develops a comprehensive two-stage stochastic mixed-integer linear programming (MILP) model to optimise the wood supply chain in the Lithuanian furniture industry. The model addresses critical challenges, including resource scarcity, fluctuating demand, and the environmental impact of wood procurement and processing. The objective is to minimise total supply chain costs while incorporating sustainability practices and ensuring operational efficiency. The study uses data sourced from Lithuanian forestry records and industry reports, including variables such as wood volume, quality, and weather conditions. The model integrates advanced optimisation techniques, including decomposition algorithms, to manage computational complexity under uncertainty. Key findings reveal significant improvements in cost efficiency and sustainability compared to traditional models. The proposed approach reduces resource wastage and improves flexibility by accounting for uncertainties in demand and supply, achieving a reduction in computational time and a smaller cost deviation. Policy implications underscore the importance of adopting sustainable, data-driven supply chain practices to align with global trends towards carbon reduction and resource optimisation.
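The full model is far larger than can be shown here; as a toy illustration of the two-stage structure only (a here-and-now procurement decision plus per-scenario recourse purchases), the snippet below solves a deterministic-equivalent LP with scipy. All prices, probabilities, and demands are invented and have nothing to do with the paper's data.

```python
# Toy two-stage stochastic program (deterministic equivalent): x = wood bought
# now at contract price, r_s = extra wood bought on the spot market in scenario s.
from scipy.optimize import linprog

scenarios = [  # (probability, demand in m^3) -- invented numbers
    (0.3, 100.0),
    (0.5, 140.0),
    (0.2, 180.0),
]
contract_price, spot_price = 5.0, 8.0

# Decision vector z = [x, r_1, r_2, r_3]; minimise expected total cost.
c = [contract_price] + [p * spot_price for p, _ in scenarios]

# Demand coverage per scenario: x + r_s >= d_s  ->  -x - r_s <= -d_s
A_ub, b_ub = [], []
for i, (_, demand) in enumerate(scenarios):
    row = [-1.0] + [0.0] * len(scenarios)
    row[1 + i] = -1.0
    A_ub.append(row)
    b_ub.append(-demand)

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 160)] + [(0, None)] * 3)
x, recourse = res.x[0], res.x[1:]
print(f"buy now: {x:.0f} m^3, recourse per scenario: {recourse.round(0)}")
```

In this toy instance the first-stage purchase covers the medium-demand scenario and spot purchases absorb the high-demand scenario, which is the hedging behaviour two-stage models are built to capture.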
- Research Article
9
- 10.3233/idt-220222
- May 15, 2023
- Intelligent Decision Technologies
The proliferation of on-demand, usage-based IT services, as well as the diverse range of cloud users, has led to the establishment of energy-hungry cloud data centers. Cloud service providers are therefore striving to reduce energy consumption, both to save costs and to address the environmental sustainability of data centers. In this direction, Virtual Machine (VM) consolidation is a widely used approach to optimize hardware resources, at the cost of performance degradation caused by unnecessary migrations. Hence, the motivation of the proposed approach is to minimize energy consumption while maintaining the performance of cloud data centers, which reduces overall cost and increases the reliability of cloud service providers. To achieve this goal, a Predictive Virtual Machine Consolidation (PVMC) algorithm is proposed using the exponential smoothing moving average (ESMA) method. In the proposed algorithm, the ratio of deviation to utilization is calculated for VM selection and placement, so that VMs with high CPU usage are migrated while steady resource-consuming VMs are restricted from migration. The outcomes of the proposed algorithm are validated in computer-based simulations under a dynamic workload with a variable number of VMs (1–290). The experimental results show an improvement in the mean threshing index (40%, 45%) and instruction energy ratio (15%, 17%) over the existing policies. Hence, the proposed algorithm could be used in real-world data centers to reduce energy consumption while maintaining low service level agreement violations.
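The abstract's ESMA predictor is not fully specified; the sketch below shows plain exponential smoothing of CPU utilisation, which is the basic building block of such a predictor. The smoothing factor and the utilisation history are illustrative assumptions.

```python
# Simple exponential smoothing of CPU utilisation, the core of an ESMA-style
# predictor; the smoothing factor alpha and the samples are illustrative.
def exponential_smoothing(samples, alpha=0.3):
    smoothed = samples[0]
    for x in samples[1:]:
        smoothed = alpha * x + (1.0 - alpha) * smoothed
    return smoothed

cpu_history = [0.42, 0.55, 0.61, 0.58, 0.72, 0.69]
print(f"smoothed utilisation estimate: {exponential_smoothing(cpu_history):.3f}")
```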
- Conference Article
3
- 10.2118/143665-ms
- Apr 19, 2011
This paper explores the results of one operator's successful implementation of a workflow-based solution for real-time production surveillance and optimization of a large onshore field in Mexico. We also examine the fundamental aspects of large-scale workflow delivery in a real-world environment, in this case involving over 200 wells producing over 200,000 STBD, 50+ users, multiple and often conflicting sources of data (e.g., field vs. SCADA), and integration of production modeling applications from multiple vendors. The solution combines automation of production modeling workflows, real-time monitoring, surveillance-by-exception, and virtual integration of 8 data sources, including real-time, operational, well test, field data, and others, into a single, user-friendly environment. It has been designed with the asset in mind and has delivered on its goals: maximizing total oil production; minimizing downtime; virtual well flow-rate metering; a well-performance overview providing a single point of access to all available information; and standardized engineering processes across the asset, with a heavy emphasis on automation and analytics. The "intangible" benefits of standardizing processes and providing readily available access to information are seen as key enablers of the more directly measurable economic benefits. Prior to the implementation of this solution, it was demonstrated that a significant portion of the asset engineers' time was spent gathering data from multiple sources. Such activities were manual and error-prone, and led to data duplication, data inaccuracies, and, most importantly, the potential for decisions to be made on incomplete information. The new solution addresses all of these challenges. Integrated reporting and surveillance-by-exception methods ensure that engineers' time is now focused on engineering issues, not data gathering. "Live links" to all sources ensure that data duplication is no longer an issue. And workflow-based automation has brought the various pre-existing well, production network, and reservoir modeling tools online, so they can be incorporated into daily operational activities and decision-making. The work is significant as a successful implementation of a large-scale production optimization solution configured to established operator best practices. In an age when operators are faced with declining production rates, smaller discoveries, and a "graying" workforce, such solutions allow operators to do more with less and deliver maximum asset value.