Cloud Resources Research Articles

Overview
3027 articles published in the last 50 years

Related Topics

  • Cloud Providers
  • IaaS Cloud
  • Cloud Environment
  • Cloud Users
  • Cloud Broker

Articles published on Cloud Resources

A Study of the Effectiveness of Using a Centralized Configuration Repository for Secure Management of Cloud Service Infrastructure

In the current context of widespread adoption of cloud technologies such as AWS, GCP, and Azure, organizations face challenges in centralized management of cloud resources, including ensuring security standards, monitoring service metrics, optimizing costs, and managing configurations. The main issue lies in the differences in the architecture of services provided by various cloud vendors, which complicates the integration and standardization of processes in multi-cloud environments. This article focuses on analyzing the issues of centralized configuration management using the Configuration Management Database (CMDB) as a single source of truth. The study examines methods of organizing and managing CMDB in public cloud environments, with an emphasis on access management, organizational structures, subscriptions, and cloud resource inventory. Particular attention is paid to developing recommendations for optimizing management processes to improve overall efficiency and security. The practical part of the study involves the integration of the Cherwell system as a CMDB with automated data collection through the Prisma API. This integration allows for the automation of resource inventory, reducing the risk of human errors, improving data accuracy, and ensuring compliance with security standards. Additionally, by centralizing data and analyzing it in Power BI, the study demonstrated the effectiveness of the approach in the context of a multi-cloud environment. The purpose of this study is to develop a scientifically grounded approach to centralized configuration management of cloud infrastructure based on the use of a single data repository for configurations (CMDB). The study includes a detailed analysis of the challenges of cloud configuration management, the features of major cloud providers' services, and their integration into a unified informational model. The primary focus is on developing recommendations for building an efficient configuration management system that considers multi-cloud environments, security requirements, and operational processes. The practical aspect of the study is based on the integration of the Cherwell system as a CMDB with Prisma API to automate data collection in a multi-cloud environment. This integration demonstrated significant advantages, including improved data accuracy, reduced manual work, enhanced information security, and optimized management processes. Thus, the aim of the study is not only to provide a theoretical justification of centralized management methods for cloud resources but also to develop practical recommendations to improve the efficiency and security of configuration management in multi-cloud environments. Keywords: Public cloud environments, configuration management, automation, integration.
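
To make the inventory-to-CMDB pattern concrete, here is a minimal sketch, assuming hypothetical REST endpoints (PRISMA_URL, CMDB_URL), field names, and a bearer token; it illustrates the kind of integration described above and is not the actual Prisma Cloud or Cherwell API.

```python
"""Illustrative sketch only: pull a multi-cloud asset inventory from a
Prisma-style API and upsert it into a CMDB. Endpoints, field names, and
tokens are hypothetical placeholders, not the vendors' actual APIs."""
import os
import requests

PRISMA_URL = os.environ.get("PRISMA_URL", "https://prisma.example.com")  # placeholder
CMDB_URL = os.environ.get("CMDB_URL", "https://cmdb.example.com")        # placeholder
HEADERS = {"Authorization": f"Bearer {os.environ.get('API_TOKEN', '')}"}

def fetch_inventory() -> list[dict]:
    """Pull the raw resource inventory for all connected cloud accounts."""
    resp = requests.get(f"{PRISMA_URL}/v1/inventory", headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json().get("resources", [])

def to_configuration_item(resource: dict) -> dict:
    """Normalize one cloud resource into a vendor-neutral CI record."""
    return {
        "ci_name": resource.get("name"),
        "cloud_provider": resource.get("cloud"),   # e.g. aws / azure / gcp
        "resource_type": resource.get("type"),
        "region": resource.get("region"),
        "account_id": resource.get("accountId"),
        "tags": resource.get("tags", {}),
    }

def upsert_cis(items: list[dict]) -> None:
    """Push normalized CIs to the CMDB as the single source of truth."""
    for item in items:
        requests.post(f"{CMDB_URL}/api/ci", json=item, headers=HEADERS, timeout=30)

if __name__ == "__main__":
    upsert_cis([to_configuration_item(r) for r in fetch_inventory()])
```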

  • Journal: Computer systems and network
  • Publication Date: Jun 1, 2025
  • Author: Y. V. Martseniuk

"Three Layer Based Intelligent Data Privacy in Cloud Computing"

The rapid advancement of cloud computing technology, coupled with the exponential increase in unstructured data, has led to heightened interest and significant progress in cloud storage solutions. However, cloud service providers often lack insights into the data they store and manage globally across their platforms. To address privacy concerns, various encoding technologies have been developed to enhance data protection. This paper proposes a three-tiered security framework for cloud storage that optimally utilizes cloud resources while safeguarding data privacy. Our approach involves segmenting data into multiple components, ensuring that the loss of any single piece results in the loss of the entire dataset. We implement a bucket-based algorithm to secure the data, demonstrating both protection and efficiency within our proposed model. Furthermore, leveraging process intelligence, this algorithm will assess the distribution ratios of data stored across cloud, fog, and local environments. Keywords: Cloud Storage Security, Cloud Storage, Three-Layer Storage Security, Privacy Protection, DLP
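
The bucket-based algorithm itself is not given in the abstract; the sketch below shows one simple way to obtain the stated all-or-nothing property with an XOR split across cloud, fog, and local tiers. It is only an illustration, not the authors' scheme.

```python
"""Sketch of an XOR-based split in which every fragment is required for
reconstruction, mirroring the stated property that losing any single piece
loses the whole dataset. Not the paper's bucket algorithm."""
import os
from functools import reduce

def _xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split(data: bytes, n_shares: int = 3) -> list[bytes]:
    """All n_shares fragments are needed; any subset reveals nothing."""
    random_shares = [os.urandom(len(data)) for _ in range(n_shares - 1)]
    last = reduce(_xor, random_shares, data)
    return random_shares + [last]

def reconstruct(shares: list[bytes]) -> bytes:
    return reduce(_xor, shares)

if __name__ == "__main__":
    secret = b"patient-record-0042"
    cloud, fog, local = split(secret, 3)   # one fragment per storage tier
    assert reconstruct([cloud, fog, local]) == secret
```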

  • Journal: International Journal of Scientific Research in Engineering and Management
  • Publication Date: May 31, 2025
  • Author: Omkar Jeurkar

Automated Terraform Generation using NLP and Graph-Based Cloud Architecture Visualization

With the rapid advancement of cloud computing technologies, the management and provisioning of cloud infrastructure have become increasingly complex. The adoption of Infrastructure as Code (IaC) tools, such as Terraform, has streamlined cloud resource management. However, the manual creation of Terraform configuration files remains a challenging task that requires significant expertise in both cloud architecture and Terraform syntax. This paper presents an innovative approach to automating Terraform file generation using Natural Language Processing (NLP) and graph-based cloud architecture visualization. Our system enables users to describe their cloud infrastructure using natural language or through a graphical drag-and-drop interface. By integrating topological sorting techniques, our solution ensures the correctness of dependencies within the cloud architecture before generating Terraform files. The experimental results demonstrate that our approach enhances efficiency by reducing configuration time by up to 60%, minimizes human error in complex architectures, and makes Terraform more accessible to users with varying levels of expertise. This research contributes to the growing field of automated cloud infrastructure management by bridging the gap between human-readable descriptions and machine-executable infrastructure code. Index Terms—Terraform, Natural Language Processing, Graph-Based Visualization, Cloud Architecture, Topological Sorting, Infrastructure as Code
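
The dependency-ordering step can be illustrated with a short sketch that topologically sorts a hypothetical parsed resource graph before emitting Terraform blocks; the resources, attributes, and templates below are assumptions, not output of the paper's NLP pipeline.

```python
"""Sketch: order cloud resources with a topological sort before emitting
Terraform blocks, so every referenced dependency is declared first.
Resource names and attributes are illustrative, not generated from NLP."""
from graphlib import TopologicalSorter

# node -> set of resources it depends on (hypothetical parsed architecture)
DEPENDENCIES = {
    "aws_vpc.main": set(),
    "aws_subnet.app": {"aws_vpc.main"},
    "aws_instance.web": {"aws_subnet.app"},
}

TEMPLATES = {
    "aws_vpc.main": 'resource "aws_vpc" "main" {\n  cidr_block = "10.0.0.0/16"\n}',
    "aws_subnet.app": 'resource "aws_subnet" "app" {\n  vpc_id     = aws_vpc.main.id\n  cidr_block = "10.0.1.0/24"\n}',
    "aws_instance.web": 'resource "aws_instance" "web" {\n  ami           = "ami-12345678"\n  instance_type = "t3.micro"\n  subnet_id     = aws_subnet.app.id\n}',
}

def generate_terraform(deps: dict[str, set[str]]) -> str:
    """Emit blocks in dependency order; raises CycleError on circular references."""
    order = TopologicalSorter(deps).static_order()
    return "\n\n".join(TEMPLATES[name] for name in order)

if __name__ == "__main__":
    print(generate_terraform(DEPENDENCIES))
```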

  • Journal: International Journal for Research in Applied Science and Engineering Technology
  • Publication Date: May 31, 2025
  • Author: Akash Sharma

Predictive cloud resource management: Developing ML models for accurately predicting workload demands (CPU, memory, network, storage) to enable proactive auto-scaling, AI-driven instance type selection and rightsizing, and predicting spot instance interruptions

The modern information technology revolution brought about by cloud computing has changed how organizations handle infrastructure provisioning, scaling, and management. Yet organizations continuously struggle to maximize cloud resource utilization: systems that provision either excessive or insufficient capacity face increased expenses and potential performance deterioration. The paper explores the creation and deployment of machine learning (ML) models that precisely forecast cloud workload requirements for proactive resource management systems. Applications that use workload forecasts to drive auto-scaling improve both elasticity and latency performance. AI systems are also applied to choose instance-type configurations that keep operations cost-effective and aligned with workload patterns. A key objective is to detect spot instance interruptions, because these unpredictable disruptions cause problems for critical workloads. The research implements classification alongside time-series models to identify when interruptions will occur and to take proactive mitigation measures. The paper also examines advanced forecasting techniques for cloud spending to enable better financial governance and improved budget planning for organizations. Predictive ML models used within cloud resource management frameworks have established themselves as critical elements that enhance cloud operation efficiency through improved resilience and better cost control. This approach bridges data intelligence with adaptive infrastructure methods and intelligent cloud operations within the current digital transformation environment.
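
As a rough illustration of forecast-driven scaling (not the paper's models), the sketch below fits a regressor on lagged CPU utilization and converts the prediction into a replica-count decision; the features, model choice, and thresholds are assumptions.

```python
"""Sketch of proactive scaling from a short-horizon CPU forecast.
The features, model, and thresholds are illustrative assumptions,
not the models evaluated in the paper."""
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def make_lag_features(series: np.ndarray, lags: int = 6):
    """Predict the next value from the previous `lags` observations."""
    X = np.array([series[i - lags:i] for i in range(lags, len(series))])
    y = series[lags:]
    return X, y

def forecast_next(cpu_history: np.ndarray, lags: int = 6) -> float:
    X, y = make_lag_features(cpu_history, lags)
    model = GradientBoostingRegressor(random_state=0).fit(X, y)
    return float(model.predict(cpu_history[-lags:].reshape(1, -1))[0])

def scaling_decision(predicted_cpu: float, current_replicas: int,
                     high: float = 70.0, low: float = 30.0) -> int:
    """Scale out before the predicted spike arrives; scale in on slack."""
    if predicted_cpu > high:
        return current_replicas + 1
    if predicted_cpu < low and current_replicas > 1:
        return current_replicas - 1
    return current_replicas

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    history = 50 + 20 * np.sin(np.linspace(0, 8 * np.pi, 200)) + rng.normal(0, 3, 200)
    nxt = forecast_next(history)
    print(f"predicted CPU {nxt:.1f}% -> replicas {scaling_decision(nxt, 3)}")
```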

  • Journal: World Journal of Advanced Research and Reviews
  • Publication Date: May 30, 2025
  • Author: Raviteja Guntupalli

Autoscaling cloud resources with real-time metrics

This article comprehensively examines cloud resource autoscaling systems driven by real-time metrics, exploring their theoretical foundations, practical implementations, and emerging challenges. The article analyzes the evolution from static resource allocation to sophisticated dynamic scaling mechanisms that continuously monitor performance indicators and automatically adjust cloud infrastructure to match demand patterns. The article investigates critical performance metrics across computational, network, and application domains that inform scaling decisions, alongside the collection methodologies and temporal analysis techniques that transform raw data into actionable intelligence. The article identifies distinctive capabilities and limitations that influence adoption decisions. The article further evaluates performance assessment methodologies, cost-performance tradeoffs, and responsiveness characteristics across diverse application types. Finally, the article addresses pressing challenges in multi-dimensional resource optimization, containerized and serverless environments, edge computing contexts, and sustainability integration, concluding with an outlook on emerging technologies that promise increasingly autonomous and business-aligned scaling capabilities. This article contributes to both the theoretical understanding and practical application of autoscaling technologies in modern cloud environments.
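
A minimal reactive controller of the kind surveyed here, with illustrative thresholds and a cooldown to prevent flapping, might look like the following sketch; the metric source and scaling actuator are stubs.

```python
"""Minimal reactive autoscaler sketch: threshold rules plus a cooldown to
avoid flapping. The metric source is stubbed; in practice it would be a
monitoring API, and replica changes would call a cloud scaling API."""
import random
import time
from dataclasses import dataclass

@dataclass
class AutoscalerConfig:
    scale_out_cpu: float = 75.0   # % utilization that triggers scale-out
    scale_in_cpu: float = 25.0    # % utilization that triggers scale-in
    cooldown_s: int = 120         # minimum seconds between scaling actions
    min_replicas: int = 1
    max_replicas: int = 20

def read_cpu_utilization() -> float:
    """Stub metric source; replace with a real monitoring query."""
    return random.uniform(10, 95)

def control_loop(cfg: AutoscalerConfig, replicas: int, iterations: int = 5) -> int:
    last_action = 0.0
    for _ in range(iterations):
        cpu = read_cpu_utilization()
        in_cooldown = (time.monotonic() - last_action) < cfg.cooldown_s
        if not in_cooldown:
            if cpu > cfg.scale_out_cpu and replicas < cfg.max_replicas:
                replicas += 1
                last_action = time.monotonic()
            elif cpu < cfg.scale_in_cpu and replicas > cfg.min_replicas:
                replicas -= 1
                last_action = time.monotonic()
        print(f"cpu={cpu:5.1f}%  replicas={replicas}")
        time.sleep(0.1)  # shortened polling interval for the demo
    return replicas

if __name__ == "__main__":
    control_loop(AutoscalerConfig(), replicas=3)
```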

  • Journal: World Journal of Advanced Research and Reviews
  • Publication Date: May 30, 2025
  • Author: Bhanu Kiran Kaithe

AI-driven cloud optimization: Leveraging machine learning for dynamic resource allocation

This research paper explores the application of artificial intelligence (AI) and machine learning (ML) techniques in optimizing cloud resource allocation. The study investigates how AI-driven approaches can enhance the efficiency and effectiveness of cloud computing systems through dynamic resource allocation. We present a comprehensive review of existing methodologies, propose novel algorithms, and conduct extensive experiments to validate the effectiveness of our approach. The results demonstrate significant improvements in resource utilization, cost reduction, and overall system performance compared to traditional static allocation methods.

  • Journal: World Journal of Advanced Engineering Technology and Sciences
  • Publication Date: May 30, 2025
  • Author: Manoj Bhoyar

GRAPEVNE - Graphical Analytical Pipeline Development Environment for Infectious Diseases

The increase in volume and diversity of relevant data on infectious diseases and their drivers provides opportunities to generate new scientific insights that can support ‘real-time’ decision-making in public health across outbreak contexts and enhance pandemic preparedness. However, utilising the wide array of clinical, genomic, epidemiological, and spatial data collected globally is difficult due to differences in data preprocessing, data science capacity, and access to hardware and cloud resources. To facilitate large-scale and routine analyses of infectious disease data at the local level (i.e. without sharing data across borders), we developed GRAPEVNE (Graphical Analytical Pipeline Development Environment), a platform enabling the construction of modular pipelines designed for complex and repetitive data analysis workflows through an intuitive graphical interface. Built on the Snakemake workflow management system, GRAPEVNE streamlines the creation, execution, and sharing of analytical pipelines. Its modular approach already supports a diverse range of scientific applications, including genomic analysis, epidemiological modeling, and large-scale data processing. Each module in GRAPEVNE is a self-contained Snakemake workflow, complete with configurations, scripts, and metadata, enabling interoperability. The platform’s open-source nature ensures ongoing community-driven development and scalability. GRAPEVNE empowers researchers and public health institutions by simplifying complex analytical workflows, fostering data-driven discovery, and enhancing reproducibility in computational research. Its user-driven ecosystem encourages continuous innovation in biomedical and epidemiological research but is applicable beyond that. Key use-cases include automated phylogenetic analysis of viral sequences, real-time outbreak monitoring, forecasting, and epidemiological data processing. For instance, our dengue virus pipeline demonstrates end-to-end automation from sequence retrieval to phylogeographic inference, leveraging established bioinformatics tools which can be deployed to any geographical context. For more details, see documentation at: https://grapevne.readthedocs.io

  • Journal: Wellcome Open Research
  • Publication Date: May 27, 2025
  • Author: John-Stuart Brittain + 13

Exploring Video Streaming Redirection in Edge-Cloud Infrastructures for Mobile Users

Video streaming has become one of the most popular applications in recent years. With the exponential increase in demand for content, the development of adaptive solutions that balance the efficient use of computational resources while maintaining a high quality of experience for users, especially mobile users, has become essential. In this context, the integration between edge and cloud infrastructures emerges as a promising approach for delivering high-quality video as the user moves. This integration encompasses a wide range of devices, from mobile equipment to servers in data centers, including intermediary servers based on fog computing, many of which are deployed near 4G/5G base stations. This work proposes to investigate rule-based autonomic computing strategies for the dynamic adaptation of streaming video services to better utilize computational resources in edge and cloud infrastructures. Specifically, it explores the content-steering architecture, implemented by the Dynamic Adaptive Streaming over HTTP (DASH) protocol, as a solution to optimize streaming video, focusing on enhancing the quality of experience for mobile users.
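
A toy steering rule in the spirit of the rule-based adaptation described above could look like the sketch below; the server names, thresholds, and metric fields are invented for illustration and do not reproduce the paper's policy.

```python
"""Toy content-steering rule: prefer a nearby healthy edge/fog server,
fall back to the cloud origin otherwise. Names, thresholds, and metric
fields are illustrative."""
from dataclasses import dataclass

@dataclass
class ServerMetrics:
    name: str
    rtt_ms: float     # measured round-trip time from the client
    load_pct: float   # current server utilization

def steer(edges: list[ServerMetrics],
          cloud_origin: str = "cdn-cloud.example.com",
          max_rtt_ms: float = 80.0, max_load_pct: float = 85.0) -> str:
    """Return the host the DASH client should be steered to next."""
    healthy = [s for s in edges if s.rtt_ms <= max_rtt_ms and s.load_pct <= max_load_pct]
    if healthy:
        return min(healthy, key=lambda s: s.rtt_ms).name   # closest healthy edge
    return cloud_origin                                     # no edge can serve well

if __name__ == "__main__":
    edges = [ServerMetrics("edge-4g-site-a.example.com", 22.0, 91.0),
             ServerMetrics("edge-5g-site-b.example.com", 35.0, 60.0)]
    print(steer(edges))   # -> edge-5g-site-b.example.com (site A is overloaded)
```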

  • Journal: Anais do Computer on the Beach
  • Publication Date: May 27, 2025
  • Author: Eduardo Cristaldo Panizzon + 2

A Nucleolus-Based Approach for Cloud Resource Allocation

Cloud computing has transformed organizational operations by enabling flexible resource allocation and reducing upfront hardware investments. However, the growing complexity of resource management, particularly for computing instances, has led to challenges in cost control and resource allocation. Fair allocation policies, such as max-min fairness and Dominant Resource Fairness, aim to distribute resources fairly among users. In recent years, the FinOps framework has emerged to address cloud cost management, empowering teams to manage their own resource usage and budgets. The allocation of resources among competing product teams within an organization can be modelled as a cooperative game, where teams with competing priorities must negotiate resource allocation based on their claims and the available budget. The article explores cloud resource allocation as a cooperative game, particularly in situations where the total budget is insufficient to meet all teams' demands. Several resource allocation methods are discussed, including the proportional rule and the nucleolus-based approach, which seeks to minimize the coalitions' incentives to deviate. The nucleolus method offers a stable and fair solution by distributing resources in a way that maximizes stability and reduces the likelihood of coalitions deviating from the overall allocation. This approach ensures that no team is allocated more than its claim and maintains fairness by adhering to principles such as claim boundaries, monotonicity, and resource constraints. Ultimately, the nucleolus-based method is proposed as an effective solution for allocating cloud resources in a cooperative and stable manner, ensuring that resource allocation is both fair and efficient.
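
For contrast with the nucleolus, the simpler proportional rule discussed above is easy to sketch; computing the nucleolus itself requires solving a sequence of linear programs and is omitted here. The team names and figures below are made up.

```python
"""Sketch of the proportional division rule for a cloud budget that cannot
cover every team's claim. This is the baseline the article compares against,
not the nucleolus computation. Team names and figures are hypothetical."""

def proportional_rule(claims: dict[str, float], budget: float) -> dict[str, float]:
    """Each team receives budget * claim / total_claims; when the budget falls
    short of total demand, no team can receive more than its claim."""
    total = sum(claims.values())
    if total <= budget:
        return dict(claims)  # every claim can be met in full
    return {team: budget * claim / total for team, claim in claims.items()}

if __name__ == "__main__":
    claims = {"search": 40_000.0, "ads": 25_000.0, "ml-platform": 15_000.0}
    for team, share in proportional_rule(claims, budget=60_000.0).items():
        print(f"{team:12s} claim={claims[team]:>8.0f}  allocated={share:>8.0f}")
```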

  • Journal: NaUKMA Research Papers. Computer Science
  • Publication Date: May 12, 2025
  • Author: Bohdan Artiushenko

Best Practices in Implementing Azure Entra Conditional Access for Multi-Tenant Environments

Azure Entra Conditional Access is a first-class security product that enforces identity and access management policies in multi-tenant environments to secure access to the most important resources. Azure Entra lets businesses manage user identities, strengthen protections, and reduce risk across hybrid cloud infrastructure through integration with Azure Active Directory. This article discusses the main features, practices, and benefits of Azure Entra Conditional Access, which enable granular security policies based on criteria such as user role, device compliance, location, and risk assessment. It describes Conditional Access as a means of improving regulatory compliance in industries such as finance, healthcare, and government, helping organizations meet standards such as GDPR, HIPAA, and PCI-DSS. The article also covers real-time monitoring, incident response workflows, and AI-based adaptive access policies for securing enterprise environments, and illustrates through case studies and practical recommendations how safeguarding resources with Azure Entra supports operational efficiency. As digital transformation accelerates, Azure Entra Conditional Access will remain a leading means of securing access to cloud and on-premises resources, allowing businesses to meet modern IT security requirements while reducing risk.
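
Conceptually, a conditional-access decision combines the criteria listed above into a grant control. The sketch below is a generic illustration of that idea, not Microsoft's evaluation engine or the Microsoft Graph policy schema.

```python
"""Conceptual sketch of conditional-access style evaluation: combine user
role, device compliance, named location, and sign-in risk into a grant
decision. Roles, fields, and rules are illustrative assumptions."""
from dataclasses import dataclass

@dataclass
class SignIn:
    user_role: str          # e.g. "finance", "admin", "guest"
    device_compliant: bool
    trusted_location: bool
    risk_level: str         # "low" | "medium" | "high"

def evaluate(signin: SignIn) -> str:
    """Return the control to enforce: block, require MFA, or allow."""
    if signin.risk_level == "high":
        return "block"
    if signin.user_role == "admin" or not signin.trusted_location:
        return "require_mfa"
    if not signin.device_compliant:
        return "require_mfa"
    return "allow"

if __name__ == "__main__":
    print(evaluate(SignIn("finance", device_compliant=True,
                          trusted_location=False, risk_level="low")))  # require_mfa
```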

  • Journal: International Journal of Networks and Security
  • Publication Date: May 10, 2025
  • Author: Pramod Gannavarapu

AI-Driven Strategies for Cloud Cost Optimization

Cloud infrastructure has become essential for modern enterprises, but managing associated costs presents significant challenges. Organizations struggle with overprovisioning, resource inefficiencies, and unexpected billing spikes, wasting substantial portions of their cloud spend. Artificial intelligence offers powerful solutions by introducing data-driven decision-making into cloud resource management. This article explores five AI-powered strategies for cloud cost optimization: machine learning in predictive cost management, AI-optimized resource allocation and workload auto-scaling, comparative solutions from major providers like Google Cloud's Recommender AI and AWS Compute Optimizer, serverless computing paired with AI, and approaches to overcome challenges in implementation. Organizations implementing these technologies achieve substantial cost reductions while maintaining or improving application performance, demonstrating that AI-driven optimization represents the future of efficient cloud financial management.

  • Journal: International Journal on Science and Technology
  • Publication Date: May 10, 2025
  • Author: Anup Raja Sarabu

A Cloud Computing Framework for Space Farming Data Analysis

This study presents a system framework by which cloud resources are utilized to analyze crop germination status in a 2U CubeSat. This research aims to address the onboard computing constraints in nanosatellite missions to boost space agricultural practices. Communication between ESP-32 modules was established through the Espressif Simple Protocol for Network-on-Wireless (ESP-NOW) technology. The corresponding sensor readings and image data were securely streamed through Amazon Web Services Internet of Things (AWS IoT) to an ESP-NOW receiver and Roboflow. Real-time monitoring of plant growth predictors was implemented through the web application provisioned at the receiver end, while sprouts on the germination bed were detected by the custom-trained Roboflow computer vision model. The feasibility of remote data computational analysis and monitoring for a 2U CubeSat, given its minute form factor, was successfully demonstrated through the proposed cloud framework. The germination detection model achieved a mean average precision (mAP), precision, and recall of 99.5%, 99.9%, and 100.0%, respectively. The temperature, humidity, heat index, LED and fogger states, and bed sprout data were shown in real time through a web dashboard. With this use case, immediate actions can be taken when abnormalities occur. The scalable nature of the framework allows adaptation to various crops to support sustainable agricultural activities in extreme environments such as space farming.
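
A minimal sketch of the telemetry path, publishing sensor readings to AWS IoT Core over MQTT with mutual TLS, is shown below; the endpoint, topic, certificate paths, and payload values are placeholders rather than the mission's actual configuration.

```python
"""Sketch of streaming germination-bed telemetry to AWS IoT Core over MQTT
with mutual-TLS certificates. Endpoint, topic, and certificate paths are
placeholders; payload fields follow the sensors named in the abstract."""
import json
import time

import paho.mqtt.client as mqtt  # paho-mqtt 1.x constructor shown; 2.x also needs a callback_api_version argument

ENDPOINT = "xxxxxxxx-ats.iot.us-east-1.amazonaws.com"   # placeholder AWS IoT endpoint
TOPIC = "cubesat/germination/telemetry"                 # placeholder topic

client = mqtt.Client(client_id="cubesat-espnow-receiver")
client.tls_set(ca_certs="AmazonRootCA1.pem",            # placeholder certificate paths
               certfile="device.pem.crt",
               keyfile="private.pem.key")
client.connect(ENDPOINT, port=8883)
client.loop_start()

while True:
    payload = {                       # values would come from the ESP-NOW receiver
        "temperature_c": 24.8,
        "humidity_pct": 61.2,
        "heat_index_c": 25.4,
        "led_on": True,
        "fogger_on": False,
        "sprout_count": 7,            # from the Roboflow detection model
        "timestamp": int(time.time()),
    }
    client.publish(TOPIC, json.dumps(payload), qos=1)
    time.sleep(10)
```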

  • Journal: AgriEngineering
  • Publication Date: May 8, 2025
  • Author: Adrian Genevie Janairo + 3
  • Open Access

Scalable Cloud Architectures for Real-Time AI: Dynamic Resource Allocation for Inference Optimization

As the demand for Artificial Intelligence applications continues to grow across industries, the need for scalable and flexible cloud architectures has become more pronounced. AI workloads, characterized by diverse resource demands, unpredictable traffic patterns, and fluctuating computational requirements, require cloud architectures capable of dynamically adapting to changing conditions. Traditional static cloud resource allocation models often fail to meet the performance and cost-efficiency needs of AI-driven applications. This work explores the concept of dynamic scaling in cloud architectures and its potential to optimize AI workload performance through adaptive resource allocation. The importance of elastic scaling, auto-scaling mechanisms, and predictive analytics for anticipating workload demands is highlighted. Additionally, the use of containerization, serverless computing, and multi-cloud environments in enhancing the flexibility and efficiency of AI workloads is examined. Through an assessment of various techniques and models, a framework for adaptive cloud architectures is proposed that can optimize resource utilization, reduce operational costs, and improve the overall performance of AI applications.

  • Journal: Journal of Computer Science and Technology Studies
  • Publication Date: May 8, 2025
  • Author: Srinivas Chennupati

Cloud Resource Optimization System

Cloud computing environments encounter considerable challenges in effectively allocating resources due to varying demands from users and applications. As businesses progressively transition workloads to cloud infrastructure, achieving optimal resource utilization becomes essential for maintaining service quality and cost-effectiveness. This paper introduces a Cloud Resource Optimization System, an interactive web application designed to help users improve cloud resource utilization based on real-time input parameters such as CPU usage, memory usage, disk storage, and task priority. The system performs dynamic analyses of resource consumption and offers customized optimization recommendations to enhance performance, lower costs, and ensure system stability. In contrast to traditional static resource management systems, our model prioritizes interactivity and task-specific guidance through rule-based dynamic analysis, and improves user decision-making via a visual and user-friendly interface that provides immediate insights for various cloud workloads. This methodology bridges the divide between manual monitoring and automated management by delivering actionable intelligence for proficient cloud resource planning. Our implementation illustrates that adaptive and interactive optimization systems can greatly enhance operational efficiency within cloud computing environments. By providing tailored and accurate recommendations based on real resource usage patterns, the proposed system helps reduce resource waste, optimize expenses, and facilitate scalable cloud operations, acting as a practical bridge between manual cloud resource management and sophisticated automated orchestration tools and making cloud optimization attainable for users without expert knowledge. Keywords—Resource optimization system, Data processing, Reinforcement learning
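
A sketch of the rule-based recommendation idea, with invented thresholds and advisory text, is shown below; it illustrates the input-to-advice mapping rather than the system's actual rule set.

```python
"""Sketch of rule-based, input-driven recommendations: map CPU, memory,
disk, and task priority readings to advisory actions. Thresholds and
wording are illustrative assumptions."""
from dataclasses import dataclass

@dataclass
class ResourceSnapshot:
    cpu_pct: float
    memory_pct: float
    disk_pct: float
    priority: str   # "low" | "medium" | "high"

def recommend(s: ResourceSnapshot) -> list[str]:
    advice = []
    if s.cpu_pct > 80:
        advice.append("Scale out or move to a larger instance: sustained CPU pressure.")
    elif s.cpu_pct < 20 and s.priority == "low":
        advice.append("Downsize the instance: CPU is mostly idle for a low-priority task.")
    if s.memory_pct > 85:
        advice.append("Add memory or enable vertical autoscaling to avoid out-of-memory events.")
    if s.disk_pct > 90:
        advice.append("Expand the volume or archive cold data to object storage.")
    if not advice:
        advice.append("Current allocation looks balanced; no change recommended.")
    return advice

if __name__ == "__main__":
    for line in recommend(ResourceSnapshot(cpu_pct=12, memory_pct=48, disk_pct=93, priority="low")):
        print("-", line)
```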

  • Journal: International Scientific Journal of Engineering and Management
  • Publication Date: May 7, 2025
  • Author: G Akash

Improving Cloud Resource Scaling: A Comparative Analysis of Automatic Scaling Techniques Using the WASPAS Methodology

Cloud computing has transformed resource management by enabling scalable, on-demand services. Effective cloud management is critical to balancing performance, cost, and resource utilization. This study examines various cloud scaling techniques, including rule-based, predictive, container-based, serverless, and hybrid scaling, and evaluates their effectiveness in optimizing cloud resources while ensuring flexibility and reliability. This research provides a comparative analysis of cloud resource scaling strategies, which helps organizations select the most appropriate approach. By evaluating different scaling techniques, this study improves understanding of efficient cloud resource allocation, reduces costs, and ensures performance optimization. It contributes to cloud computing research by addressing key challenges in resource management. The alternatives considered are Rule-Based Auto-Scaling, Predictive Auto-Scaling with AI/ML, Container-Based Scaling, Serverless Computing, and Hybrid Scaling. The evaluation criteria consist of Scalability Efficiency, Resource Utilization, Over-Provisioning Risk, and Complexity of Implementation. According to the results, Serverless Computing was ranked highest, while Hybrid Scaling was ranked lowest. Based on the Weighted Aggregated Sum Product Assessment (WASPAS) method, serverless computing offers the highest value for cloud management and advanced resource scalability.
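
The WASPAS aggregation itself is straightforward to sketch; the ratings and weights below are invented for illustration and do not reproduce the study's decision matrix.

```python
"""Sketch of WASPAS scoring for the five scaling strategies using the four
criteria named above. Ratings and weights are hypothetical."""
import numpy as np

alternatives = ["Rule-Based", "Predictive AI/ML", "Container-Based", "Serverless", "Hybrid"]
criteria = ["Scalability", "Utilization", "Over-Provisioning Risk", "Complexity"]
benefit = np.array([True, True, False, False])   # risk and complexity are cost criteria
weights = np.array([0.3, 0.3, 0.2, 0.2])
# Hypothetical 1-9 ratings (rows = alternatives, columns = criteria).
X = np.array([
    [5, 6, 6, 3],
    [8, 8, 4, 7],
    [7, 7, 4, 5],
    [9, 8, 3, 4],
    [7, 6, 5, 8],
], dtype=float)

def waspas(X, weights, benefit, lam=0.5):
    """Q_i = lam * weighted sum + (1 - lam) * weighted product of normalized ratings."""
    norm = np.where(benefit, X / X.max(axis=0), X.min(axis=0) / X)
    wsm = (norm * weights).sum(axis=1)
    wpm = np.prod(norm ** weights, axis=1)
    return lam * wsm + (1 - lam) * wpm

if __name__ == "__main__":
    for name, score in sorted(zip(alternatives, waspas(X, weights, benefit)),
                              key=lambda t: -t[1]):
        print(f"{name:18s} {score:.3f}")
```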

  • Journal: Computer Science, Engineering and Technology
  • Publication Date: May 6, 2025

Exploring Smartphone-Based Edge AI Inferences Using Real Testbeds.

The increasing availability of lightweight pre-trained models and AI execution frameworks is causing edge AI to become ubiquitous. Particularly, deep learning (DL) models are being used in computer vision (CV) for performing object recognition and image classification tasks in various application domains requiring prompt inferences. Regarding edge AI task execution platforms, some approaches show a strong dependency on cloud resources to complement the computing power offered by local nodes. Other approaches distribute workload horizontally, i.e., by harnessing the power of nearby edge nodes. Many of these efforts experiment with real settings comprising SBC (Single-Board Computer)-like edge nodes only, but few of these consider nomadic hardware such as smartphones. Given the huge popularity of smartphones worldwide and the unlimited scenarios where smartphone clusters could be exploited for providing computing power, this paper sheds some light in answering the following question: Is smartphone-based edge AI a competitive approach for real-time CV inferences? To empirically answer this, we use three pre-trained DL models and eight heterogeneous edge nodes including five low/mid-end smartphones and three SBCs, and compare the performance achieved using workloads from three image stream processing scenarios. Experiments were run with the help of a toolset designed for reproducing battery-driven edge computing tests. We compared latency and energy efficiency achieved by using either several smartphone clusters testbeds or SBCs only. Additionally, for battery-driven settings, we include metrics to measure how workload execution impacts smartphone battery levels. As per the computing capability shown in our experiments, we conclude that edge AI based on smartphone clusters can help in providing valuable resources to contribute to the expansion of edge AI in application scenarios requiring real-time performance.

  • Journal: Sensors (Basel, Switzerland)
  • Publication Date: May 2, 2025
  • Author: Matías Hirsch + 2

Dynamic neighborhood grouping-based multi-objective scheduling algorithm for workflow in hybrid cloud

  • Journal: Future Generation Computer Systems
  • Publication Date: May 1, 2025
  • Author: Yulin Guo + 4

A Cluster-Based Energy-Efficient Q-Ant Colony Optimization (CEQACO) Framework for Cloud Computing Environments

In cloud computing, resource allocation and cloudlet scheduling are fundamental issues when dealing with a medium to large number of tasks. To meet consumer expectations and achieve optimal performance, multiple cloudlets need to be executed simultaneously using available resources, while minimizing makespan and effectively balancing the load. Despite ongoing developments in cloud computing, this technology faces numerous challenges, one of which is task scheduling. Task scheduling involves allocating users' tasks to virtual machines (VMs) to minimize turnaround time and improve resource utilization. This is an NP-hard problem with a solution space of O(m^n) possible assignments, making it challenging to schedule n tasks on m resources. The process of task scheduling requires exploring a large solution space, and there is a lack of algorithms that can find the optimal solution in polynomial runtime. This paper proposes a Cluster-based Energy Efficient Q-Ant Colony Optimization (CEQACO) framework for cloud computing environments. The framework utilizes clustering techniques to group cloud virtual machines (VMs) based on their workload characteristics and applies a Q-Ant Colony Optimization algorithm to optimize the allocation of VMs to physical servers. The results show that the CEQACO framework can reduce computation time and energy consumption by up to 6% while still meeting the quality of service requirements of cloud users.
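
The clustering stage can be sketched as grouping VMs by workload profile before scheduling; the telemetry features and cluster count below are assumptions, and the Q-Ant Colony Optimization stage that CEQACO layers on top is not shown.

```python
"""Sketch of the clustering stage: group VMs by workload profile before
scheduling. Features and cluster count are illustrative assumptions."""
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# columns: mean CPU %, mean memory %, I/O ops per second (hypothetical VM telemetry)
vm_workloads = np.column_stack([
    rng.uniform(5, 95, 30),
    rng.uniform(10, 90, 30),
    rng.uniform(100, 5000, 30),
])

features = StandardScaler().fit_transform(vm_workloads)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)

for cluster_id in range(3):
    members = np.where(labels == cluster_id)[0]
    print(f"cluster {cluster_id}: VMs {members.tolist()}")
```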

  • Journal: Journal of Information Systems Engineering and Management
  • Publication Date: May 1, 2025
  • Author: M Rupasri
  • Open Access

A variant of particle swarm optimization in cloud computing environment for scheduling workflow applications

Cloud computing offers on-demand access to shared resources, with user costs based on resource usage and execution time. To attract users, cloud providers need efficient schedulers that minimize these costs. Achieving cost minimization is challenging due to the need to consider both execution and data transfer costs. Existing scheduling techniques often fail to balance these costs effectively. This study proposes a variant of the particle swarm optimization algorithm (VPSO) for scheduling workflow applications in a cloud computing environment. The approach aims to reduce both execution and communication costs. We compared VPSO with several PSO variants, including inertia-weighted PSO, Gaussian disturbed particle swarm optimization (GDPSO), dynamic PSO, and dynamic adaptive particle swarm optimization with self-supervised learning (DAPSO-SSL). Results indicate that VPSO generally offers significant cost reductions and efficient workload distribution across resources, although there are specific scenarios where other algorithms perform better. VPSO provides a robust and cost-effective solution for cloud workflow scheduling, enhancing task-resource mapping and reducing costs compared to existing methods. Future research will explore further enhancements and additional PSO variants to optimize cloud resource management.
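
For readers unfamiliar with the baseline, a toy PSO for task-to-VM mapping is sketched below; it shows the generic PSO loop rather than the proposed VPSO variant, and the cost matrix and coefficients are invented.

```python
"""Toy PSO for mapping n tasks onto m VMs to minimize total cost. This is
the generic PSO loop, not the paper's VPSO variant; costs, inertia, and
acceleration coefficients are illustrative."""
import numpy as np

rng = np.random.default_rng(1)
n_tasks, n_vms, n_particles, iterations = 12, 4, 20, 100
# cost[t, v]: cost of running task t on VM v (execution + data transfer, made up)
cost = rng.uniform(1.0, 10.0, size=(n_tasks, n_vms))

def fitness(position: np.ndarray) -> float:
    """Decode a continuous position into a task->VM mapping and sum its cost."""
    mapping = np.clip(position, 0, n_vms - 1e-9).astype(int)
    return float(cost[np.arange(n_tasks), mapping].sum())

pos = rng.uniform(0, n_vms, size=(n_particles, n_tasks))
vel = np.zeros_like(pos)
pbest, pbest_fit = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_fit.argmin()].copy()

w, c1, c2 = 0.7, 1.5, 1.5   # inertia and acceleration coefficients
for _ in range(iterations):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, n_vms - 1e-9)
    fit = np.array([fitness(p) for p in pos])
    improved = fit < pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fit[improved]
    gbest = pbest[pbest_fit.argmin()].copy()

print("best mapping:", np.clip(gbest, 0, n_vms - 1e-9).astype(int), "cost:", fitness(gbest))
```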

  • Journal: Indonesian Journal of Electrical Engineering and Computer Science
  • Publication Date: May 1, 2025
  • Author: Ashish Tripathi + 6

Toward a conceptual model to improve the user experience of a sustainable and secure intelligent transport system.

  • Journal: Acta Psychologica
  • Publication Date: May 1, 2025
  • Author: Abdullah Alsaleh

