Articles published on Energy Efficient Deployment (published in the last 50 years)
- Research Article
- 10.1145/3749463
- Sep 3, 2025
- Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
- Lei Wang + 7 more
Speech enhancement can greatly improve the user experience during phone calls in low signal-to-noise ratio (SNR) scenarios. In this paper, we propose a low-cost, energy-efficient, and environment-independent speech enhancement system, namely AccCall, that improves phone call quality using the smartphone's built-in accelerometer. However, a significant gap remains between the underlying insight and its practical application, as several critical challenges must be addressed, including efficient speech enhancement in cross-user scenarios, adaptive system triggering to reduce energy consumption, and lightweight deployment for real-time processing. To this end, we first design the Acc-Aided Network (AccNet), a cross-modal deep learning model inherently capable of cross-user generalization through three key components: a cross-modal fusion module, an accelerometer-aided (acc-aided) mask generator, and a unified loss function. Second, we adopt a machine learning-based approach instead of deep learning to accurately distinguish call activity states for adaptive system triggering, ensuring lower energy consumption and efficient deployment on mobile platforms. Finally, we propose a knowledge-distillation-driven structured pruning framework that optimizes model efficiency while preserving performance. Extensive experiments with 20 participants were conducted under a user-independent scenario. The results show that AccCall achieves excellent and reliable adaptive triggering performance and enables substantial real-time improvements in SI-SDR, SI-SNR, STOI, PESQ, and WER, demonstrating the superiority of our system in enhancing speech quality and intelligibility for phone calls.
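As a rough illustration of the knowledge-distillation idea mentioned in this abstract (not the authors' actual AccCall framework), the sketch below blends a hard loss against the clean target with a soft loss that makes a pruned student imitate a frozen teacher's time-frequency mask. The mask-based formulation and the alpha weighting are assumptions.

```python
# Minimal sketch of a distillation objective for a pruned speech-enhancement
# student; the mask-based formulation and alpha weighting are illustrative
# assumptions, not AccCall's published loss.
import torch
import torch.nn.functional as F

def distill_enhancement_loss(student_mask, teacher_mask, noisy_mag, clean_mag, alpha=0.5):
    hard = F.mse_loss(student_mask * noisy_mag, clean_mag)   # match the clean speech
    soft = F.mse_loss(student_mask, teacher_mask.detach())   # imitate the frozen teacher
    return alpha * hard + (1.0 - alpha) * soft

# Toy magnitude spectrograms: (batch, freq_bins, frames).
noisy = torch.rand(2, 257, 100)
clean = torch.rand(2, 257, 100)
teacher_mask = torch.rand(2, 257, 100)
student_mask = torch.rand(2, 257, 100, requires_grad=True)
print(distill_enhancement_loss(student_mask, teacher_mask, noisy, clean).item())
```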
- Research Article
- 10.1007/s12273-025-1335-6
- Sep 1, 2025
- Building Simulation
- Zhe Wang + 6 more
Optimizing building energy systems based on real-time occupant behavior and feedback can lead to improved energy efficiency and enhanced thermal comfort in buildings. Traditional thermal comfort surveys do not provide real-time insights, while conventional sensors, such as thermal sensors, are limited in their ability to capture continuous, detailed occupancy data. Meanwhile, deep learning and computer vision have emerged as promising approaches for real-time occupancy behavior detection, but existing artificial intelligence (AI) models suffer from low frame rates and high computational demands, which can lead to increased energy consumption for processing, potentially offsetting the energy savings achieved through occupant-responsive control. Thus, this study developed a novel occupant thermal adaptation behavior recognition model that balances accuracy, real-time performance, and computational resource usage to enable effective operation indoors. Using a multi-camera setup with Raspberry Pi 3B+, a custom dataset comprising 400 video samples was collected from four different angles. The dataset captures four distinct human activities: dressing, undressing, sitting, and standing. Compared to SlowFast (SF) and Spatial Temporal Graph Convolutional Networks (ST-GCN), which are widely used deep learning architectures for action recognition, the proposed lightweight skeletal temporal model achieved high accuracy (0.975) on the Kungliga Tekniska Högskolan (KTH) dataset while significantly outperforming them in detection speed and resource efficiency. It reached 31.38 FPS running on the graphics processing unit (GPU)—over three times faster than ST-GCN with OpenPose and more than twelve times faster than SF with You Only Look Once Version X (YOLOX)—while maintaining low central processing unit (CPU) and GPU usage at 13.71% and 33.05%, respectively. Running on the CPU, it achieved 25.3 FPS with 56.10% CPU usage, proving its practicality for platforms without GPU support. When evaluated on the custom dataset, we introduced a double long short-term memory (LSTM) network with an attention mechanism to better handle the increased action complexity, preserving a high accuracy of 0.963. Although the frame rate experienced a slight reduction compared to the results on the KTH dataset—dropping from 31.38 to 30.95 FPS on GPU and from 25.3 to 18.98 FPS on CPU—the model exhibited lower CPU and GPU usage, highlighting its potential for energy-efficient deployment in smart building applications. The model was further deployed on an NVIDIA Jetson Orin Nano, enabling stable long-term operation and supporting simultaneous multi-person recognition. Overall, this study presents a practical, AI-driven solution for occupant thermal adaptation behavior recognition, effectively balancing accuracy, real-time performance, and computational efficiency—making it well suited for energy-saving applications in buildings that adapt dynamically to occupant behavior.
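A minimal sketch of a double-LSTM-with-attention classifier of the kind the abstract describes, operating on per-frame skeleton features. The input size (17 two-dimensional keypoints per frame), hidden width, and number of classes are hypothetical choices, not the paper's configuration.

```python
import torch
import torch.nn as nn

class DoubleLSTMAttention(nn.Module):
    """Two stacked LSTMs over per-frame skeleton features, followed by
    additive attention pooling over time and a small classifier head."""
    def __init__(self, in_dim=34, hidden=64, n_classes=4):
        super().__init__()
        self.lstm1 = nn.LSTM(in_dim, hidden, batch_first=True)
        self.lstm2 = nn.LSTM(hidden, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)           # one attention score per frame
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                           # x: (batch, frames, in_dim)
        h, _ = self.lstm1(x)
        h, _ = self.lstm2(h)
        w = torch.softmax(self.attn(h), dim=1)      # (batch, frames, 1)
        pooled = (w * h).sum(dim=1)                 # attention-weighted average over time
        return self.head(pooled)

model = DoubleLSTMAttention()
clip = torch.randn(2, 30, 34)                       # 2 clips, 30 frames, 17 keypoints (x, y)
print(model(clip).shape)                            # torch.Size([2, 4])
```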
- Research Article
- 10.64252/s5xrft04
- Jul 2, 2025
- International Journal of Environmental Sciences
- Shikha Gour + 2 more
Modern, state-of-the-art developments in communications infrastructure often lead to load optimization and energy savings when paired with the architectural resources of WSNs and multi-objective optimization. Separating the issues of WSN design, routing, an energy-efficient deployment strategy, and multi-objective optimization is essential. By examining the building method and cluster gateways with different goals in mind, we demonstrate the load calculation procedure. Our design technique for clustering, gateway discovery management, load calculation, and load relocation is based on the input variables, anticipated output, objectives, and limitations of wireless sensor networks. We then put the cluster gateway into action and examine the subsequent choices made for traffic optimization and distribution. Optimal load management in wireless sensor networks is problematic due to several constraints. Multi-objective optimization in wireless sensor networks might use a cluster-based load distribution approach to accommodate heterogeneous networks, for example by spreading an ongoing gateway's transmission across cluster nodes, as in a collaborative wireless sensor network protocol built on the LEACH architecture.
- Research Article
- 10.3389/frai.2025.1590599
- Jun 19, 2025
- Frontiers in Artificial Intelligence
- Houmem Slimi + 3 more
Alzheimer’s disease (AD) is a progressive neurodegenerative disorder characterized by cognitive decline and structural brain alterations such as cortical atrophy and hippocampal degeneration. Early diagnosis remains challenging due to subtle neuroanatomical changes in early stages. This study proposes a hybrid convolutional neural network-spiking neural network (CNN-SNN) architecture to classify AD stages using structural MRI (sMRI) data from the Alzheimer’s Disease Neuroimaging Initiative (ADNI). The model synergizes CNNs for hierarchical spatial feature extraction and SNNs for biologically inspired temporal dynamics processing. The CNN component processes image slices through convolutional layers, batch normalization, and dropout, while the SNN employs leaky integrate-and-fire (LIF) neurons across 25 time steps to simulate temporal progression of neurodegeneration—even with static sMRI inputs. Trained on a three-class task [AD, mild cognitive impairment (MCI), and cognitively normal (CN) subjects], the hybrid network optimizes mean squared error (MSE) loss with L2 regularization and Adam, incorporating early stopping to enhance generalization. Evaluation on ADNI data demonstrates robust performance, with training/validation accuracy and loss tracked over 30 epochs. Classification metrics (precision, recall, F1-score) highlight the model’s ability to disentangle complex spatiotemporal patterns in neurodegeneration. Visualization of learning curves further validates stability during training. An ablation study demonstrates the SNN’s critical role, with its removal reducing accuracy from 99.58 to 75.67%, underscoring the temporal module’s importance. The SNN introduces architectural sparsity via spike-based computation, reducing overfitting and enhancing generalization while aligning with neuromorphic principles for energy-efficient deployment. By bridging deep learning with neuromorphic principles, this work advances AD diagnostic frameworks, offering a computationally efficient and biologically plausible approach for clinical neuroimaging. The results underscore the potential of hybrid CNN-SNN architectures to improve early detection and stratification of neurodegenerative diseases, paving the way for future applications in neuromorphic healthcare systems.
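The leaky integrate-and-fire behavior described in this abstract can be sketched in a few lines: the same static input current is injected at every step, and the membrane decays, fires, and resets. The 25 time steps come from the abstract; the decay factor, threshold, and reset-by-subtraction rule are generic assumptions rather than the paper's exact parameters.

```python
import torch

def lif_unroll(current, steps=25, beta=0.9, threshold=1.0):
    """Leaky integrate-and-fire dynamics: the membrane potential decays by beta,
    accumulates the (static) input current, emits a spike when it crosses the
    threshold, and is reset by subtraction. Returns the firing rate per unit."""
    mem = torch.zeros_like(current)
    spikes = torch.zeros_like(current)
    for _ in range(steps):
        mem = beta * mem + current
        fired = (mem >= threshold).float()
        mem = mem - fired * threshold
        spikes += fired
    return spikes / steps            # firing rate in [0, 1]

features = torch.rand(4, 128) * 0.3  # stand-in for CNN feature activations
print(lif_unroll(features).mean())
```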
- Research Article
- 10.1109/jiot.2025.3551498
- Jun 15, 2025
- IEEE Internet of Things Journal
- Yiying Zhang
Energy-Efficient Deployment and Offloading Strategy in a Multi-AAV-Assisted MEC System
- Research Article
- 10.3390/s25123648
- Jun 11, 2025
- Sensors (Basel, Switzerland)
- Gyeonghyeon Min + 1 more
In unmanned aerial vehicle (UAV)-mounted base station (MBS) networks, user equipment (UE) experiences dynamic channel variations because of the mobility of the UAV and the changing weather conditions. In order to overcome the degradation in the quality of service (QoS) of the UE due to channel variations, it is important to appropriately determine the three-dimensional (3D) position and transmission power of the base station (BS) mounted on the UAV. Moreover, it is also important to account for both geographical and meteorological factors when deploying UAV-MBSs because they service ground UE in various regions and atmospheric environments. In this paper, we propose an energy-efficient UAV-MBS deployment scheme in multi-UAV-MBS networks using a hybrid improved simulated annealing-particle swarm optimization (ISA-PSO) algorithm to find the 3D position and transmission power of each UAV-MBS. Moreover, we developed a simulator for deploying UAV-MBSs, which took the dynamic weather conditions into consideration. The proposed scheme for deploying UAV-MBSs demonstrated superior performance, where it achieved faster convergence and higher stability compared with conventional approaches, making it well suited for practical deployment. The developed simulator integrates terrain data based on geolocation and real-time weather information to produce more practical results.
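A rough sketch of how a hybrid simulated-annealing/particle-swarm search can be wired together for a placement problem like the one above. The cost function, the particle encoding [x, y, altitude, transmit power], and the acceptance/cooling parameters are placeholders, not the paper's ISA-PSO formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(p):
    # Hypothetical placeholder for the energy-efficiency objective of one
    # UAV-MBS configuration p = [x, y, altitude, tx_power]; lower is better.
    return np.sum((p - np.array([50.0, 50.0, 100.0, 30.0])) ** 2)

def hybrid_sa_pso(n=20, iters=200, dim=4, lo=0.0, hi=200.0,
                  w=0.7, c1=1.5, c2=1.5, temp=1.0, cooling=0.98):
    x = rng.uniform(lo, hi, (n, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.apply_along_axis(cost, 1, x)
    best = pbest[pbest_val.argmin()].copy()
    best_val = pbest_val.min()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (best - x)
        x = np.clip(x + v, lo, hi)
        for i, p in enumerate(x):
            f = cost(p)
            # Simulated-annealing acceptance: occasionally keep a worse personal
            # best while the temperature is high, to escape local optima.
            if f < pbest_val[i] or rng.random() < np.exp(-(f - pbest_val[i]) / temp):
                pbest[i], pbest_val[i] = p.copy(), f
                if f < best_val:
                    best, best_val = p.copy(), f
        temp *= cooling
    return best, best_val

print(hybrid_sa_pso())
```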
- Research Article
- 10.1080/08874417.2025.2483832
- Apr 26, 2025
- Journal of Computer Information Systems
- Laurie Hughes + 19 more
ABSTRACT The emergence of AI agents and agentic systems represents a significant milestone in artificial intelligence, enabling autonomous systems to operate, learn, and collaborate in complex environments with minimal human intervention. This paper, drawing on multi-expert perspectives, examines the potential of AI agents and agentic systems to reshape industries by decentralizing decision-making, redefining organizational structures, and enhancing cross-functional collaboration. Specific applications include healthcare systems capable of creating adaptive treatment plans, supply chain agents that predict and address disruptions in real-time, and business process automation that reallocates tasks from humans to AI, improving efficiency and innovation. However, the integration of these systems raises critical challenges, including issues of attribution and shared accountability in decision-making, compatibility with legacy systems, and addressing biases in AI-driven processes. The paper concludes that while agentic systems hold immense promise, robust governance frameworks, cross-industry collaboration, and interdisciplinary research into ethical design are essential. Future research should explore adaptive workforce reskilling strategies, transparent accountability mechanisms, and energy-efficient deployment models to ensure ethical and scalable implementation.
- Research Article
- 10.1109/tpami.2024.3483654
- Feb 1, 2025
- IEEE transactions on pattern analysis and machine intelligence
- Zhehui Wang + 5 more
Large language models (LLMs) have garnered substantial attention due to their promising applications in diverse domains. Nevertheless, the increasing size of LLMs comes with a significant surge in the computational requirements for training and deployment. Memristor crossbars have emerged as a promising solution, which demonstrated a small footprint and remarkably high energy efficiency in computer vision (CV) models. Memristors possess higher density compared to conventional memory technologies, making them highly suitable for effectively managing the extreme model size associated with LLMs. However, deploying LLMs on memristor crossbars faces three major challenges. First, the size of LLMs increases rapidly, already surpassing the capabilities of state-of-the-art memristor chips. Second, LLMs often incorporate multi-head attention blocks, which involve non-weight stationary multiplications that traditional memristor crossbars cannot support. Third, while memristor crossbars excel at performing linear operations, they are not capable of executing complex nonlinear operations in LLM such as softmax and layer normalization. To address these challenges, we present a novel architecture for the memristor crossbar that enables the deployment of state-of-the-art LLM on a single chip or package, eliminating the energy and time inefficiencies associated with off-chip communication. Our testing on BERT showed negligible accuracy loss. Compared to traditional memristor crossbars, our architecture achieves enhancements of up to in area overhead and in energy consumption. Compared to modern TPU/GPU systems, our architecture demonstrates at least a reduction in the area-delay product and a significant 69% energy consumption reduction.
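For readers unfamiliar with why crossbars suit weight-stationary linear layers, the sketch below shows an idealized memristor-crossbar matrix-vector product: signed weights are split across a positive and a negative conductance column, inputs are applied as voltages, and column currents are summed. The conductance range and differential-pair mapping are generic assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def crossbar_matvec(weights, x, g_min=1e-6, g_max=1e-4):
    """Idealised crossbar matrix-vector product: each signed weight is encoded
    as a differential pair of conductances, input voltages drive the rows, and
    the analog column currents are read out and rescaled digitally."""
    w_max = np.abs(weights).max()
    scale = (g_max - g_min) / w_max
    g_pos = g_min + scale * np.clip(weights, 0, None)
    g_neg = g_min + scale * np.clip(-weights, 0, None)
    i_out = x @ g_pos - x @ g_neg        # summed column currents (Kirchhoff)
    return i_out / scale                 # back to the weight domain

W = rng.normal(size=(64, 32))
x = rng.normal(size=64)
print(np.allclose(crossbar_matvec(W, x), x @ W, atol=1e-6))   # True
```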
- Research Article
- 10.1007/s41870-024-02333-8
- Jan 24, 2025
- International Journal of Information Technology
- R Gayathri + 1 more
VORONOI-KHHS approach coupled with EESC protocol for energy-efficient deployment and secure communication in WSNs
- Research Article
- 10.3390/electronics14030445
- Jan 23, 2025
- Electronics
- Tian Liu + 7 more
As the global population grows, vertical farming offers a promising solution by using vertically stacked shelves in controlled environments to grow crops efficiently within urban areas. However, the shading effects of farm structures make artificial lighting a significant cost, accounting for approximately 67% of total operational expenses. This study presents a novel approach to optimizing the deployment of laser illumination in rotating vertical farms by incorporating structural design, light modeling, and photosynthesis. By theoretically analyzing the beam pattern of laser diodes and the dynamics of the coverage area of rotating farm layers, we accurately characterize the light conditions on each vertical layer. Based on these insights, we introduce a new criterion, cumulative coverage, which accounts for both light intensity and coverage area. An optimization framework is then formulated, and a swarm intelligence algorithm, Differential Evolution (DE), is used to solve the optimization while considering the structural and operational constraints. It is found that tilting lights and placing them slightly off-center are more effective than traditional vertically aligned and center-aligned deployment. Our results show that the proposed strategy improves light coverage by 4% compared to the intensity-only optimization approach, and by 10% compared to empirical methods. This study establishes the first theoretical framework for designing energy-efficient artificial lighting deployment strategies, providing insights into enhancing the efficiency of vertical farming systems.
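A toy illustration of optimizing lamp tilt and radial offset with Differential Evolution, using SciPy's implementation. The Gaussian beam footprint, lamp height, and rotating-layer geometry below are made-up stand-ins for the paper's light and cumulative-coverage models.

```python
import numpy as np
from scipy.optimize import differential_evolution

def neg_cumulative_coverage(params):
    """Toy stand-in for a cumulative-coverage criterion: a lamp with a given
    tilt (rad) and radial offset (m) illuminates plant positions sampled over
    one rotation of a 1 m-radius layer; coverage is the summed intensity."""
    tilt, offset = params
    angles = np.linspace(0, 2 * np.pi, 72, endpoint=False)   # sampled plant positions
    plant_x, plant_y = np.cos(angles), np.sin(angles)
    # Beam centre on the layer, displaced by tilting a lamp mounted 0.5 m above it.
    beam_x, beam_y = offset + 0.5 * np.tan(tilt), 0.0
    d2 = (plant_x - beam_x) ** 2 + (plant_y - beam_y) ** 2
    intensity = np.exp(-d2 / 0.2) * np.cos(tilt)              # tilt spreads but dims the beam
    return -intensity.sum()                                    # DE minimises, so negate coverage

bounds = [(0.0, np.pi / 4), (-0.5, 0.5)]                       # tilt, radial offset
result = differential_evolution(neg_cumulative_coverage, bounds, seed=1)
print(result.x, -result.fun)
```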
- Research Article
- 10.1109/tcad.2024.3443718
- Nov 1, 2024
- IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
- Moritz Scherer + 7 more
With the rise of Embodied Foundation Models (EFMs), most notably Small Language Models (SLMs), adapting Transformers for edge applications has become a very active field of research. However, achieving end-to-end deployment of SLMs on microcontroller (MCU)-class chips without high-bandwidth off-chip main memory access is still an open challenge. In this paper, we demonstrate high-efficiency end-to-end SLM deployment on a multicore RISC-V (RV32) MCU augmented with ML instruction extensions and a hardware neural processing unit (NPU). To automate the exploration of the constrained, multi-dimensional memory vs. computation tradeoffs involved in aggressive SLM deployment on heterogeneous (multicore+NPU) resources, we introduce Deeploy, a novel Deep Neural Network (DNN) compiler, which generates highly-optimized C code requiring minimal runtime support. We demonstrate that Deeploy generates end-to-end code for executing SLMs, fully exploiting the RV32 cores' instruction extensions and the NPU: We achieve leading-edge energy and throughput of 490 µJ/Token, at 340 Token/s for an SLM trained on the TinyStories dataset, running for the first time on an MCU-class device without external memory.
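A quick sanity check on the reported figures: 490 µJ/token at 340 token/s corresponds to an average power draw of roughly 0.17 W, which is consistent with an MCU-class power budget.

```python
# Average power implied by the reported energy per token and throughput.
energy_per_token_j = 490e-6   # 490 µJ/token
tokens_per_s = 340
print(f"average power ≈ {energy_per_token_j * tokens_per_s:.3f} W")   # ≈ 0.167 W
```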
- Research Article
- 10.1109/jiot.2024.3404666
- Sep 15, 2024
- IEEE Internet of Things Journal
- Heng Wen + 7 more
Power-Control-Based Energy-Efficient Deployment for Underwater Wireless Sensor Networks With Asymmetric Links
- Research Article
- 10.1109/tgcn.2024.3422393
- Sep 1, 2024
- IEEE Transactions on Green Communications and Networking
- Huan Li + 7 more
Energy-Efficient Deployment and Resource Allocation for O-RAN-Enabled UAV-Assisted Communication
- Research Article
- 10.3390/electronics13061151
- Mar 21, 2024
- Electronics
- Haowen Wu + 4 more
This paper presents a novel deep graph-based learning technique for speech emotion recognition which has been specifically tailored for energy efficient deployment within humanoid robots. Our methodology represents a fusion of scalable graph representations, rooted in the foundational principles of graph signal processing theories. By delving into the utilization of cycle or line graphs as fundamental constituents shaping a robust Graph Convolution Network (GCN)-based architecture, we propose an approach which allows the capture of relationships between speech signals to decode intricate emotional patterns and responses. Our methodology is validated and benchmarked against established databases such as IEMOCAP and MSP-IMPROV. Our model outperforms standard GCNs and prevalent deep graph architectures, demonstrating performance levels that align with state-of-the-art methodologies. Notably, our model achieves this feat while significantly reducing the number of learnable parameters, thereby increasing computational efficiency and bolstering its suitability for resource-constrained environments. This proposed energy-efficient graph-based hybrid learning methodology is applied towards multimodal emotion recognition within humanoid robots. Its capacity to deliver competitive performance while streamlining computational complexity and energy efficiency represents a novel approach in evolving emotion recognition systems, catering to diverse real-world applications where precision in emotion recognition within humanoid robots stands as a pivotal requisite.
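To make the cycle-graph idea concrete, the sketch below builds a cycle-graph adjacency over speech frames and applies one symmetric-normalized graph-convolution layer (Kipf and Welling style). The frame count and feature dimensions are illustrative, not the paper's configuration.

```python
import numpy as np

def cycle_adjacency(n):
    """Adjacency matrix of a cycle graph: node i is connected to i-1 and i+1."""
    a = np.zeros((n, n))
    idx = np.arange(n)
    a[idx, (idx + 1) % n] = 1.0
    a[(idx + 1) % n, idx] = 1.0
    return a

def gcn_layer(a, h, w):
    """One graph-convolution layer with symmetric normalisation and ReLU."""
    a_hat = a + np.eye(len(a))                                   # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ a_hat @ d_inv_sqrt @ h @ w, 0.0)

rng = np.random.default_rng(0)
n_frames, feat_dim, out_dim = 16, 40, 8        # e.g. 16 speech frames, 40 features each
features = rng.normal(size=(n_frames, feat_dim))
weights = rng.normal(size=(feat_dim, out_dim)) * 0.1
print(gcn_layer(cycle_adjacency(n_frames), features, weights).shape)   # (16, 8)
```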
- Research Article
- 10.1016/j.adhoc.2024.103463
- Mar 2, 2024
- Ad Hoc Networks
- Kyungho Ryu + 1 more
Energy efficient deployment of aerial base stations for mobile users in multi-hop UAV networks
- Research Article
- 10.1109/ojcoms.2023.3343665
- Jan 1, 2024
- IEEE Open Journal of the Communications Society
- Hussam Ibraiwish + 2 more
Energy Efficient Deployment of VLC-Enabled UAV Using Particle Swarm Optimization
- Research Article
- 10.1007/s00500-023-09498-7
- Dec 28, 2023
- Soft Computing
- Mayank Namdev + 2 more
The specific features of UAVs, such as energy efficiency, dynamic structure, and mobility in a Flying Ad-hoc Network (FANET), are well utilized for the selection of UAVs in a wide variety of applications such as disaster management, rescue management, medical, and military operations. However, the effectiveness of a FANET is reduced by increased packet transmission expenses due to unproductive and indecisive communication among UAVs in a multidimensional environment. Optimal path selection supports proficient message transmission and offers energy-efficient and secure communication over the FANET. Thus, an Improved Honey Badger Optimization based Communication Approach (IHBO_CA) is implemented for optimal path selection among UAVs over the FANET. The sinusoidal chaotic map is combined with the honey badger algorithm to improve the performance of optimized communication. The IHBO_CA is implemented on the MATLAB 2021a platform, and the outcomes demonstrate the greater effectiveness of IHBO_CA against OLSR, MP-OLSR, ACO, PSO, and HBA in terms of energy expenditure, overhead, time complexity, packet delivery ratio, and delay.
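One common formulation of the sinusoidal chaotic map is x_{k+1} = a·x_k²·sin(π·x_k) with a ≈ 2.3; its iterates can replace uniform random draws in a metaheuristic's update rule to improve exploration. Whether this is the exact variant used in IHBO_CA is an assumption; the snippet below only illustrates the map itself.

```python
import numpy as np

def sinusoidal_map(x0=0.7, n=10, a=2.3):
    """Iterates of the sinusoidal chaotic map x_{k+1} = a * x_k^2 * sin(pi * x_k);
    the sequence stays in (0, 1) and can stand in for uniform random numbers
    inside an optimizer's position-update rule."""
    seq = np.empty(n)
    x = x0
    for k in range(n):
        x = a * x * x * np.sin(np.pi * x)
        seq[k] = x
    return seq

print(sinusoidal_map())
```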
- Research Article
- 10.1109/tits.2022.3198834
- Jul 1, 2023
- IEEE Transactions on Intelligent Transportation Systems
- Peng Yu + 7 more
With the development of 5G/6G networks, the number of wireless users is growing exponentially, and the application scenarios are increasingly diversified. Using unmanned aerial vehicles as base stations (UAV-BSs) to serve ground users has become a trend for wide-area coverage and capacity enhancement for rapid service access in 6G networks. However, as UAV-BSs have limited energy or battery storage, solutions that optimize energy efficiency while providing high-quality services are necessary. Therefore, this paper concentrates on the energy-efficient deployment of coverage-aimed UAV-BSs (Co-UAV-BSs) and capacity-aimed UAV-BSs (Ca-UAV-BSs) for the coverage and capacity enhancement of ground communication in disaster areas or under burst data traffic. First, Co-UAV-BSs are deployed with a DQN algorithm to obtain the UAV-BSs' optimal flight paths, which are mainly used to detect out-of-service users in such areas. The users are then clustered based on the detection results. After that, Co-UAV-BSs and Ca-UAV-BSs are deployed hierarchically based on the user distribution, seeking to optimize energy efficiency while providing acceptable user service. The DQN algorithm and the A3C algorithm are again used to obtain all the UAV-BSs' deployment locations and the users' best connections. The simulation results show that the dynamic flying path requires less energy than a fixed path for user detection. For coverage and capacity enhancement, the results reveal that the proposed solution provides high-quality service for users with high energy efficiency compared to traditional algorithms.
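A minimal sketch of the Q-learning update at the heart of a DQN-based placement policy like the one described above. The state and action encodings (a grid position plus residual energy, and discrete moves/hover) and the network sizes are hypothetical; only the Bellman-target update itself is standard DQN.

```python
import torch
import torch.nn as nn

state_dim, n_actions, gamma = 4, 5, 0.99
q_net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
target_net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
target_net.load_state_dict(q_net.state_dict())     # periodically synced copy
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)

# One gradient step on a toy minibatch of transitions (s, a, r, s', done).
s = torch.randn(32, state_dim)
a = torch.randint(0, n_actions, (32, 1))
r = torch.randn(32, 1)                              # e.g. an energy-efficiency reward
s_next = torch.randn(32, state_dim)
done = torch.zeros(32, 1)

with torch.no_grad():                               # Bellman target from the frozen network
    target = r + gamma * (1 - done) * target_net(s_next).max(dim=1, keepdim=True).values
loss = nn.functional.mse_loss(q_net(s).gather(1, a), target)
opt.zero_grad()
loss.backward()
opt.step()
print(loss.item())
```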
- Research Article
- 10.1109/tcad.2022.3216546
- Jul 1, 2023
- IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
- Xiaoxuan Yang + 5 more
Processing-in-memory (PIM) enables energy-efficient deployment of convolutional neural networks (CNNs) from edge to cloud. Resistive random-access memory (ReRAM) is one of the most commonly used technologies for PIM architectures. One of the primary limitations of ReRAM-based PIM in neural network training arises from the limited write endurance due to frequent weight updates. To make ReRAM-based architectures viable for CNN training, the write endurance issue needs to be addressed. This work aims to reduce the number of weight reprogrammings without compromising the final model accuracy. We propose the ESSENCE framework with an endurance-aware structured stochastic gradient pruning method, which dynamically adjusts the probability of gradient update based on the current update counts. Experimental results with multiple CNNs and datasets demonstrate that the proposed method can extend ReRAM's lifetime for training. For instance, with the ResNet20 network and the CIFAR-10 dataset, ESSENCE can reduce the mean update counts by up to 10.29× compared to the SGD method and effectively reduce the maximum update counts compared with the No Endurance method. Furthermore, an aggressive tuning method based on ESSENCE can boost the mean update count savings by up to 14.41×.
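One possible reading of "adjusting the probability of gradient update based on the current update counts" is sketched below: each crossbar row is reprogrammed only with a probability that shrinks as its accumulated write count grows. The probability schedule and the per-row (structured) granularity are assumptions, not the exact ESSENCE algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def endurance_aware_update(weights, grads, counts, lr=0.01, k=0.05):
    """Skip a fraction of row updates: rows that have already been written many
    times are updated with lower probability, trading a little convergence
    speed for far fewer ReRAM write operations."""
    p_update = 1.0 / (1.0 + k * counts)              # heavily-written rows update less often
    mask = rng.random(len(weights)) < p_update        # one decision per row (structured)
    weights[mask] -= lr * grads[mask]
    counts[mask] += 1
    return weights, counts

w = rng.normal(size=(8, 16))       # 8 rows mapped to 8 crossbar word lines
c = np.zeros(8)
for _ in range(100):
    g = rng.normal(size=w.shape)   # stand-in gradients
    w, c = endurance_aware_update(w, g, c)
print("per-row update counts:", c)  # well below the 100 steps actually taken
```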
- Research Article
- 10.1016/j.comnet.2023.109854
- Jun 7, 2023
- Computer Networks
- Attai Ibrahim Abubakar + 9 more
Recently, the use of unmanned aerial vehicles (UAVs) for wireless communications has attracted much research attention. However, most applications of UAVs for wireless communication provisioning are not feasible as researchers fail to consider some vital aspects of their deployment, especially the energy requirements of both the UAV and communication system. The considerable energy consumption overhead involved in flying or hovering UAVs makes them less appealing for green wireless communications. Therefore, in this work, we examine the feasibility of an alternative energy-efficient deployment scheme where UAVs can be made to land-on designated locations, also known as landing stations (LSs). The idea of LS makes the UAV-based wireless communication more durable and advantageous, since the total energy consumption is reduced by minimizing the flying/hovering energy consumption, which, in turn, enables diverse set of applications including emergency and pop-up networking. We evaluate the impact of the separation distance between these LSs and the Optimal Hovering Position (OHP) on the network performance. Specifically, we develop mathematical frameworks to model the relationship between UAV power consumption, coverage probability, throughput, and separation distance. Numerical results reveal that a significant energy reduction can be achieved when the LS concept is exploited with a slight compromise in coverage probability and throughput. However, the choice of a suitable LS location depends on the users’ service requirements, transmit power, and frequency band utilized.