
Related Topics

  • Dynamic Memory Management
  • Memory Allocation

Articles published on Memory management

2415 Search results
  • Research Article
  • 10.18203/2394-6040.ijcmph20254043
Exploring study skills and learning approaches among high school students in the southern Karnataka region
  • Nov 29, 2025
  • International Journal Of Community Medicine And Public Health
  • Nidha Fathima + 3 more

Background: Education is a fundamental right, and effective study skills are vital for academic success. Key skills include textbook reading, note-taking, concentration, test preparation, memory, and time management. This study aimed to assess study skills among high school students and examine their relationship with academic performance. Methods: A cross-sectional study was conducted between December 2022 and May 2023 among high school students in Mysuru and Chamarajanagar to assess study skills and learning strategies. A sample of 382 students was selected using probability proportionate to size sampling. Data were collected using a pre-tested questionnaire, including socio-demographics, academic performance, and the Dennis Congos study skills inventory (DCSSI), covering six domains: textbook reading, note-taking, memory, test preparation, concentration, and time management. Domain-specific thresholds identified areas needing improvement. Data were analysed using SPSS v24 with descriptive and inferential statistics. Ethical approval and informed consent were obtained. Results: Among 382 high school students, most were aged 14-15 years (65.1%) and belonged to the upper-middle socioeconomic class (58.3%). Academic performance showed that 62.9% scored between 61-80%, while only 6% achieved scores above 90%. Students demonstrated strengths in test preparation (98.4%), concentration (66.8%), time management (54.7%), and textbook reading (58.1%), but showed deficits in note-taking (19.6%) and memory skills (12.6%). Textbook reading was significantly associated with academic performance (p=0.005), while time management approached significance (p=0.063). Conclusions: The study highlighted that textbook reading impacts academic performance, while note-taking and memory skills need improvement.

  • Research Article
  • 10.1146/annurev-control-032724-014418
Going Places: Place Recognition in Artificial and Natural Systems
  • Oct 29, 2025
  • Annual Review of Control, Robotics, and Autonomous Systems
  • Michael Milford + 1 more

Place recognition—the ability to identify previously visited locations—is critical for both biological navigation and autonomous systems. This review synthesizes findings from robotic systems, animal studies, and human research to explore how different systems encode and recall place. We examine the computational and representational strategies employed across artificial systems, animals, and humans, highlighting convergent solutions such as topological mapping, cue integration, and memory management. Animal systems reveal evolved mechanisms for multimodal navigation and environmental adaptation, while human studies provide unique insights into semantic place concepts, cultural influences, and introspective capabilities. Artificial systems showcase scalable architectures and data-driven models. We propose a unifying set of concepts by which to consider and develop place recognition mechanisms and identify key challenges such as generalization, robustness, and environmental variability. This review aims to foster innovations in artificial localization by connecting future developments in artificial place recognition systems to insights from both animal navigation research and human spatial cognition studies.

  • Research Article
  • 10.3390/electronics14214235
LACX: Locality-Aware Shared Data Migration in NUMA + CXL Tiered Memory
  • Oct 29, 2025
  • Electronics
  • Hayong Jeong + 3 more

In modern high-performance computing (HPC) and large-scale data processing environments, the efficient utilization and scalability of memory resources are critical determinants of overall system performance. Architectures such as non-uniform memory access (NUMA) and tiered memory systems frequently suffer performance degradation due to remote accesses stemming from shared data among multiple tasks. This paper proposes LACX, a shared data migration technique leveraging Compute Express Link (CXL), to address these challenges. LACX preserves the migration cycle of automatic NUMA balancing (AutoNUMA) while identifying shared data characteristics and migrating such data to CXL memory instead of DRAM, thereby maximizing DRAM locality. The proposed method utilizes existing kernel structures and data to efficiently identify and manage shared data without incurring additional overhead, and it effectively avoids conflicts with AutoNUMA policies. Evaluation results demonstrate that, although remote accesses to shared data can degrade performance in low-tier memory scenarios, LACX significantly improves overall memory bandwidth utilization and system performance in high-tier memory and memory-intensive workload environments by distributing DRAM bandwidth. This work presents a practical, lightweight approach to shared data management in tiered memory environments and highlights new directions for next-generation memory management policies.
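
The placement idea in the LACX abstract can be sketched as a simple policy: pages touched by a single task stay in (or migrate back to) local DRAM, while pages shared across tasks are steered to the CXL tier. This is an illustrative toy model under our own assumptions, not the paper's kernel implementation; the function and threshold are hypothetical.

```python
# Toy sketch of a shared-data placement policy in the spirit of LACX
# (illustrative only): private pages keep DRAM locality, shared pages
# go to CXL so they stop causing remote NUMA accesses.

def place_pages(page_accessors):
    """page_accessors: dict page_id -> set of task ids that touched it.
    Returns dict page_id -> 'DRAM' or 'CXL'."""
    placement = {}
    for page, tasks in page_accessors.items():
        # Data shared by more than one task would otherwise trigger
        # remote accesses; route it to the CXL tier instead of DRAM.
        placement[page] = "CXL" if len(tasks) > 1 else "DRAM"
    return placement

accesses = {
    0x1000: {"task_a"},            # private -> stays in DRAM
    0x2000: {"task_a", "task_b"},  # shared  -> migrated to CXL
}
assert place_pages(accesses) == {0x1000: "DRAM", 0x2000: "CXL"}
```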

  • Research Article
  • 10.63278/jicrcr.vi.3345
LLM-Optimized Cloud Architectures: Evaluating Infrastructure Patterns For Fine-Tuning And Serving Large Models
  • Oct 17, 2025
  • Journal of International Crisis and Risk Communication Research
  • Satya Teja Muddada

Large Language Models have ignited a paradigm shift in the field of artificial intelligence, but their implementation comes with daunting infrastructure issues that traditional cloud architectures cannot readily address. This article proposes a complete three-layer architecture specialized for the entire LLM lifecycle across training, fine-tuning, and inference. The suggested design combines distributed GPU orchestration using Kubernetes and Ray, applies parameter-efficient adaptation mechanisms such as Low-Rank Adaptation, and utilizes sophisticated quantization strategies for optimizing inference. The design tackles system bottlenecks in memory, computational, and resource management through rigorous design patterns that facilitate end-to-end scalability across heterogeneous clouds. Experimental verification demonstrates substantial improvements in operational performance, as parameter-efficient fine-tuning minimizes computational needs without sacrificing model quality, elastic orchestration improves resource efficiencies through variable workloads, and quantization methods facilitate deployment on hardware with limited resources. The architectural framework offers real-world blueprints for organizations looking to deploy LLM workloads at scale, presenting modular components that translate across various operational requirements at an affordable cost with performance standards ideal for production environments.
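
The Low-Rank Adaptation technique mentioned in the abstract replaces a full update of a weight matrix W with two small trainable factors. A minimal numerical sketch (our own, not the paper's code; dimensions and the scaling convention follow the original LoRA formulation, where B starts at zero so the adapted model initially matches the base model):

```python
import numpy as np

# LoRA sketch: instead of updating W (d_out x d_in), train A (r x d_in)
# and B (d_out x r) with r << min(d_out, d_in); the adapted layer
# computes (W + (alpha / r) * B @ A) @ x.

def lora_forward(x, W, A, B, alpha=16):
    r = A.shape[0]
    delta = (alpha / r) * (B @ A)   # low-rank weight update
    return (W + delta) @ x

rng = np.random.default_rng(0)
d_out, d_in, r = 8, 8, 2
W = rng.standard_normal((d_out, d_in))
A = rng.standard_normal((r, d_in)) * 0.01  # small random init
B = np.zeros((d_out, r))                   # zero init, as in LoRA
x = rng.standard_normal(d_in)
# With B == 0 the adapted model matches the frozen base model exactly.
assert np.allclose(lora_forward(x, W, A, B), W @ x)
```

The memory saving is the point: the trainable parameters drop from d_out*d_in to r*(d_out + d_in).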

  • Research Article
  • 10.1145/3763134
Modal Abstractions for Virtualizing Memory Addresses
  • Oct 9, 2025
  • Proceedings of the ACM on Programming Languages
  • Ismail Kuru + 1 more

Virtual memory management (VMM) code is a critical piece of general-purpose OS kernels, but verification of this functionality is challenging due to the complexity of the hardware interface (the page tables are updated via writes to those memory locations, using addresses which are themselves virtualized). Prior work on verification of VMM code has either only handled a single address space, or trusted significant pieces of assembly code. In this paper, we introduce a modal abstraction to describe the truth of assertions relative to a specific virtual address space: [r]P indicating that P holds in the virtual address space rooted at r. Such modal assertions allow different address spaces to refer to each other, enabling complete verification of instruction sequences manipulating multiple address spaces. Using them effectively requires working with other assertions, such as points-to assertions about memory contents — which implicitly depend on the address space they are used in. We therefore define virtual points-to assertions to definitionally mimic hardware address translation, relative to a page table root. We demonstrate our approach with challenging fragments of VMM code showing that our approach handles examples beyond what prior work can address, including reasoning about a sequence of instructions as it changes address spaces. Our results are formalized for a RISC-like fragment of x86-64 assembly in Rocq.
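
The core idea of the modal assertion [r]P — truth relative to a page-table root — can be illustrated with a toy translation model (ours, not the paper's Rocq formalization; a single-level table stands in for the real multi-level walk): the same virtual address resolves to different physical addresses under different roots, so any points-to claim only makes sense inside a named address space.

```python
# Toy model of root-relative address translation (illustrative only).
PAGE = 4096

def translate(page_tables, root, va):
    """One-level table per root: virtual page number -> physical frame."""
    vpn, offset = divmod(va, PAGE)
    frame = page_tables[root].get(vpn)
    if frame is None:
        raise KeyError(f"page fault: vpn {vpn} unmapped under root {root:#x}")
    return frame * PAGE + offset

tables = {
    0x1000: {0: 7},   # address space rooted at 0x1000: vpn 0 -> frame 7
    0x2000: {0: 9},   # a different root maps the same vpn elsewhere
}
# The same virtual address means different things under the two roots.
assert translate(tables, 0x1000, 0x123) == 7 * PAGE + 0x123
assert translate(tables, 0x2000, 0x123) == 9 * PAGE + 0x123
```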

  • Research Article
  • 10.1145/3764117
From Linearity to Borrowing
  • Oct 9, 2025
  • Proceedings of the ACM on Programming Languages
  • Andrew Wagner + 4 more

Linear type systems are powerful because they can statically ensure the correct management of resources like memory, but they can also be cumbersome to work with, since even benign uses of a resource require that it be explicitly threaded through during computation. Borrowing, as popularized by Rust, reduces this burden by allowing one to temporarily disable certain resource permissions (e.g., deallocation or mutation) in exchange for enabling certain structural permissions (e.g., weakening or contraction). In particular, this mechanism spares the borrower of a resource from having to explicitly return it to the lender but nevertheless ensures that the lender eventually reclaims ownership of the resource. In this paper, we elucidate the semantics of borrowing by starting with a standard linear type system for ensuring safe manual memory management in an untyped lambda calculus and gradually augmenting it with immutable borrows, lexical lifetimes, reborrowing, and finally mutable borrows. We prove semantic type soundness for our Borrow Calculus (BoCa) using Borrow Logic (BoLo), a novel domain-specific separation logic for borrowing. We establish the soundness of this logic using a semantic model that additionally guarantees that our calculus is terminating and free of memory leaks. We also show that our Borrow Logic is robust enough to establish the semantic safety of some syntactically ill-typed programs that temporarily break but reestablish invariants.
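
The borrow discipline described above can be mimicked at runtime with a tiny sketch (ours, not BoCa/BoLo; Rust enforces this statically): while an immutable borrow is live, the owner loses the permission to free, and regains it once the borrow's lifetime ends.

```python
# Runtime sketch of immutable borrows with explicit lifetimes
# (illustrative only; class and method names are our own).

class Owned:
    def __init__(self, value):
        self.value = value
        self.borrows = 0      # count of live borrows
        self.freed = False

    def borrow(self):
        self.borrows += 1
        return Borrow(self)

    def free(self):
        if self.borrows > 0:  # deallocation permission is disabled
            raise RuntimeError("cannot free while borrowed")
        self.freed = True

class Borrow:
    def __init__(self, owner):
        self.owner = owner
    def read(self):
        return self.owner.value
    def end(self):            # lifetime ends: permission returns to lender
        self.owner.borrows -= 1

r = Owned(42)
b = r.borrow()
assert b.read() == 42
try:
    r.free()                  # rejected: a borrow is still live
    assert False
except RuntimeError:
    pass
b.end()
r.free()                      # fine once the borrow's lifetime ended
assert r.freed
```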

  • Research Article
  • 10.1145/3771550
A Thread-level Stream Scheduling Method for Accelerating LVMs' Inference on a Resource-constrained Platform
  • Oct 9, 2025
  • ACM Transactions on Embedded Computing Systems
  • Yijie Chen + 6 more

As a new generation of edge devices, integrated CPU/GPU architectures have opened up new opportunities for deploying vision models of different scales. In order to reduce models’ inference time on integrated devices, this paper first compresses deep learning models using model quantization. The quantization process greatly reduces the computation requirements of a model, which enables its deployment on embedded development boards. However, quantization also leads to lower GPU resource utilization during inference on integrated devices. This insufficient utilization results in slower inference speed. To address this problem, this paper first depicts the data flow of model inference within an integrated device. Secondly, this paper implements unified memory management between the CPU and GPU based on a managed memory strategy. Finally, this paper designs a thread-level stream scheduling method to improve GPU utilization and throughput during model inference in a pipelined way. Experimental results show that the proposed method achieves a 2×–10× improvement in throughput compared to TensorRT’s default scheduling method, which is crucial for realizing real-time inference tasks on edge devices.

  • Research Article
  • 10.1145/3771286
GTSM: A multi-edge-centric temporal subgraph matching framework on GPUs
  • Oct 9, 2025
  • ACM Transactions on Architecture and Code Optimization
  • Jiezhong He + 4 more

Temporal subgraph matching aims to identify subgraphs in temporal networks that satisfy both structural and temporal constraints, with applications ranging from social network analysis to fraud detection. As this NP-hard problem involves massive computation on large graphs, GPU acceleration becomes critical. However, existing edge-centric approaches suffer from computational redundancy, inefficient memory management, and limited scalability on large graphs, hindering efficient GPU acceleration. To address these challenges, we propose GTSM, a GPU-optimized temporal subgraph matching system featuring three innovations: (1) a multi-edge-centric paradigm that reduces redundant search space through multi-edge compressions along with an efficient decompression algorithm; (2) a memory-bound optimization that maximizes GPU resource utilization; and (3) a heterogeneous BFS-DFS execution model where the CPU performs Breadth-First Search (BFS) to ensure load balancing across GPUs. Experiments demonstrate that GTSM achieves a 5.5×–93.2× speedup over state-of-the-art GPU systems, while solving 10%–40% more queries. With our heterogeneous execution model, our system achieves near-linear scaling in multi-GPU configurations.

  • Research Article
  • 10.1145/3769429
A Comprehensive Study on Solving Memory Bloat Under Virtualization
  • Oct 8, 2025
  • ACM Transactions on Computer Systems
  • Chuandong Li + 7 more

Huge pages are effective in reducing address translation overhead under virtualization. However, huge pages can lead to the memory bloat problem, which manifests in two primary forms: hot bloat and usage bloat. Hot bloat occurs when accesses to a huge page are heavily skewed towards a small subset of base pages, leading the hypervisor to (mistakenly) classify the entire huge page as hot. Hot bloat undermines several critical virtualization techniques, including tiered memory and page sharing. Usage bloat refers to base pages within a huge page that have not yet been allocated, causing virtual machines (VMs) to demand excessive memory. Prior work addressing memory bloat either requires hardware modification or targets a specific scenario and is not applicable to a hypervisor. This paper presents HugeScope, a lightweight, effective, and generic system that addresses the memory bloat problem under virtualization on commodity hardware. HugeScope includes an efficient and precise page tracking mechanism, leveraging the other level of indirect memory translation in the hypervisor. HugeScope provides a generic framework to support page splitting and coalescing policies, considering memory pressure as well as the recency, frequency, and skewness of page access. Moreover, HugeScope is general and modular: it can not only be easily applied to various scenarios concerning hot bloat, including tiered memory management (HS-TMM) and page sharing (HS-Share), but can also seamlessly expose its capabilities to VMs to address the usage bloat problem (HS-HP). Evaluation shows that HugeScope incurs less than 4% overhead. By addressing hot bloat, HS-TMM improves performance by up to 61% over vTMM, while HS-Share saves 41% more memory than Ingens with comparable performance. By addressing usage bloat, HS-HP eliminates excessive memory usage and achieves performance improvements of up to 11% over HawkEye.
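
The hot-bloat test the abstract describes — a huge page looks hot even though only a few of its base pages are — amounts to a skew check over per-base-page access counts. A minimal sketch under our own thresholds (function name and parameters are illustrative, not HugeScope's API):

```python
# Skew check for "hot bloat": a 2 MiB huge page spans 512 base pages;
# if most accesses land on a small subset, splitting may pay off.

def should_split(base_page_hits, hot_share=0.9, hot_fraction=0.1):
    """base_page_hits: list of access counts, one per base page.
    Split when >= hot_share of accesses hit <= hot_fraction of pages."""
    total = sum(base_page_hits)
    if total == 0:
        return False
    top = sorted(base_page_hits, reverse=True)
    k = max(1, int(len(base_page_hits) * hot_fraction))
    return sum(top[:k]) / total >= hot_share

skewed = [1000] * 4 + [0] * 508      # 4 of 512 base pages take all hits
uniform = [10] * 512                 # evenly accessed: genuinely hot
assert should_split(skewed) is True
assert should_split(uniform) is False
```

A real system would also weigh memory pressure and recency, as the paper's splitting/coalescing framework does.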

  • Research Article
  • 10.1145/3770759
ISRLUT: Integer-Only FHD Image Super-Resolution based on Neural Lookup Table and Near-Memory Computing
  • Oct 3, 2025
  • ACM Transactions on Reconfigurable Technology and Systems
  • Tianshuo Lu + 5 more

While Deep Neural Networks (DNNs) have achieved remarkable progress in the Image Super-Resolution (SR) task, they face significant challenges when processing FHD images on edge devices. Complex DNN operators lead to high hardware resource consumption and latency. Computational inefficiency of the FPU increases energy consumption, while DDR access overhead and on-chip memory overflow further constrain real-time capabilities. To address this, we propose ISRLUT, a novel accelerator architecture focused on integer-only inference and near-memory computing. Its core contributions include: 1) fusion of Neural LUT arithmetic with reconfigurable compute units, transforming DNN operators into unified LUT operators and enhancing hardware utilization; 2) an integer-only inference and parallel architecture, eliminating floating-point dependencies and significantly reducing energy consumption; 3) an innovative internal operator memory management scheme coupled with a Tile-based Buffer Overlap and Private Cache Mechanism. We deploy ISRLUT on FPGA and ASIC platforms. Experiments demonstrate that ISRLUT achieves efficient performance: for 4× upscaling, it requires only 36.9 KB of storage and achieves a PSNR of 30.21 dB on Set5. Hardware implementation using a 55 nm ASIC consumes merely 0.0337 W of power, delivers an energy efficiency of 7278.6 Mpixels/s/W, and achieves a real-time frame rate of 118 FPS for 4× FHD processing, validating its superiority in energy efficiency and hardware utilization.

  • Research Article
  • 10.1016/j.jss.2025.112472
Coding style matters: Scalable and efficient identification of memory management functions in monolithic firmware
  • Oct 1, 2025
  • Journal of Systems and Software
  • Ruijie Cai + 5 more


  • Research Article
  • 10.52783/jisem.v10i60s.13044
Unified Framework for Real-Time Big Data Analytics with AI Integration
  • Sep 30, 2025
  • Journal of Information Systems Engineering and Management
  • Gopinath Ramisetty

The intersection of distributed computing technologies with artificial intelligence competencies has revolutionized enterprise analytical environments, opening doors to unprecedented capabilities for real-time data processing and intelligent decision-making across various industrial segments. Contemporary unified analytical environments integrate in-memory processing engines, serverless data warehouse models, graph-based workflow orchestration systems, and advanced machine learning algorithms to provide end-to-end solutions with the ability to support gigantic datasets and yet respond in sub-second timescales. The unification makes it possible for organizations to handle streaming data workloads with high throughput levels, along with running complex analytical queries on petabyte-scale data repositories simultaneously. Sophisticated distributed computing frameworks harness complex memory management frameworks and smart caching hierarchies to realize maximum performance under different operating conditions, with serverless environments offering elastic scaling of analytical workloads without manual infrastructure provisioning. Graph-based workflow systems utilize adaptive scheduling algorithms and thorough fault tolerance mechanisms to facilitate the reliable execution of processing pipelines in distributed computing environments. AI frameworks incorporate ensemble learning techniques and automated decision-making systems to offer predictive analytics features and intelligent workflow orchestration. Implementation tactics consist of data-driven parameter optimization methods and privacy-enhancing analytics mechanisms that ensure the best performance while upholding regulatory compliance and data safety requirements throughout the complete processing life cycle.

  • Research Article
  • 10.31130/ud-jst.2025.23(9a).329e
Efficient chatbot for university admission consultation using large language models
  • Sep 30, 2025
  • The University of Danang - Journal of Science and Technology
  • Truc Thi Kim Nguyen + 3 more

This paper presents an Artificial Intelligence–driven chatbot for university admission consultation using Large Language Models (LLMs). The system integrates semantic retrieval with Retrieval-Augmented Generation (RAG) and employs a hybrid strategy that combines vector similarity and keyword matching to provide accurate and context-aware answers. The chatbot was trained on admission FAQs, official documents, and consultation records from Dong A University, ensuring relevance to real user needs. Implementation leverages efficient prompt construction and memory management to support interactive and personalized responses. Experimental results show improved retrieval precision and practical benefits in reducing staff workload and offering consistent support to prospective students. Current limitations include the use of a single-university dataset and a technical evaluation focused on retrieval metrics. Future work will expand to multi-institution data, user studies, and multilingual or voice-enabled interaction to enhance generalizability and real-world impact.
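
The hybrid strategy the abstract mentions — blending vector similarity with keyword matching — can be sketched as a weighted score. Everything below (function names, the 0.7 weight, the toy embeddings) is our illustrative assumption, not the paper's code:

```python
import math

# Hybrid retrieval sketch: blend dense vector similarity with sparse
# keyword overlap so exact terms can break ties between similar docs.

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def keyword_overlap(query, doc):
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / len(q) if q else 0.0

def hybrid_score(q_vec, d_vec, q_text, d_text, alpha=0.7):
    return alpha * cosine(q_vec, d_vec) + (1 - alpha) * keyword_overlap(q_text, d_text)

q = "tuition fee deadline"
doc_a = "the tuition fee payment deadline is August 15"
doc_b = "campus parking regulations"
# Same toy embedding for both docs, so the keyword term breaks the tie.
s_a = hybrid_score([1, 0], [1, 0], q, doc_a)
s_b = hybrid_score([1, 0], [1, 0], q, doc_b)
assert s_a > s_b
```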

  • Research Article
  • 10.1145/3760257
RIMMS: Runtime Integrated Memory Management System for Heterogeneous Computing
  • Sep 26, 2025
  • ACM Transactions on Embedded Computing Systems
  • Serhan Gener + 7 more

Efficient memory management in heterogeneous systems is increasingly challenging due to diverse compute architectures (e.g., CPU, GPU, and FPGA) and dynamic task mappings not known at compile time. Existing approaches often require programmers to manage data placement and transfers explicitly, or assume static mappings that limit portability and scalability. This article introduces RIMMS (Runtime Integrated Memory Management System), a lightweight, runtime-managed, hardware-agnostic memory abstraction layer that decouples application development from low-level memory operations. RIMMS transparently tracks data locations, manages consistency, and supports efficient memory allocation across heterogeneous compute elements without requiring platform-specific tuning or code modifications. We integrate RIMMS into a baseline runtime and evaluate with complete radar signal processing applications across CPU+GPU and CPU+FPGA platforms. RIMMS delivers up to 2.43× speedup on GPU-based and 1.82× on FPGA-based systems over the baseline. Compared to IRIS, a recent heterogeneous runtime system, RIMMS achieves up to 3.08× speedup and matches the performance of native CUDA implementations while significantly reducing programming complexity. Despite operating at a higher abstraction level, RIMMS incurs only 1–2 cycles of overhead per memory management call, making it a low-cost solution. These results demonstrate RIMMS’s ability to deliver high performance and enhanced programmer productivity in dynamic, real-world heterogeneous environments.
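
The transparent location tracking described above can be illustrated with a toy model (ours, not the RIMMS implementation; class and method names are hypothetical): each buffer records which device holds the valid copy, and a transfer is issued only when a task on another device touches it.

```python
# Toy runtime memory layer: track the valid copy's location per buffer
# and copy lazily on first use from another device (illustrative only).

class MemoryManager:
    def __init__(self):
        self.location = {}    # buffer id -> device holding the valid copy
        self.transfers = []   # log of (buffer, src, dst) copies performed

    def allocate(self, buf, device):
        self.location[buf] = device

    def acquire(self, buf, device):
        """Make buf valid on device, copying only if it lives elsewhere."""
        src = self.location[buf]
        if src != device:
            self.transfers.append((buf, src, device))
            self.location[buf] = device

mm = MemoryManager()
mm.allocate("samples", "cpu")
mm.acquire("samples", "gpu")   # first GPU use: one copy cpu -> gpu
mm.acquire("samples", "gpu")   # already resident: no transfer
assert mm.transfers == [("samples", "cpu", "gpu")]
```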

  • Research Article
  • 10.59573/emsj.9(5).2025.39
Optimizing Real-Time Audio Filters for Low-Latency Voice Systems
  • Sep 11, 2025
  • European Modern Studies Journal
  • Rahul Singh Thakur

Audio processing systems in voice-controlled environments require advanced optimization methods to achieve minimal latency without sacrificing audio quality. Current embedded devices face severe challenges when performing advanced audio transformations such as volume control, fade effects, trimming operations, and repetition processes within tight computational limitations. Sophisticated buffer management designs that employ ring buffers and double buffering achieve substantial performance gains through optimal memory management and minimal memory fragmentation. Vectorized filter operations utilizing SIMD capabilities deliver substantial processing improvements, with significant throughput gains over conventional scalar approaches. Hardware acceleration utilizing specialized DSP units and GPU coprocessors facilitates parallel processing that achieves significant reductions in computing overhead. Multi-threading optimization techniques using producer-consumer patterns and lock-free data structures preserve system responsiveness while concurrently processing multiple audio streams. Adaptive filter selection algorithms dynamically regulate processing complexity relative to system resources and audio content properties, enabling smart resource allocation without sacrificing output quality. Predictive processing methods based on lookahead algorithms and caching mechanisms distribute computational burdens over time, avoiding processing spikes that may introduce unacceptable latency variability. Performance assessment frameworks quantifying latency, throughput, power usage, and audio fidelity help ensure optimal system configuration under varied operational environments. The combination of sophisticated algorithmic optimizations and hardware-specific implementation allows embedded audio systems to satisfy stringent real-time requirements while working within the limited power and processing budgets common in today's smart speaker architectures.
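
The ring-buffer pattern referenced above is the standard way to pass samples between an audio callback and a consumer without allocating in the hot path. A minimal single-threaded sketch (illustrative; a real-time version would use atomic indices or a lock-free queue):

```python
# Fixed-size circular queue for audio samples (illustrative sketch).

class RingBuffer:
    def __init__(self, capacity):
        self.buf = [0.0] * capacity   # preallocated: no alloc in hot path
        self.capacity = capacity
        self.head = 0                 # next write position
        self.tail = 0                 # next read position
        self.size = 0

    def push(self, sample):
        if self.size == self.capacity:
            return False              # full: caller decides to drop/overwrite
        self.buf[self.head] = sample
        self.head = (self.head + 1) % self.capacity
        self.size += 1
        return True

    def pop(self):
        if self.size == 0:
            return None               # underrun: nothing to play
        sample = self.buf[self.tail]
        self.tail = (self.tail + 1) % self.capacity
        self.size -= 1
        return sample

rb = RingBuffer(4)
for s in (0.1, 0.2, 0.3):
    rb.push(s)
assert rb.pop() == 0.1 and rb.pop() == 0.2   # FIFO order preserved
```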

  • Research Article
  • 10.59573/emsj.9(4).2025.120
Heterogeneous Memory Systems: A Comprehensive Analysis of Hotness-Based Approaches
  • Sep 1, 2025
  • European Modern Studies Journal
  • Pramod Peethambaran + 4 more

This article comprehensively analyses hotness-based memory page movement strategies in multi-node heterogeneous memory systems. As modern computing environments increasingly adopt diverse memory technologies such as CXL, PIM, and HBM, efficiently managing memory pages across different tiers with varying latencies and bandwidths becomes crucial for system performance. We examine three primary monitoring techniques: PTE-scan methodology, fault induction monitoring, and PMU sampling approaches, evaluating their effectiveness, scalability, and implementation challenges. The article further explores dedicated hardware-based solutions, mainly focusing on CXL monitoring architectures and their advantages over traditional software approaches. Our analysis reveals that while each methodology offers distinct benefits, the future of memory management lies in hybrid solutions that combine hardware precision with software flexibility. We discuss emerging trends in this field, including integrating machine learning techniques and adaptive algorithms for predictive memory management, providing insights into the evolution of memory tiering solutions. The article suggests that as memory hierarchies become more complex, the development of intelligent, self-optimizing memory management systems will be essential for maintaining optimal system performance while balancing capacity and bandwidth requirements.
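
Whatever the monitoring technique (PTE scan, fault induction, or PMU sampling), the downstream decision the survey discusses reduces to ranking pages by observed hotness and filling the fast tier first. A toy sketch under our own naming (not any surveyed system's API):

```python
# Hotness-based tiering sketch: keep the hottest pages in the fast tier
# (e.g., DRAM/HBM), demote the rest to a slower tier (e.g., CXL).

def assign_tiers(access_counts, fast_capacity):
    """access_counts: dict page -> observed hits.
    Returns dict page -> 'fast' or 'slow'."""
    ranked = sorted(access_counts, key=access_counts.get, reverse=True)
    hot = set(ranked[:fast_capacity])
    return {p: ("fast" if p in hot else "slow") for p in access_counts}

counts = {"a": 90, "b": 5, "c": 40, "d": 1}
tiers = assign_tiers(counts, fast_capacity=2)
assert tiers == {"a": "fast", "c": "fast", "b": "slow", "d": "slow"}
```

The hybrid hardware/software direction the article advocates would feed this ranking from hardware counters while leaving the policy (capacities, hysteresis, prediction) in software.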

  • Open Access Icon
  • Research Article
  • 10.1109/tvcg.2024.3411786
Nanomatrix: Scalable Construction of Crowded Biological Environments.
  • Sep 1, 2025
  • IEEE transactions on visualization and computer graphics
  • Ruwayda Alharbi + 3 more

We present a novel method for the interactive construction and rendering of extremely large molecular scenes, capable of representing multiple biological cells in atomistic detail. Our method is designed for scenes that are procedurally constructed based on a given set of building rules. Rendering large scenes typically requires the entire scene to be available in-core, or alternatively, it requires out-of-core management to load data into the memory hierarchy as a part of the rendering loop. Instead of out-of-core memory management, we propose procedurally generating the scene on-demand on the fly. The key concept is a positional- and view-dependent procedural scene-construction strategy, where only a fraction of the atomistic scene around the camera is available in the GPU memory at any given time. The atomistic detail is populated into a uniform-space partitioning using a grid covering the entire scene. Most grid cells are not filled with geometry, only those that are potentially seen by the camera are populated. The atomistic detail is populated in a compute shader and its representation is connected with acceleration data structures for hardware ray-tracing of modern GPUs. Distant objects, where atomistic detail is not perceivable from a given viewpoint, are represented by a triangle mesh mapped with a seamless texture generated from the rendering of geometry with atomistic detail. The algorithm consists of two pipelines, the construction-compute pipeline and rendering pipeline, which work together to render molecular scenes at an atomistic resolution beyond the limit of the GPU memory containing trillions of atoms. The proposed technique is demonstrated on multiple models of SARS-CoV-2 and the red blood cell.

  • Research Article
  • 10.3390/s25165103
Performance Evaluation of ChaosFortress Lightweight Cryptographic Algorithm for Data Security in Water and Other Utility Management
  • Aug 17, 2025
  • Sensors (Basel, Switzerland)
  • Rohit Raphael + 3 more

The Internet of Things (IoT) has become an integral part of today’s smart and digitally connected world. IoT devices and technologies now connect almost every aspect of daily life, generating, storing, and analysing vast amounts of data. One important use of IoT is in utility management, where essential services such as water are supplied through IoT-enabled infrastructure to ensure fair, efficient, and sustainable delivery. The large volumes of data produced by water distribution networks must be safeguarded against manipulation, theft, and other malicious activities. Incidents such as the Queensland user data breach (2020–21), the Oldsmar water treatment plant attack (2021), and the Texas water system overflow (2024) show that attacks on water treatment plants, distribution networks, and supply infrastructure are common in Australia and worldwide, often due to inadequate security measures and limited technical resources. Lightweight cryptographic algorithms are particularly valuable in this context, as they are well-suited for resource-constrained hardware commonly used in IoT systems. This study focuses on the in-house developed ChaosFortress lightweight cryptographic algorithm, comparing its performance with other widely used lightweight cryptographic algorithms. The evaluation and comparative testing used an Arduino and a LoRa-based transmitter/receiver pair, along with the NIST Statistical Test Suite (STS). These tests assessed the performance of ChaosFortress against popular lightweight cryptographic algorithms, including ACORN, Ascon, ChaChaPoly, Speck, tinyAES, and tinyECC. ChaosFortress was equal in performance to the other algorithms in overall memory management but outperformed five of the six in execution speed. ChaosFortress achieved the quickest transmission time and topped the NIST STS results, highlighting its strong suitability for IoT applications.

  • Research Article
  • 10.3991/ijim.v19i15.55713
Optimizing Memory Usage in Android Smartphones: A Comparative Analysis of Data Structures Across Different Hardware Architectures
  • Aug 13, 2025
  • International Journal of Interactive Mobile Technologies (iJIM)
  • Lucia Nugraheni Harnaningrum + 4 more

Efficient memory management is a critical factor in enhancing the performance of mobile applications, particularly in resource-constrained environments. This study comprehensively evaluates memory consumption across various data structures on Android smartphones with different hardware architectures, including Snapdragon 732G, Snapdragon 805, and Dimensity 9300. The analysis employs statistical metrics such as standard deviation, minimum, median, and maximum memory usage to assess different data structures’ efficiency. Empirical results demonstrate that primitive data structures exhibit significantly lower memory overhead than more complex structures such as LinkedList and ArrayList, which tend to increase memory fragmentation and garbage collection (GC) overhead. A notable finding is that the Primitive Array data structure, tested from API 30 through API 33, showed a decrease in memory usage of almost 61%. These findings offer valuable insights for Android developers, enabling them to make informed decisions in selecting optimal data structures to enhance memory efficiency, reduce application latency, and improve overall user experience.
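The primitive-versus-boxed gap the study measures on Android (e.g. `int[]` versus `ArrayList<Integer>`) exists in any managed runtime. A small Python analogue makes the mechanism visible: `array.array` stores raw machine values inline, while a `list` stores one pointer per element plus a separately allocated object for each value (illustrative only; absolute sizes vary by interpreter and platform):

```python
import sys
from array import array

N = 1000
boxed = list(range(N))            # list of references to int objects
primitive = array('i', range(N))  # contiguous 4-byte machine ints

# The list's own buffer holds one pointer per element; each element is
# additionally a heap-allocated int object (small ints are cached by
# CPython, so this total slightly overstates unique allocations).
list_container = sys.getsizeof(boxed)
list_total = list_container + sum(sys.getsizeof(x) for x in boxed)

# The array stores raw values inline: no per-element objects at all.
array_total = sys.getsizeof(primitive)
```

The same reasoning explains the Android result: boxing each `int` into an `Integer` multiplies per-element cost and creates many short-lived objects for the GC to track, whereas a primitive array is a single contiguous allocation.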

  • Research Article
  • 10.59573/emsj.9(4).2025.28
Multi-Sensor Data Fusion Architecture in Connected Consumer Devices
  • Aug 11, 2025
  • European Modern Studies Journal
  • Ankit Rana

This article explores advanced sensor fusion techniques in embedded hardware for context-aware consumer devices, addressing the integration challenges of multiple heterogeneous sensors in intelligent, responsive systems. It traces the evolution from single-sensor to multi-sensor architectures in modern consumer electronics, detailing how sophisticated fusion algorithms combine data from diverse sources, including inertial measurement units, environmental sensors, biometric monitors, and imaging systems. It discusses fundamental mathematical frameworks underlying sensor fusion—Kalman filters, complementary filters, and particle filters—while addressing implementation challenges such as data synchronization, temporal alignment, and noise reduction in resource-constrained embedded systems. Hardware design considerations are analyzed, covering processing architectures from microcontrollers to specialized co-processors, power optimization strategies for battery-operated devices, and memory management techniques balancing performance with limited resources. The discussion then turns to ecosystem connectivity through various communication protocols, edge versus cloud processing trade-offs, standardization efforts for interoperability, and privacy-security considerations in multi-device environments. Through case studies in smart home automation, health monitoring wearables, and ambient intelligence systems, the article demonstrates real-world applications while identifying emerging trends such as AI-enhanced fusion, neuromorphic processing, and federated learning that promise to revolutionize future context-aware systems.
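Of the fusion frameworks named above, the complementary filter is the simplest to sketch: it high-pass-filters the integrated gyroscope rate (good short-term, drifts long-term) and low-pass-filters the accelerometer-derived angle (noisy short-term, stable long-term). A minimal Python version follows; the 0.98/0.02 weighting is an illustrative choice, not a value from the article:

```python
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """One update step fusing two attitude estimates:
    - integrated gyro rate (trusted for fast, short-term changes)
    - accelerometer angle  (trusted as the slow, long-term reference)
    """
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# With the gyro at rest, the estimate converges toward the
# accelerometer angle instead of drifting.
angle = 0.0
for _ in range(300):
    angle = complementary_filter(angle, gyro_rate=0.0,
                                 accel_angle=10.0, dt=0.01)
```

Its appeal on resource-constrained hardware is exactly the trade-off the article describes: one multiply-accumulate per axis per sample and no matrix state, versus the covariance bookkeeping a Kalman filter requires.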
