
Related Topics

  • Compression Scheme

Articles published on Adaptive compression

320 Search results
  • Research Article
  • 10.15407/jai2025.04.088
Structured Pruning Method for Large Language Models with Adaptive Compression Ratios
  • Dec 30, 2025
  • Artificial Intelligence
  • Shvets V + 1 more

The article addresses the important challenge of deploying large language models (LLMs) on resource-constrained devices. We analyze the evolution of neural network pruning methods from classical approaches (Optimal Brain Damage, Optimal Brain Surgeon) to modern one-shot techniques for LLMs (SparseGPT, Wanda, SliceGPT, 2SSP). The research demonstrates that while unstructured pruning achieves high compression ratios with minimal quality loss, it fails to provide real size reduction and inference acceleration on standard hardware due to irregular sparse matrix structures. In contrast, structured pruning methods ensure hardware efficiency by removing entire structural blocks. We propose the Adaptive 2SSP method (a modification of 2SSP), which combines adaptive compression ratio selection based on block redundancy with two-stage structured pruning: attention block removal (depth pruning) followed by FFN layer neuron removal (width pruning). Experimental validation on Llama-3.2-3B, Llama-2-7B, and Qwen2.5-3B models demonstrates the method's superiority over existing alternatives (GLU Aware Pruning, Dynamic Slicing, original 2SSP). When removing 40% of Llama-3.2-3B parameters, the method maintains perplexity at 26.35 and average benchmark accuracy at 39.57%, the best results among the compared methods. Hardware efficiency evaluation for Llama-3.2-3B showed a 35.12% reduction in VRAM consumption and a 34.78% acceleration in token generation. For Llama-2-7B, a 3.7-fold speedup was obtained at 20% pruning by overcoming VRAM limitations. The results demonstrate that the proposed method provides an optimal balance between compression degree, execution speed, and model quality preservation, making it an effective tool for adapting modern LLMs to deployment on devices with limited computational resources.
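The adaptive-ratio idea can be sketched without any framework: score each neuron, estimate per-block redundancy, and let redundant blocks give up more neurons. The importance score (row L2 norm) and the redundancy-weighted ratio below are illustrative assumptions, not the actual Adaptive 2SSP criteria:

```python
import math

def neuron_importance(w_rows):
    # importance of a neuron = L2 norm of its outgoing weight row
    return [math.sqrt(sum(x * x for x in row)) for row in w_rows]

def adaptive_prune(blocks, global_ratio=0.4):
    """Width pruning with adaptive per-block ratios: blocks whose neurons
    are more redundant (a larger share scoring below the block mean)
    are assigned a larger pruning ratio, then their weakest neurons
    are removed while preserving order."""
    redundancies = []
    for w in blocks:
        imp = neuron_importance(w)
        mean = sum(imp) / len(imp)
        redundancies.append(sum(1 for s in imp if s < mean) / len(imp))
    total = sum(redundancies)
    pruned = []
    for w, r in zip(blocks, redundancies):
        ratio = global_ratio * r * len(blocks) / total  # redundant blocks lose more
        keep = max(1, round(len(w) * (1 - ratio)))
        imp = neuron_importance(w)
        order = sorted(range(len(w)), key=lambda i: imp[i], reverse=True)
        pruned.append([w[i] for i in sorted(order[:keep])])
    return pruned
```

With a 40% global ratio and two equally redundant blocks, each block keeps the 60% of neurons with the largest weight norms.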

  • Research Article
  • 10.3390/w18010048
A High-Precision Daily Runoff Prediction Model for Cross-Border Basins: RPSEMD-IMVO-CSAT Based on Multi-Scale Decomposition and Parameter Optimization
  • Dec 23, 2025
  • Water
  • Tianming He + 4 more

The Yunjinghong Hydrological Station is the last critical hydrological control station on the Lancang River before it flows out of China, and its daily runoff variations are directly linked to agricultural irrigation, hydropower development, and ecological security in downstream Mekong River riparian countries such as Laos, Myanmar, and Thailand. To address the prominent nonlinearity, non-stationarity, and coupled multi-scale features of the runoff sequence in the Lancang–Mekong Basin, this study proposes a synergistic prediction framework of “multi-scale decomposition, model improvement, and parameter optimization”. First, Regenerated Phase-Shifted Sine-Assisted Empirical Mode Decomposition (RPSEMD) is adopted to adaptively decompose the daily runoff data. On this basis, a Convolutional Sparse Attention Transformer (CSAT) model is constructed. A one-dimensional convolutional neural network (1D-CNN) module is embedded in the input layer to enhance local feature perception, compensating for the deficiency of traditional Transformers in capturing detailed information. Meanwhile, a sparse attention mechanism replaces multi-head attention, efficiently focusing on key time-step correlations and reducing computational costs. Additionally, an Improved Multi-Verse Optimizer (IMVO) is introduced, which optimizes the hyperparameters of CSAT through a spiral update mechanism, an exponential Travel Distance Rate (T_DR), and an adaptive compression factor, thereby improving the model's accuracy in capturing short-term abrupt patterns such as flood peaks and drought transition points. Experiments are conducted using measured daily runoff data from 2010 to 2022, and the proposed model is compared with mainstream models such as LSTM, GRU, and the standard Transformer.
The results show that the RPSEMD-IMVO-CSAT model reduces the Mean Absolute Error (MAE) and Root Mean Square Error (RMSE) by 15.3–28.7% and 18.6–32.4%, respectively, compared with the comparative models.

  • Research Article
  • 10.51983/ijiss-2025.ijiss.15.4.19
Compressed Data Representation Methods for High-Speed Search
  • Dec 15, 2025
  • Indian Journal of Information Sources and Services
  • V Aruna + 5 more

The current age of computing revolves around data; the ability to fetch and store large quantities of information has become imperative for systems ranging from embedded devices to search engines. Methods of compressed data representation are vital, as they enable faster query execution while reducing the storage space needed. This paper analyzes such methods. The authors review bitmap indexing, inverted index compression, succinct data structures, LZ-based schemes, and compressed tries against the criteria of practical usefulness, search performance, and space efficiency. Through qualitative metrics, the authors perform a comparative evaluation, represented in a conceptual figure and in tables. Moreover, the paper analyzes potential use cases in domains such as bioinformatics, log management, edge computing, and AI-powered search pipelines. Other issues explored include the balance between compression and query latency, optimization for heterogeneous hardware, and search over encrypted data. The findings illuminate previously unexplored areas of research, including learned indexing, adaptive compression, and searching with minimal energy expenditure.
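For one of the surveyed families, inverted index compression, a minimal sketch of the standard delta-gap plus varint encoding (the scheme is generic; the function names are ours):

```python
def varint_encode(n):
    """Encode a non-negative integer as a variable-length byte string:
    7 payload bits per byte, high bit set while more bytes follow."""
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        if n:
            out.append(byte | 0x80)
        else:
            out.append(byte)
            return bytes(out)

def compress_postings(doc_ids):
    """Delta-gap a sorted posting list, then varint-encode each gap.
    Small gaps between nearby doc IDs compress to single bytes."""
    out, prev = bytearray(), 0
    for d in doc_ids:
        out += varint_encode(d - prev)
        prev = d
    return bytes(out)

def decompress_postings(data):
    ids, cur, shift, acc = [], 0, 0, 0
    for b in data:
        acc |= (b & 0x7F) << shift
        if b & 0x80:
            shift += 7
        else:
            cur += acc          # undo the delta-gap
            ids.append(cur)
            acc, shift = 0, 0
    return ids
```

Four 32-bit doc IDs that would take 16 bytes raw round-trip through 5 bytes here, which is why delta-gap coding dominates search-engine index formats.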

  • Research Article
  • 10.4314/swj.v20i3.47
An adaptive compression factor error level analysis for image forgery classification
  • Dec 14, 2025
  • Science World Journal
  • Abdulqadir Hamza + 2 more

The intentional manipulation of visual data has been increasing due to the widespread use of image editing software and social media websites, challenging existing forgery detection methods. Error Level Analysis (ELA) based methods often struggle with JPEG compression, limiting their ability to detect tampering accurately. This paper proposes an adaptive compression mechanism to enhance ELA-based image forgery detection, particularly for augmented and expanded datasets. Using the CASIA V2 image forgery dataset with rotation, flipping, and scaling, ELA maps were derived and classified via a Convolutional Neural Network (CNN). The experimental results indicate that the proposed method achieved better performance, with accuracy, precision, recall, and F1-score of 96.6%, 96.8%, 96.3%, and 96.5%, respectively.

  • Research Article
  • Cited by 1
  • 10.1016/j.neucom.2025.131071
DCHF_T: A multi-dimensional adaptive compression approach for transformer-based models
  • Dec 1, 2025
  • Neurocomputing
  • Yaoyao Yan + 7 more


  • Research Article
  • 10.36676/dira.v13.i4.182
Federated Curriculum Learning for Privacy-Preserving Personalization in Edge Environments
  • Nov 13, 2025
  • Darpan International Research Analysis
  • Arvind D Mehta

Personalized machine learning models often require centralizing sensitive user data, creating privacy and compliance challenges. This paper introduces FedCurv, a federated curriculum learning framework designed for edge environments where users have heterogeneous devices and varied data distributions. FedCurv enhances convergence by organizing client updates into a difficulty-aware curriculum. Each client computes a sample-level difficulty score using local gradient variance and confidence metrics. The server aggregates updates in curriculum stages, prioritizing stable and low-variance updates before incorporating high-uncertainty ones. Experiments were conducted on image classification, keyboard-input prediction, and personalized recommendation datasets. FedCurv improves accuracy by 7–15% over FedAvg under non-IID settings and reduces client-side energy consumption. A real-world deployment simulation on 300 mobile devices demonstrates faster stabilization of personalized models while maintaining differential privacy guarantees. The paper discusses limits posed by heterogeneous hardware and suggests adaptive compression techniques for bandwidth-constrained users.
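A toy version of the difficulty-aware curriculum aggregation, assuming update variance as the difficulty score (the paper combines gradient variance with confidence metrics; this sketch is not the FedCurv implementation):

```python
from statistics import pvariance

def curriculum_aggregate(client_updates, n_stages=2):
    """Order client model updates by variance (a stand-in for a
    difficulty score) and fold them into the global average stage by
    stage, stable low-variance clients first."""
    ranked = sorted(client_updates, key=pvariance)
    stage_size = max(1, len(ranked) // n_stages)
    global_model, seen = None, []
    for start in range(0, len(ranked), stage_size):
        seen.extend(ranked[start:start + stage_size])
        # running aggregate over all updates admitted so far
        global_model = [sum(vals) / len(seen) for vals in zip(*seen)]
    return global_model
```

After the final stage all clients are included, so the end point matches FedAvg; the curriculum only changes the order in which unstable updates influence intermediate models.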

  • Research Article
  • 10.55041/ijsrem53367
Image Auto-Compression using Sharp and AWS Lambda
  • Oct 31, 2025
  • INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT
  • Ms Farhina S Sayyad + 1 more

Abstract— In today’s digital era, users frequently upload high-resolution images, which often lead to system performance issues, slower load times, and excessive cloud storage usage. Manual image optimization remains inefficient and prone to human error for both developers and end-users. This paper introduces an automated, serverless image optimization pipeline utilizing AWS Lambda in combination with the Sharp.js library. When an image is uploaded to Amazon S3, it activates a Lambda function that automatically compresses and optimizes the image into a web-friendly format without noticeable quality degradation. This approach enables real-time image compression without the need for backend server management, thereby minimizing storage requirements, improving application speed, and enhancing user experiences across various platforms. In the modern internet-driven landscape, images represent a significant portion of the data transmitted across both web and mobile applications. Studies indicate that over 65% of webpage data weight is attributed to images, underlining the necessity of efficient image management. While high-resolution visuals are crucial for superior user engagement, they increase bandwidth consumption, load time, and cloud storage expenses. Traditional optimization approaches demand manual pre-processing or rely on specialized backend servers, which introduces inefficiency, cost, and maintenance challenges. This study proposes a completely automated, serverless pipeline for image compression and optimization using AWS Lambda and Sharp.js. Leveraging AWS Lambda’s event-driven framework, the system triggers compression operations whenever new images are uploaded to S3. Sharp.js, built upon the efficient libvips engine, performs resizing and compression operations while maintaining visual quality. The integration of serverless computing with this high-performance library ensures real-time automation, scalability, and cost efficiency.
Furthermore, this research introduces two innovative enhancements: 1. A Deep Reinforcement Learning (DRL)-based predictive resource provisioning mechanism that mitigates cold start latency. 2. A Semantic-Aware Adaptive Compression (S-ADC) algorithm that intelligently modifies compression settings based on image content and semantic complexity. Experimental evaluations conducted across formats such as JPEG, PNG, WebP, and AVIF reveal considerable reductions in file size while preserving visual fidelity. The proposed system not only enhances accessibility for users with limited bandwidth but also reduces cloud expenses and supports sustainable computing practices. By merging serverless infrastructure with adaptive intelligence, this work delivers a scalable, cost-effective, and eco-friendly solution for image optimization applicable to real-world web and mobile platforms.
Keywords—Cloud Computing, Serverless Architecture, AWS Lambda, Sharp.js, Image Compression, Reinforcement Learning, Adaptive Compression, Media Optimization, Cloud Efficiency.
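The S-ADC idea of tuning compression settings to content complexity can be approximated with a much simpler stdlib sketch, using byte entropy as a stand-in for semantic complexity and zlib level for quality (both substitutions are ours, not the paper's algorithm, which operates on image semantics via Sharp.js):

```python
import math
import zlib

def byte_entropy(data):
    """Shannon entropy of the byte distribution, in bits per byte
    (0 for a constant payload, 8 for uniformly distributed bytes)."""
    counts = {}
    for b in data:
        counts[b] = counts.get(b, 0) + 1
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def adaptive_compress(data):
    """Pick a zlib level from content complexity: simple low-entropy
    payloads get a fast light level, complex ones the maximum level."""
    level = 1 if byte_entropy(data) < 4.0 else 9
    return level, zlib.compress(data, level)
```

The 4.0 bits/byte cutoff is an arbitrary illustrative threshold; a real deployment would calibrate it against measured size/latency trade-offs.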

  • Research Article
  • 10.21512/ijcshai.v2i2.14533
Adaptive Gradient Compression: An Information-Theoretic Analysis of Entropy and Fisher-Based Learning Dynamics
  • Oct 30, 2025
  • International Journal of Computer Science and Humanitarian AI
  • Hidayaturrahman Hidayaturrahman

Deep neural networks require intensive computation and communication due to the large volume of gradient updates exchanged during training. This paper investigates Adaptive Gradient Compression (AGC), an information-theoretic framework that reduces redundant gradients while preserving learning stability. Two independent compression mechanisms are analyzed: an entropy-based scheme, which filters gradients with low informational uncertainty, and a Fisher-based scheme, which prunes gradients with low sensitivity to the loss curvature. Both approaches are evaluated on the CIFAR-10 dataset using a ResNet-18 model under identical hyperparameter settings. Results show that entropy-guided compression achieves a 33.8× reduction in gradient density with only a 4.4% decrease in test accuracy, while Fisher-based compression attains 14.3× reduction and smoother convergence behavior. Despite modest increases in per-iteration latency, both methods maintain stable training and demonstrate that gradient redundancy can be systematically controlled through information metrics. These findings highlight a new pathway toward information-aware optimization, where learning efficiency is governed by the informational relevance of gradients rather than their magnitude alone. Furthermore, this study emphasizes the practical significance of integrating information theory into deep learning optimization. By selectively transmitting gradients that carry higher information content, AGC effectively mitigates communication bottlenecks in distributed training environments. Experimental analyses further reveal that adaptive compression dynamically adjusts to training dynamics, providing robustness across various learning stages. The proposed framework can thus serve as a foundation for developing future low-overhead optimization methods that balance accuracy, stability, and efficiency, all crucial aspects for large-scale deep learning deployments in edge and cloud computing contexts.
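A minimal sketch of the two ingredients: an entropy estimate over a gradient histogram, and a sparsifier that transmits only a fraction of the gradients. The top-k magnitude filter below is a stand-in baseline, not the paper's entropy- or Fisher-guided rule:

```python
import math

def gradient_entropy(grads, bins=8):
    """Histogram-based entropy of a gradient vector, in bits; a
    low value signals a redundant, highly predictable update."""
    lo, hi = min(grads), max(grads)
    width = (hi - lo) / bins or 1.0   # guard against a constant vector
    counts = [0] * bins
    for g in grads:
        counts[min(bins - 1, int((g - lo) / width))] += 1
    n = len(grads)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def compress_gradients(grads, keep_frac=0.1):
    """Keep only the highest-magnitude fraction of gradients and zero
    the rest (the usual top-k baseline that information-guided
    schemes refine)."""
    k = max(1, int(len(grads) * keep_frac))
    thresh = sorted((abs(g) for g in grads), reverse=True)[k - 1]
    return [g if abs(g) >= thresh else 0.0 for g in grads]
```

In a distributed setting, only the non-zero entries (plus their indices) would be transmitted, which is where the reported density reductions come from.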

  • Research Article
  • 10.63163/jpehss.v3i4.761
Minimization of Ethereum Transaction Fees Using AI and Compression Techniques
  • Oct 25, 2025
  • Physical Education, Health and Social Sciences
  • Ramia Arshad + 4 more

Blockchain technology has transformed decentralized data exchange and digital payments, but consistently high gas prices pose a significant challenge to its scalability and efficiency. This research explores the role of AI-driven gas price prediction and data compression methods in reducing gas utilization in blockchain systems, with special emphasis on Ethereum transactions. Using actual Ethereum transaction history, we compare the performance of compressed versus uncompressed payloads with three different compression algorithms: Zlib, Brotli, and Gzip. Beyond that, a linear regression model is also trained to forecast hourly gas price fluctuations given past transaction history. The methodology includes thorough statistical analysis to provide accurate and reproducible results. Our results show that compressing text data over 141 bytes using the Zlib algorithm prior to making transactions on the Ethereum network decreases the amount of gas used without altering system time. This validates the efficiency of combining data compression with gas price forecasting in minimizing transaction costs without affecting performance. Moreover, our study further encompasses investigation of actual gas price trends and provides real-world insights for optimizing timing strategies for economic transaction execution. These results enhance the knowledge of Ethereum gas dynamics and provide valuable solutions for enhancing economic efficiency and resource utilization in applications based on blockchain. Future efforts will involve applying the framework to the Ethereum mainnet, using deep learning models for increased prediction accuracy, and adaptive compression dependent on network state and transaction size.
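The reported break-even behavior suggests a simple guard: compress calldata only past a size threshold. A stdlib sketch, with the 141-byte figure taken from the abstract and the one-byte flag framing being our own assumption:

```python
import zlib

COMPRESS_THRESHOLD = 141  # bytes; break-even size reported in the study

def prepare_payload(data: bytes) -> bytes:
    """Compress transaction calldata with zlib only when it exceeds the
    break-even size, prefixing one flag byte so the receiver can tell
    compressed from raw payloads."""
    if len(data) > COMPRESS_THRESHOLD:
        return b"\x01" + zlib.compress(data, 9)
    return b"\x00" + data

def read_payload(blob: bytes) -> bytes:
    """Invert prepare_payload using the flag byte."""
    return zlib.decompress(blob[1:]) if blob[:1] == b"\x01" else blob[1:]
```

Below the threshold the flag byte is the only overhead; above it, repetitive text payloads shrink substantially, which on Ethereum translates into fewer non-zero calldata bytes billed for gas.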

  • Research Article
  • 10.52783/jisem.v10i60s.13228
Federated Learning-Enabled Cloud-Edge Architecture: Design Patterns and Systems Integration
  • Sep 30, 2025
  • Journal of Information Systems Engineering and Management
  • Satya Teja Muddada

The rapid growth of edge computing infrastructure and increasingly strict privacy laws have revolutionized machine learning paradigms at their foundations, requiring a shift from centralized to advanced distributed learning frameworks. Federated learning stands out as a groundbreaking computational model that allows collaborative model training over decentralized data sources with complete data locality and individual privacy preservation. Conventional server-based federated learning solutions face significant challenges when implemented in heterogeneous edge environments with fluctuating network connectivity, extreme variation in computational power, and highly non-independent data distribution patterns reflecting diverse geographical and demographic features. Cloud-edge collaborative architectures have recently emerged to bridge these multi-dimensional challenges through advanced hierarchical aggregation techniques, strategically tapping the complementary computational power of edge nodes and centralized cloud resources. Hierarchical designs exhibit improved convergence performance by supporting intermediate aggregation at the edge, lowering communication overhead through localized knowledge consolidation that reflects regional data properties and usage patterns. The combination of several aggregation layers with resource-conscious scheduling policies, adaptive compression algorithms, and holistic privacy protection mechanisms provides a strong foundation for production-quality federated learning implementations, supporting adaptive client participation, rich hardware heterogeneity via adaptive resource scheduling, and provable privacy guarantees while maintaining reasonable model performance across application domains such as telecommunications, healthcare, and industrial Internet of Things installations.

  • Research Article
  • Cited by 1
  • 10.1007/s10586-025-05547-y
Optimizing blockchain file storage: enhancing performance and reducing ledger size with adaptive compression and advanced data structures
  • Sep 19, 2025
  • Cluster Computing
  • Muhammed Tmeizeh + 2 more

Abstract Ensuring the digital preservation of files and data in an immutable environment is essential for maintaining security, integrity, and trust. The decentralized architecture, robust security, and reliability of blockchain position it as a leading solution for storage technologies requiring tamper-proof integrity. This work presents an enhanced version of a previously published framework, introducing a technique to optimize on-chain file storage efficiency. The proposed solution leverages a blockchain ledger for data storage through a client and smart contract framework, integrating an optimal file compression technique, Google Protocol Buffers architecture, file chunking, and a verification process to minimize ledger size growth and enhance retrieval performance. Experimental results demonstrate that the enhanced framework achieves greater ledger size reduction than its predecessor, leading to improved data storage and retrieval efficiency. These improvements make the framework better suited for applications requiring an immutable storage environment, such as medical records, digital certificates, the judicial sector, and other similar domains.
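The chunking-plus-verification pipeline can be sketched with stdlib pieces, using zlib in place of the framework's optimal compressor selection and SHA-256 chunk hashes as the verification step (chunk size and framing are illustrative, not the paper's parameters):

```python
import hashlib
import zlib

CHUNK = 1024  # bytes per on-chain chunk (illustrative size)

def store_file(data: bytes):
    """Compress, split into fixed-size chunks, and hash each chunk so a
    verifier can check integrity at retrieval time."""
    packed = zlib.compress(data, 9)
    chunks = [packed[i:i + CHUNK] for i in range(0, len(packed), CHUNK)]
    manifest = [hashlib.sha256(c).hexdigest() for c in chunks]
    return chunks, manifest

def load_file(chunks, manifest):
    """Verify every chunk against the manifest, then reassemble."""
    for c, h in zip(chunks, manifest):
        if hashlib.sha256(c).hexdigest() != h:
            raise ValueError("chunk failed integrity check")
    return zlib.decompress(b"".join(chunks))
```

In the paper's setting the manifest would live in a smart contract while chunk payloads occupy the ledger, so compression before chunking directly limits ledger growth.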

  • Research Article
  • 10.15587/1729-4061.2025.335729
Speeding up binomial compression based on binary binomial numbers
  • Aug 29, 2025
  • Eastern-European Journal of Enterprise Technologies
  • Igor Kulyk + 3 more

The object of this study is adaptive compression of general-form binary sequences based on binary binomial numbers. The task addressed is to enable high compression speed of binary information based on binomial numbers under conditions of uncertainty in the characteristics of the binary sequences being compressed. One of the factors that reduce the efficiency of binomial compression is uncontrolled transitions of the number of unit combinations into the region of inefficient use, i.e., the worst compression ratios. In this regard, the work applies an adaptive approach to binomial compression, based on the choice of an encoding technique depending on the number of units in the processed sequence. This approach yields a several-fold reduction in the time spent processing binary combinations that are not compressible, and consequently an increase in the average speed of binomial compression with a small (three to five percent) decrease in the compression ratio. The adaptive compression process model includes the stages of comparing the calculated numbers of binary units with the compression conditions and selecting the coding technique based on binary binomial numbers. If the current value of the number of units goes beyond the compression conditions, the calculation of the number of units is stopped, and the processed sequence remains unchanged. This eliminates unnecessary time costs when the compression ratio would fall below unity. In practice, the adaptive approach to compression based on binary binomial numbers is effective when the binary sequences being compressed have uncertain characteristics and their preliminary evaluation is impossible or difficult.
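The early-abort step, stopping the unit count as soon as the sequence provably falls outside the efficient-compression window, can be sketched as follows (the window bounds `lo`/`hi` are hypothetical parameters standing in for the paper's compression conditions):

```python
def count_ones_with_abort(bits, lo, hi):
    """Count unit bits, stopping as soon as the count exceeds `hi` or
    can no longer reach `lo`; the caller then leaves the sequence
    uncompressed instead of wasting time on a ratio below unity."""
    ones = 0
    for i, b in enumerate(bits):
        ones += b
        remaining = len(bits) - i - 1
        if ones > hi or ones + remaining < lo:
            return None  # outside the efficient-compression window
    return ones  # within the window: proceed with binomial encoding
```

Returning `None` early is what buys the speedup: incompressible sequences are rejected after scanning only a prefix, while sequences inside the window pay the full count and are then encoded.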

  • Research Article
  • 10.1111/ddg.15829
Treating cellulitis promptly with compression therapy reduces C-reactive protein levels and symptoms – a randomized controlled trial
  • Aug 11, 2025
  • Journal Der Deutschen Dermatologischen Gesellschaft
  • Sören Dräger + 3 more

Background and Objectives: Cellulitis is an acute bacterial infection of the skin. Initial treatment primarily consists of systemic antibiotic therapy. Compression therapy is subsequently introduced to reduce edema. However, the optimal timing for initiating compression therapy remains a subject of debate, as early application is thought to potentially exacerbate the infection. Patients and Methods: This study was designed as a prospective, randomized controlled trial. Patients admitted for treatment of lower leg cellulitis were recruited and randomly assigned in a 1:1 ratio. In addition to standard therapy, the intervention group received compression therapy one day after initiation of antibiotic treatment, using medical adaptive compression wraps. C-reactive protein (CRP) levels, reduction in erythema, and patient-reported symptoms were recorded. Results: A total of 34 patients were included in the analysis. Early application of medical adaptive compression wraps alleviated symptoms without causing complications. In patients with initial CRP levels above 50 mg/dl at admission, CRP reduction occurred more rapidly. Conclusions: Our data suggest that the early application of medical adaptive compression wraps within 24 hours of initiating antibiotic treatment alleviates symptoms, supports recovery, and does not lead to worsening of inflammation.

  • Research Article
  • 10.32996/jcsts.2025.7.7.72
Optimizing Real-Time Bidding (RTB) Latency in Ad Exchanges: A Comprehensive Analysis
  • Jul 17, 2025
  • Journal of Computer Science and Technology Studies
  • Subhash Vinnakota

This examination explores the critical role of latency optimization in Real-Time Bidding (RTB) systems within programmatic advertising. Beginning with foundational RTB mechanics, the discussion identifies key contributors to system latency including network transmission delays, DSP processing constraints, SSP auction dynamics, and ad rendering challenges. Technical approaches to latency reduction are analyzed across multiple domains: network optimization through edge computing and data compression; computational efficiency improvements via parallelization and caching; auction mechanism refinements; and rendering performance enhancements. The integration of artificial intelligence and machine learning represents a transformative advancement, with applications including predictive bidding models, dynamic routing systems, adaptive compression techniques, real-time performance monitoring, and self-optimizing infrastructures. The business impact assessment demonstrates how latency optimization delivers measurable benefits to publishers through enhanced bid participation, to advertisers through improved targeting capabilities, and to users through superior browsing experiences. Future directions point toward edge AI deployment, 5G connectivity integration, decentralized exchange architectures, and privacy-centric processing models as emerging opportunities alongside remaining research gaps in cross-platform optimization and holistic end-to-end approaches.

  • Research Article
  • 10.1088/2631-8695/ade657
A cloud-edge collaborative deep network for signal compression and reconstruction in aerospace testing
  • Jul 1, 2025
  • Engineering Research Express
  • Youlong Lyu + 4 more

Abstract To address the real-time processing requirements of massive multi-source signals in aerospace product integrated testing, this paper proposes a cloud-edge collaborative signal compression and reconstruction method based on a deep compressed sensing network. Targeting the transmission bottlenecks in cloud-edge architectures and the fragmentation of temporal signal dependencies, a dual-stage optimization method is developed: (1) At the edge side, a dual-branch convolutional compression network is designed to achieve adaptive compression of multi-form signals through global feature observation and local attention enhancement. (2) On the cloud side, a bidirectional LSTM (BiLSTM) combined with a progressive stacking structure is employed to establish a cross-temporal signal correlation reconstruction mechanism. The proposed method is evaluated on both public dataset (500 Hz, 12-channel, n = 600) and real-world dataset (1000 Hz, 190k points/signal, n = 396). Experimental results demonstrate superior performance over traditional compressed sensing and deep learning methods, achieving lower reconstruction errors while maintaining high compression rates, thereby effectively balancing the trade-off between compression efficiency and reconstruction fidelity.

  • Research Article
  • 10.29196/jubpas.v33i2.5778
A Comparative Study of Compression Techniques for Medical Images
  • Jun 30, 2025
  • JOURNAL OF UNIVERSITY OF BABYLON for Pure and Applied Sciences
  • Hadeel Talib Mangi + 5 more

Background: Accurate diagnosis and treatment rely on medical imaging, which presents challenges due to the vast data generated by MRIs and CT scans. Managing such volumes is complex in storage and transmission. Efficient image compression techniques are essential for telemedicine and cloud-based systems, enabling seamless data transfer while preserving quality. Materials and Methods: This study compares three widely used compression techniques: Adaptive Huffman Coding (lossless), Discrete Cosine Transform (DCT) (lossy), and Adaptive Multi-Layer Run-Length Encoding (AMLRLE) (lossless). A dataset of DICOM medical images was used, and techniques were evaluated based on three key performance metrics: compression ratio (CR) for data reduction, processing time (PT) for computational efficiency, and Peak Signal-to-Noise Ratio (PSNR) for assessing image quality. Results: Huffman Coding, a lossless technique, achieved a high compression ratio of 0.972 with an average compression time of 0.028 seconds. However, it exhibited lower image quality than DCT and AMLRLE. DCT, a lossy method that converts image data into frequency components, provided a compression ratio of 0.964, a processing time of 0.088 seconds, and a PSNR of 317.55 dB. AMLRLE, another lossless technique, showed performance nearly identical to DCT, maintaining the same compression ratio, processing time, and PSNR. Conclusion: Huffman Coding suits applications needing fast processing, while DCT and AMLRLE are better for high-quality imaging. The choice of compression method depends on system needs: speed, storage, or diagnostic precision. Future research will integrate these techniques with machine learning to enhance adaptive compression for medical imaging.
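Run-length encoding, the core of the AMLRLE family evaluated here, in its simplest single-layer form (the paper's adaptive multi-layer variant is more elaborate; this is just the base scheme):

```python
def rle_encode(pixels):
    """Run-length encode a pixel sequence as [value, run] pairs —
    effective on the large uniform regions typical of medical images."""
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1      # extend the current run
        else:
            runs.append([p, 1])   # start a new run
    return runs

def rle_decode(runs):
    """Expand [value, run] pairs back into the original sequence."""
    return [p for p, n in runs for _ in range(n)]
```

A scan line of 50 background pixels followed by 50 tissue pixels collapses from 100 values to two pairs, which is why RLE stays competitive on DICOM data despite its simplicity.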

  • Research Article
  • 10.1145/3715773
An Adaptive Language-Agnostic Pruning Method for Greener Language Models for Code
  • Jun 19, 2025
  • Proceedings of the ACM on Software Engineering
  • Mootez Saad + 4 more

Language models of code have demonstrated remarkable performance across various software engineering and source code analysis tasks. However, their demanding computational resource requirements and consequential environmental footprint remain significant challenges. This work introduces ALPINE, an adaptive, programming language-agnostic pruning technique designed to substantially reduce the computational overhead of these models. The proposed method offers a pluggable layer that can be integrated with all Transformer-based models. With ALPINE, input sequences undergo adaptive compression throughout the pipeline, reaching a size up to 3× smaller than their initial size, resulting in significantly reduced computational load. Our experiments on two software engineering tasks, defect prediction and code clone detection, across three language models (CodeBERT, GraphCodeBERT, and UniXCoder) show that ALPINE achieves up to a 50% reduction in FLOPs, a 58.1% decrease in memory footprint, and a 28.1% improvement in throughput on average. This led to a reduction in CO2 emissions by up to 44.85%. Importantly, it achieves this reduction in computational resources while maintaining up to 98.1% of the original predictive performance. These findings highlight the potential of ALPINE in making language models of code more resource-efficient and accessible while preserving their performance, contributing to the overall sustainability of their adoption in software development. It also sheds light on redundant and noisy information in source code analysis corpora, as shown by the substantial sequence compression achieved by ALPINE.
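Adaptive sequence compression of the kind ALPINE performs can be illustrated as order-preserving token pruning; the per-token scores below are hypothetical importances, not ALPINE's actual criterion:

```python
def prune_sequence(tokens, scores, keep_ratio):
    """Drop the lowest-scoring tokens while preserving the original
    order of the survivors, shrinking the sequence every layer sees
    downstream. `scores` are assumed per-token importances."""
    k = max(1, int(len(tokens) * keep_ratio))
    keep = sorted(
        sorted(range(len(tokens)), key=lambda i: scores[i], reverse=True)[:k]
    )
    return [tokens[i] for i in keep]
```

Because self-attention cost grows quadratically in sequence length, halving the sequence this way cuts the attention FLOPs of every subsequent layer by roughly 4×, which is the mechanism behind the reported savings.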

  • Research Article
  • 10.3847/1538-4357/add724
Cosmology with One Galaxy: Autoencoding the Galaxy Properties Manifold
  • Jun 12, 2025
  • The Astrophysical Journal
  • Amanda Lue + 4 more

Cosmological simulations like CAMELS and IllustrisTNG characterize hundreds of thousands of galaxies using various internal properties. Previous studies have demonstrated that machine learning can be used to infer the cosmological parameter Ωm from the internal properties of even a single randomly selected simulated galaxy. This ability was hypothesized to originate from galaxies occupying a low-dimensional manifold within a higher-dimensional galaxy property space, which shifts with variations in Ωm. In this work, we investigate how galaxies occupy the high-dimensional galaxy property space, particularly the effect of Ωm and other cosmological and astrophysical parameters on the putative manifold. We achieve this by using an autoencoder with an information-ordered bottleneck, a neural layer with adaptive compression, to perform dimensionality reduction on individual galaxy properties from CAMELS simulations, which are run with various combinations of cosmological and astrophysical parameters. We find that for an autoencoder trained on the fiducial set of parameters, the reconstruction error increases significantly when the test set deviates from fiducial values of Ωm and A_SN1, indicating that these parameters shift galaxies off the fiducial manifold. In contrast, variations in other parameters such as σ8 cause negligible error changes, suggesting galaxies shift along the manifold. These findings provide direct evidence that the ability to infer Ωm from individual galaxies is tied to the way Ωm shifts the manifold. Physically, this implies that parameters like σ8 produce galaxy property changes resembling natural scatter, while parameters like Ωm and A_SN1 create unsampled properties, extending beyond the natural scatter in the fiducial model.

  • Research Article
  • 10.1038/s44159-025-00458-6
Adaptive compression as a unifying framework for episodic and semantic memory
  • Jun 5, 2025
  • Nature Reviews Psychology
  • David G Nagy + 2 more


  • Research Article
  • 10.1002/sdtp.18256
51‐2: Invited Paper: Touch Sensing and Graphics Processing in MicroIC Displays
  • Jun 1, 2025
  • SID Symposium Digest of Technical Papers
  • I Knausz + 15 more

MicroICs advance display backplane technology by offering superior power efficiency and functionality over TFTs. They integrate CMOS logic for enhanced features, utilize innovative driving schemes, and employ scalable mass transfer techniques. These improvements result in high‐performance displays with excellent optical characteristics and opportunities for smart sensor integration, ideal for portable and wearable devices. MicroIC‐driven displays enhance graphics processing with techniques like color quantization, additive rendering, and adaptive compression. These innovations reduce bandwidth, maximize visual quality, and boost real‐time performance for interactive applications and gaming.
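Run-length encoding is a typical building block of the bandwidth-reduction schemes the abstract alludes to; a generic byte-oriented sketch (not the MicroIC-specific scheme, whose details the abstract does not give):

```python
def rle_encode(data: bytes) -> list[tuple[int, int]]:
    """Collapse runs of identical bytes into (value, run_length) pairs."""
    runs: list[tuple[int, int]] = []
    for b in data:
        if runs and runs[-1][0] == b:
            runs[-1] = (b, runs[-1][1] + 1)
        else:
            runs.append((b, 1))
    return runs

def rle_decode(runs: list[tuple[int, int]]) -> bytes:
    """Expand (value, run_length) pairs back into the original bytes."""
    out = bytearray()
    for value, count in runs:
        out.extend([value] * count)
    return bytes(out)

# A flat scanline (long runs of one color) compresses very well;
# the round trip is lossless by construction.
row = bytes([255] * 12 + [0] * 4)
assert rle_decode(rle_encode(row)) == row
```

Flat UI regions and quantized color palettes lengthen runs, which is why color quantization and run-length coding pair naturally in display pipelines.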

