
Related Topics

  • Code Optimization

Articles published on Code size

1183 Search results
  • Research Article
  • 10.1142/s0218194025501086
HVDet: Heap Vulnerability Detection Method based on P-PDG representation and Bi-GRU algorithm
  • Dec 24, 2025
  • International Journal of Software Engineering and Knowledge Engineering
  • Rong Ren + 5 more

Heap vulnerabilities pose a significant risk to software, leading to stability issues such as slowdown and resource depletion. These vulnerabilities can potentially disrupt critical operations and compromise overall system performance, especially in automated control systems implemented in C/C++. While various artificial intelligence-based detection methods have been studied, there has been limited analysis of the detection process and of structural and semantic features, resulting in lower detection efficiency. This paper proposes a novel heap vulnerability detection (HVDet) method based on a Pointer Program Dependency Graph (P-PDG) representation and a Bidirectional Gated Recurrent Unit (Bi-GRU) algorithm. Built through inter-procedural analysis, the P-PDG serves as an innovative code representation model that emphasizes pointer operations, which are closely associated with heap vulnerabilities. It reduces code size while simultaneously capturing a broader range of structural and semantic features of the source code. A mixed feature matrix incorporating these features from code slices is then generated as input for the Bi-GRU algorithm. Compared with seven state-of-the-art (SOTA) vulnerability detection tools, HVDet demonstrates superior performance, and it identified three heap vulnerabilities in real-world software including the Linux kernel, Espruino, and LibreDWG.
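The abstract describes feeding a mixed feature matrix derived from code slices into a Bi-GRU. As a rough sketch of that stage only, here is a bidirectional GRU forward pass in plain NumPy; the dimensions, random weights, and toy feature matrix are hypothetical stand-ins, not HVDet's trained model:

```python
import numpy as np

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU cell update (biases omitted for brevity)."""
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sig(Wz @ x + Uz @ h)               # update gate
    r = sig(Wr @ x + Ur @ h)               # reset gate
    h_cand = np.tanh(Wh @ x + Uh @ (r * h))
    return (1 - z) * h + z * h_cand

def bi_gru_encode(seq, hidden=8, seed=0):
    """Encode a (timesteps, features) matrix by running a GRU in both
    directions and concatenating the two final hidden states."""
    rng = np.random.default_rng(seed)
    d = seq.shape[1]
    def make_params():
        return [rng.standard_normal((hidden, n)) * 0.1
                for n in (d, hidden, d, hidden, d, hidden)]
    fwd, bwd = make_params(), make_params()
    hf = np.zeros(hidden)
    for x in seq:                # forward direction
        hf = gru_step(x, hf, *fwd)
    hb = np.zeros(hidden)
    for x in seq[::-1]:          # backward direction
        hb = gru_step(x, hb, *bwd)
    return np.concatenate([hf, hb])

# A toy "mixed feature matrix" for one code slice: 5 tokens, 4 features each.
features = np.random.default_rng(1).standard_normal((5, 4))
vec = bi_gru_encode(features)
print(vec.shape)   # (16,)
```

In a detector along these lines, the concatenated summary vector would be fed to a classification head that labels the slice as vulnerable or not.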

  • Research Article
  • 10.1145/3776753
Lightweight Code Outlining for Android Applications
  • Dec 16, 2025
  • ACM Transactions on Architecture and Code Optimization
  • Shuo Jiang + 8 more

Android employs Ahead-of-Time (AOT) pre-compilation to enhance application launch speed and runtime performance. However, the generated OAT files over-consume the scarce memory and storage resources of mobile devices, degrading user experience. Our analysis of several Android applications reveals an average code redundancy of 25.4%. To reduce code size via redundancy elimination, we propose Calibro, a Compilation-assisted link-time binary code outlining method. Because link-time outlining can incur high build overhead, we introduce several optimizations to better suit resource-limited mobile devices, along with optional filtering strategies to further meet performance requirements. Experimental results show that, under common scenarios, our method reduces OAT file code size by 19.6% and runtime memory usage by 15.4% on average, with negligible runtime performance degradation in terms of user experience and tolerable build overhead. The proposed method therefore shows promise for industrial deployment on real-world Android mobile devices.
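Code outlining, as described above, replaces repeated instruction sequences with calls to a single shared copy. A toy model of the idea, with an invented instruction set, a unit call cost, and a simple profitability test (none of which are Calibro's actual design), looks like this:

```python
from collections import Counter

def outline(blocks, call_cost=1):
    """Outline instruction sequences that occur more than once.
    blocks: list of tuples of instructions.
    Returns (main program, shared outlined bodies, new instruction count)."""
    counts = Counter(blocks)
    shared, names, main = [], {}, []
    for body in blocks:
        k = counts[body]
        # Outline only if one shared copy plus k calls beats k inline copies.
        if k > 1 and len(body) + k * call_cost < k * len(body):
            if body not in names:
                names[body] = f"outlined_{len(shared)}"
                shared.append((names[body], body))
            main.append(("bl", names[body]))   # a single call instruction
        else:
            main.extend(body)                  # keep the block inline
    new_size = len(main) + sum(len(b) for _, b in shared)
    return main, shared, new_size

# Three identical 4-instruction blocks plus one unique block.
prog = [("push", "add", "mul", "ret")] * 3 + [("nop",)]
main, shared, size = outline(prog)
orig = sum(len(b) for b in prog)
print(orig, size)   # prints: 13 8
```

The size reduction here (13 to 8 instructions) mirrors, in miniature, the redundancy-elimination trade-off the paper quantifies for OAT files.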

  • Research Article
  • 10.3390/e27121245
On Best Erasure Wiretap Codes: Equivocation Matrices and Design Principles
  • Dec 9, 2025
  • Entropy
  • Willie K Harrison + 3 more

Physical-layer security can aid in establishing secure telecommunication networks including cellular, Internet of Things, and telemetry networks, among others. Channel sounding techniques and/or telemetry systems for reporting channel conditions, coupled with superior wiretap code design are necessary to implement such secure systems. In this paper, we present recent results in best wiretap coset code design for the binary erasure wiretap channel. We define equivocation matrices, and showcase their properties and utility in constructing good, and even the best, wiretap codes. We outline the notion of equivalence for wiretap coset codes, and use it to reduce the search space in exhaustive searches for best small codes. Through example, we show that the best codes do not exist for some code sizes. We also prove that simplex codes are better than codes repeating one column multiple times in their generator matrix.

  • Research Article
  • 10.70389/pjs.100173
The Task of Mathematical Modelling Using a Programming Language: A Scoping Review
  • Nov 20, 2025
  • Premier Journal of Science
  • Aliima Mamatkasymova + 4 more

BACKGROUND This study aims to explore the role of programming languages in the development and implementation of mathematical models, with a focus on the integration of advanced computing technologies. MATERIALS AND METHODS Utilising a narrative review method, the study methodically examines the body of research on mathematical modelling and the use of programming languages like Python, C++, and Julia. The performance of these languages is compared in a number of mathematical modelling tasks, such as numerical methods, linear algebra, and physical modelling. RESULTS The paper emphasises how cloud computing, artificial intelligence, and hybrid algorithms have significantly improved the precision and effectiveness of mathematical models. While C++ offers great performance in computationally demanding jobs but necessitates more development effort, Python has been demonstrated to be beneficial for speedy development because of its vast library ecosystem. Julia is a promising language for mathematical modelling because it strikes a compromise between usability and performance. The investigation also shows that the choice of computing methods and programming languages has a significant impact on the effectiveness of mathematical models. Every language offers advantages based on the particular modelling task, as shown by a thorough analysis of execution time, memory utilisation, and code size. Furthermore, the combination of quantum computing and machine learning offers fresh possibilities for resolving increasingly challenging issues that conventional approaches are unable to effectively handle. CONCLUSION According to the study’s findings, mathematical modelling will depend more and more on the cooperation of traditional approaches, contemporary programming languages, and cutting-edge technologies like artificial intelligence and quantum computing.

  • Research Article
  • 10.12732/ijam.v38i11s.1286
TOWARDS AUTONOMOUS CODE OPTIMIZATION: A REINFORCEMENT LEARNING FRAMEWORK FOR COMPILER DESIGN
  • Nov 2, 2025
  • International Journal of Applied Mathematics
  • S.Venkatesan

Modern compiler optimization remains one of the most challenging problems in computer systems research. Conventional compiler pipelines rely on static, manually crafted heuristics to determine optimization passes, instruction scheduling, and register allocation. However, as software complexity and hardware heterogeneity increase, these heuristics struggle to generalize across workloads, architectures, and programming paradigms. This paper proposes an autonomous reinforcement learning (RL) framework for compiler design, in which optimization pass selection and parameter tuning are treated as sequential decision-making tasks. The proposed system formulates compiler optimization as a Markov Decision Process (MDP), where the state represents the intermediate representation (IR) of code, the actions correspond to possible optimization passes, and the reward is derived from performance improvements such as reduced execution time or binary size. A Graph Neural Network (GNN) encoder captures structural information from IR graphs, while a deep reinforcement learning agent (e.g., PPO or DQN) learns optimization policies that generalize across programs and architectures. The framework integrates with the LLVM and MLIR compiler infrastructures and is evaluated on benchmark suites including SPEC CPU2017 and PolyBench. Experimental results indicate up to 35% performance improvement over standard -O3 optimization levels and 20% reduction in code size without compromising compilation time. Ablation studies confirm that GNN-based state encoding and multi-objective reward shaping are essential to policy stability and cross-architecture generalization. This study contributes a modular, scalable approach to autonomous code optimization, bridging the gap between classical compiler theory and data-driven decision systems. The paper concludes with open challenges in interpretability, real-time adaptation, and integration with differentiable compiler toolchains.
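The MDP formulation above (states built from the code, actions as optimization passes, reward as size or time improvement) can be illustrated with a deliberately tiny tabular Q-learning toy. The passes, their size effects, and all hyperparameters below are fabricated for illustration; the paper's actual system uses a GNN encoder with PPO/DQN over LLVM/MLIR IR:

```python
import random

PASSES = ["inline", "dce", "constfold"]

def apply_pass(size, p, history):
    """Hypothetical size effects; dce is far more effective after inlining."""
    if p == "inline":    return size * 1.10   # inlining grows code first
    if p == "dce":       return size * (0.70 if "inline" in history else 0.95)
    if p == "constfold": return size * 0.90

def train(episodes=2000, horizon=2, alpha=0.5, eps=0.2, seed=0):
    rng = random.Random(seed)
    Q = {}   # state (tuple of passes applied) -> {pass: value}
    for _ in range(episodes):
        size, hist = 100.0, ()
        for _ in range(horizon):
            qs = Q.setdefault(hist, {p: 0.0 for p in PASSES})
            a = rng.choice(PASSES) if rng.random() < eps else max(qs, key=qs.get)
            new = apply_pass(size, a, hist)
            nxt = hist + (a,)
            future = (max(Q.setdefault(nxt, {p: 0.0 for p in PASSES}).values())
                      if len(nxt) < horizon else 0.0)
            qs[a] += alpha * ((size - new) + future - qs[a])   # reward = size drop
            size, hist = new, nxt
    return Q

Q = train()
size, hist = 100.0, ()
for _ in range(2):                       # greedy rollout of the learned policy
    a = max(Q[hist], key=Q[hist].get)
    size = apply_pass(size, a, hist)
    hist += (a,)
print(hist, round(size, 1))
```

The agent should discover the sequencing effect built into the toy environment: inlining is locally harmful (it grows the code) but unlocks a much larger dead-code-elimination win, exactly the kind of phase-ordering interaction that static heuristics struggle with.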

  • Research Article
  • 10.1109/tvcg.2025.3627171
Reimagining Disassembly Interfaces with Visualization: Combining Instruction Tracing and Control Flow with DisViz.
  • Oct 31, 2025
  • IEEE transactions on visualization and computer graphics
  • Shadmaan Hye + 2 more

In applications where efficiency is critical, developers may examine their compiled binaries, seeking to understand how the compiler transformed their source code and what performance implications that transformation may have. This analysis is challenging due to the vast number of disassembled binary instructions and the many-to-many mappings between them and the source code. These problems are exacerbated as source code size increases, giving the compiler more freedom to map and disperse binary instructions across the disassembly space. Interfaces for disassembly typically display instructions as an unstructured listing or sacrifice the order of execution. We design a new visual interface for disassembly code that combines execution order with control flow structure, enabling analysts to both trace through code and identify familiar aspects of the computation. Central to our approach is a novel layout of instructions grouped into basic blocks that displays a looping structure in an intuitive way. We add to this disassembly representation a unique block-based mini-map that leverages our layout and shows context across thousands of disassembly instructions. Finally, we embed our disassembly visualization in a web-based tool, DisViz, which adds dynamic linking with source code across the entire application. DisViz was developed in collaboration with program analysis experts following design study methodology and was validated through evaluation sessions with ten participants from four institutions. Participants successfully completed the evaluation tasks, hypothesized about compiler optimizations, and noted the utility of our new disassembly view. Our evaluation suggests that our new integrated view helps application developers in understanding and navigating disassembly code.

  • Research Article
  • 10.3390/e27111101
A New Lower Bound for Noisy Permutation Channels via Divergence Packing.
  • Oct 25, 2025
  • Entropy (Basel, Switzerland)
  • Lugaoze Feng + 3 more

Noisy permutation channels are applied in modeling biological storage systems and communication networks. For noisy permutation channels with strictly positive and full-rank square matrices, new achievability bounds are given in this paper, which are tighter than existing bounds. To derive this bound, we use ϵ-packing with the Kullback-Leibler divergence as a distance and introduce a novel way to illustrate the overlapping relationship of error events. This new bound shows analytically that for such a matrix W, the logarithm of the achievable code size with a given blocklength n and error probability ϵ is closely approximated by ℓ log n − Φ⁻¹(ϵ/G) + log V(W), where ℓ = rank(W) − 1, G = 2^(ℓ+1)/2, and V(W) is a characteristic of the channel referred to as the channel volume ratio. Our numerical results show that the new achievability bound significantly improves the lower bound of channel coding. Additionally, the Gaussian approximation can replace the complex computations of the new achievability bound over a wide range of relevant parameters.

  • Research Article
  • Cited: 1
  • 10.1103/2bp8-cdxc
Measurement-Based Entanglement Distillation and Constant-Rate Quantum Repeaters over Arbitrary Distances.
  • Sep 26, 2025
  • Physical review letters
  • Yu Shi + 2 more

Measurement-based quantum repeaters employ entanglement distillation and swapping across links using locally prepared resource states of minimal size and local Bell measurements. In this Letter, we introduce a systematic protocol for measurement-based entanglement distillation and its application to repeaters that can leverage any stabilizer code. Given a code, we explicitly define the corresponding resource state and derive an error-recovery operation based on all Bell measurement outcomes. Our approach offers deeper insights into the impact of resource state noise on repeater performance while also providing strategies for efficient preparation and fault-tolerant preservation of resource states. As an application, we propose a measurement-based repeater protocol based on quantum low-density parity-check (QLDPC) codes, enabling constant-yield Bell state distribution over arbitrary distances. Numerical simulations identify a fault-tolerant threshold on the total physical error per repeater segment-including errors on resource states, remotely generated Bell states, and Bell measurements-and confirm that increasing the QLDPC code size further suppresses the logical error while maintaining a fixed encoding rate. This Letter establishes a scalable backbone for future global-scale fault-tolerant quantum networks.

  • Research Article
  • Cited: 1
  • 10.4153/s0008414x25101600
The Eigenvalue Method in coding theory
  • Sep 22, 2025
  • Canadian Journal of Mathematics
  • Aida Abiad + 2 more

Abstract We lay down the foundations of the Eigenvalue Method in coding theory. The method uses modern algebraic graph theory to derive upper bounds on the size of error-correcting codes for various metrics, addressing major open questions in the field. We identify the core assumptions that allow applying the Eigenvalue Method, test it for multiple well-known classes of error-correcting codes, and compare the results with the best bounds currently available. By applying the Eigenvalue Method, we obtain new bounds on the size of error-correcting codes that often improve the state of the art. Our results show that spectral graph theory techniques capture structural properties of error-correcting codes that are missed by classical coding theory approaches.
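The abstract does not spell out the method, but spectral bounds of this flavor generalize classical results such as Hoffman's ratio bound, which caps the size of an independent set (and hence of a code, viewed as an independent set in a conflict graph) using adjacency eigenvalues. A minimal worked instance on binary codes of length 3 with minimum distance 2 (independent sets of the 3-cube graph), offered only as an illustration of the general technique, not the paper's construction:

```python
import itertools
import numpy as np

def ratio_bound(n):
    """Hoffman ratio bound on independent sets of the n-cube graph.
    Binary codes of length n with minimum distance >= 2 are exactly the
    independent sets of this graph (vertices adjacent iff Hamming distance 1)."""
    verts = list(itertools.product([0, 1], repeat=n))
    A = np.array([[1 if sum(a != b for a, b in zip(u, v)) == 1 else 0
                   for v in verts] for u in verts])
    eig = np.linalg.eigvalsh(A)          # ascending; extremes are -n and n
    k, lam_min = n, eig[0]               # n-regular graph
    # Ratio bound: alpha(G) <= |V| * (-lam_min) / (k - lam_min)
    return len(verts) * (-lam_min) / (k - lam_min)

print(ratio_bound(3))   # ~4.0, met by the even-weight code {000, 011, 101, 110}
```

Here the bound of 4 is tight: the even-weight code attains it, which is the kind of agreement between spectral bound and best known code that the paper investigates across many metrics.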

  • Research Article
  • Cited: 1
  • 10.22331/q-2025-07-22-1804
Bounds on Autonomous Quantum Error Correction
  • Jul 22, 2025
  • Quantum
  • Oles Shtanko + 4 more

Autonomous quantum memories are a way to passively protect quantum information using engineered dissipation that creates an “always-on'' decoder. We analyze Markovian autonomous decoders that can be implemented with a wide range of qubit and bosonic error-correcting codes, and derive several upper bounds and a lower bound on the logical error rate in terms of correction and noise rates. These bounds suggest that, in general, there is always a correction rate, possibly size-dependent, above which autonomous memories exhibit arbitrarily long coherence times. For any given autonomous memory, size dependence of this correction rate is difficult to rule out: we point to common scenarios where autonomous decoders that stochastically implement active error correction must operate at rates that grow with code size. For codes with a threshold, we show that it is possible to achieve faster-than-polynomial decay of the logical error rate with code size by using superlogarithmic scaling of the correction rate. We illustrate our results with several examples. One example is an exactly solvable global dissipative toric code model that can achieve an effective logical error rate that decreases exponentially with the linear lattice size, provided that the recovery rate grows proportionally with the linear lattice size.

  • Research Article
  • 10.1177/00220345251344548
AI in Learning Anatomy and Restoring Central Incisors: A Comparative Study
  • Jul 2, 2025
  • Journal of Dental Research
  • P Binvignat + 7 more

More than 1 billion individuals worldwide have experienced dental trauma, particularly children aged 7 to 12 y, predominantly affecting the anterior teeth, which has a significant impact on oral health and esthetics. Rapid emergency restorations using composite resin are followed by medium-term lab-fabricated mock-ups. Recent advancements in artificial intelligence (AI) assist dental restorations, and the objective of this study was to compare the performances of different AI approaches for the learning and reconstruction of central incisors. The study was approved by ethical committees and followed AI-in-dentistry recommendations. STL files of mature permanent maxillary incisors without severe wear were collected from 3 universities. Principal component analysis (PCA) and Deep Learning of Signed Distance Functions (DeepSDF) models were trained using these files. The learning of the PCA and DeepSDF approaches was 3-fold cross-validated, and their performance was assessed using the following metrics to measure reconstruction accuracy: the difference of surfaces, volumes, lengths, average Euclidean distance, Hausdorff distance, and crown-root angulations. Explainability was assessed using feature contribution analysis for PCA and t-distributed Stochastic Neighbor Embedding (t-SNE) for DeepSDF. DeepSDF showed significantly better precision in surface, volume, and Hausdorff distance metrics compared with PCA. For reconstructions, smaller DeepSDF latent code sizes yielded lower performance than larger ones. In addition, DeepSDF raised concerns about explainability. This study demonstrates the potential of the PCA and DeepSDF approaches, particularly DeepSDF, for learning and reconstructing the anatomy of upper central incisors. To foster trust and acceptance, future research should, however, focus on improving the explainability of DeepSDF models and considering a broader range of factors that influence smile design.
These high performances suggest potential clinical applications, such as assisting practitioners in future smile designs and oral rehabilitation using AI approaches.
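The PCA side of the comparison, reconstruction accuracy as a function of latent size, is easy to reproduce on synthetic data. The data below is a random low-rank stand-in, not the incisor STL dataset:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for shape data: 200 samples lying near a
# 5-dimensional subspace of R^40, plus a little noise.
basis = rng.standard_normal((5, 40))
data = rng.standard_normal((200, 5)) @ basis + 0.05 * rng.standard_normal((200, 40))

def pca_reconstruction_error(X, n_components):
    """Project onto the top principal components and measure mean squared error."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_components]          # principal directions
    recon = Xc @ V.T @ V           # project down, then back up
    return float(np.mean((Xc - recon) ** 2))

errs = [pca_reconstruction_error(data, k) for k in (1, 3, 5)]
print(errs)   # error drops as the latent size grows
```

The monotone drop in error with latent size mirrors the study's observation that smaller latent codes reconstruct less accurately, which holds for DeepSDF latents as well as for PCA components.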

  • Research Article
  • Cited: 1
  • 10.22331/q-2025-06-12-1767
Optimal number of stabilizer measurement rounds in an idling surface code patch
  • Jun 12, 2025
  • Quantum
  • Áron Márton + 1 more

Logical qubits can be protected against environmental noise by encoding them into a highly entangled state of many physical qubits and actively intervening in the dynamics with stabilizer measurements. In this work, we numerically optimize the rate of these interventions: the number of stabilizer measurement rounds for a logical qubit encoded in a surface code patch and idling for a given time. We model the environmental noise on the circuit level, including gate errors, readout errors, amplitude and phase damping. We find, qualitatively, that the optimal number of stabilizer measurement rounds is getting smaller for better qubits and getting larger for better gates or larger code sizes. We discuss the implications of our results to some of the leading architectures, superconducting qubits, and neutral atoms.

  • Research Article
  • 10.1145/3729266
Link-Time Optimization of Dynamic Casts in C++ Programs
  • Jun 10, 2025
  • Proceedings of the ACM on Programming Languages
  • Xufan Lu + 1 more

A core design principle of C++ is that users should only incur costs for features they actually use, both in terms of performance and code size. A notable exception to this rule is the run-time type information (RTTI) data, used for dynamic downcasts, exceptions, and run-time type introspection. For classes that define at least one virtual method, compilers generate RTTI data that uniquely identifies the type, including a string for the type name. In large programs with complex type inheritance hierarchies, this RTTI data can grow substantially in size. Moreover, dynamic casting algorithms are linear in the type hierarchy size, causing some programs to spend considerable time on these casts. The common workaround is to use the -fno-rtti compiler flag, which disables RTTI data generation. However, this approach has significant drawbacks, such as disabling polymorphic exceptions and dynamic casts, and requiring the flag to be applied across the entire program due to ABI changes. In this paper, we propose a new link-time optimization to mitigate both the performance and size overhead associated with dynamic casts and RTTI data. Our optimization replaces costly library calls for downcasts with short instruction sequences and eliminates unnecessary RTTI data by modifying vtables to remove RTTI slots. Our prototype, implemented in the LLVM compiler, demonstrates an average speedup of 1.4

  • Research Article
  • 10.37934/arca.39.1.110
Analysis of Lossless Compression in Huffman Coding and Lempel-Ziv-Welch (LZW)
  • Jun 4, 2025
  • Journal of Advanced Research in Computing and Applications
  • Puteri Nurul’Ain Adil Md Sabri + 2 more

Two-dimensional barcodes called Quick Response codes (QR codes) are commonly used to store data, such as URLs, contact details, and product details. As a means of information sharing, they are growing in popularity due to their ability to store large amounts of data in a small footprint. However, the increasing demand for data storage necessitates larger QR code sizes, potentially impacting readability and scanning efficiency. This study looks into how to use lossless compression techniques like Huffman Coding and Lempel-Ziv-Welch (LZW) to make QR codes store more information without losing any of their accuracy. According to the tests, LZW had an average compression ratio of 35%, and Huffman coding had 40%. This demonstrated that both approaches could considerably compress QR code sizes. Furthermore, both methods ensured reliability by maintaining 100% data integrity after decoding. The results show that adding lossless compression to QR codes makes them work better, which means they can be used to store more data in smaller spaces. This research provides a foundation for further advancements in QR code optimisation, particularly in multi-layered and multicoloured QR code systems.
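The LZW half of the comparison is compact enough to sketch end to end. The payload below is an invented URL-like string, and the ratio counts dictionary codes against input characters (ignoring code bit-width), so it is illustrative only and not the study's measurement:

```python
def lzw_compress(data: str):
    """Standard LZW: grow a dictionary of seen substrings, emit their codes."""
    table = {chr(i): i for i in range(256)}
    w, out = "", []
    for c in data:
        if w + c in table:
            w += c                       # extend the current match
        else:
            out.append(table[w])         # emit longest known prefix
            table[w + c] = len(table)    # learn the new substring
            w = c
    if w:
        out.append(table[w])
    return out

def lzw_decompress(codes):
    table = {i: chr(i) for i in range(256)}
    w = table[codes[0]]
    out = [w]
    for k in codes[1:]:
        entry = table[k] if k in table else w + w[0]   # the one tricky LZW case
        out.append(entry)
        table[len(table)] = w + entry[0]
        w = entry
    return "".join(out)

payload = "https://example.com/product?id=12345&ref=12345" * 4
codes = lzw_compress(payload)
assert lzw_decompress(codes) == payload     # lossless: decoding is exact
ratio = 1 - len(codes) / len(payload)
print(f"compression ratio: {ratio:.0%}")
```

The round-trip assertion corresponds to the study's 100% data-integrity check; repetitive payloads such as URLs compress well because LZW keeps extending dictionary entries over repeated substrings.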

  • Research Article
  • Cited: 1
  • 10.1109/tit.2025.3561119
Improved Bounds on the Size of Permutation Codes Under Kendall τ-Metric
  • Jun 1, 2025
  • IEEE Transactions on Information Theory
  • Farzad Parvaresh + 5 more


  • Research Article
  • 10.7546/crabs.2025.05.02
An upper bound on the size of a binary code with $$s$$ distances
  • May 28, 2025
  • Proceedings of the Bulgarian Academy of Sciences
  • Ivan Landjev + 1 more

Let C be a binary code of length n with distances 0 < d_1 < ⋯ < d_s ≤ n. In this note we prove a general upper bound on the size of C without any restriction on the distances d_i. The bound is asymptotically optimal.

  • Research Article
  • Cited: 7
  • 10.1038/s41586-025-09061-4
Scaling and logic in the colour code on a superconducting quantum processor.
  • May 26, 2025
  • Nature
  • N Lacroix + 99 more

Quantum error correction [1-4] is essential for bridging the gap between the error rates of physical devices and the extremely low error rates required for quantum algorithms. Recent error-correction demonstrations on superconducting processors [5-8] have focused primarily on the surface code [9], which offers a high error threshold but poses limitations for logical operations. The colour code [10] enables more efficient logic, but it requires more complex stabilizer measurements and decoding. Measuring these stabilizers in planar architectures such as superconducting qubits is challenging, and realizations of colour codes [11-19] have not addressed performance scaling with code size on any platform. Here we present a comprehensive demonstration of the colour code on a superconducting processor [8]. Scaling the code distance from three to five suppresses logical errors by a factor of Λ_{3/5} = 1.56(4). Simulations indicate this performance is below the threshold of the colour code, and the colour code may become more efficient than the surface code following modest device improvements. We test transversal Clifford gates with logical randomized benchmarking [20] and inject magic states [21], a key resource for universal computation, achieving fidelities exceeding 99% with post-selection. Finally, we teleport logical states between colour codes using lattice surgery [22]. This work establishes the colour code as a compelling research direction to realize fault-tolerant quantum computation on superconducting processors in the near future.

  • Research Article
  • 10.1002/spe.3430
Concurrency Contracts for Designing Highly Available Replicated Data Types
  • May 24, 2025
  • Software: Practice and Experience
  • Kevin De Porre + 2 more

ABSTRACT
Introduction: Distributed system programmers rely on Replicated Data Types (RDTs), which resemble sequential data types but incorporate conflict resolution strategies to guarantee convergence when conflicts occur. The semantics of RDTs depend on the underlying conflict resolution strategy, but these cannot be customized. Moreover, ensuring state convergence alone is not enough because the resulting state may break application-specific invariants. Although some approaches support application-level invariants atop existing RDTs, they do not help build the RDT in the first place. As a result, custom RDTs are implemented using ad hoc approaches, which are known to be error-prone and result in brittle systems. We previously proposed Explicitly Consistent Replicated Objects (ECROs) to address these issues, enabling programmers to build custom RDTs by augmenting sequential data types with a distributed specification. However, the specification requires a complete first-order logic formalization of the data type and its operations, which is hard to develop. Furthermore, subtle errors in the specification may result in runtime anomalies such as state divergence and broken invariants.
Methods: To tackle these problems, we combine the ECRO programming model with automated program verification. The result is EFx, a minimalist object-oriented programming language whose core consists of a contract system that simplifies the development of RDTs. EFx does not require tedious first-order logic specifications because it analyses the data type's implementation, thereby preventing runtime anomalies due to errors in the specification.
Results: We reconstruct the original portfolio of ECROs in EFx to validate our approach, consistently achieving a 2x to 4x reduction in code size. Additionally, we implement several applications, such as the RUBiS auction system, the SmallBank benchmark, a distributed voting game, and an airline reservation system.
Conclusion: Our evaluation shows that EFx simplifies the development of RDTs.

  • Research Article
  • 10.5753/jisa.2025.4996
Syntactic and Semantic Edge Interoperability
  • May 22, 2025
  • Journal of Internet Services and Applications
  • Tanzima Azad + 3 more

The Internet of Things (IoT) has transformed various sectors, from home automation to healthcare, leveraging a multitude of sensors and actuators communicating through cloud, fog, and edge networks. However, the diversity in device manufacturing and communication protocols necessitates interoperable communication interfaces. Most existing IoT interoperability solutions often rely on cloud-based centralised architectures and suffer from latency and scalability issues. This work specifically focuses on scenarios where decisions need to be made with IoT edge devices in real-time, even in situations where there might be internet disruptions, low bandwidth, or no internet connection. While typical IoT interoperability solutions support edge devices, their reliance on cloud-based architectures makes them unsuitable for mission-critical applications, environmental monitoring, or water quality monitoring, where internet connectivity cannot be guaranteed. To tackle these challenges, the project InterEdge proposed a theoretical interoperability model supporting hierarchical decentralised communication between edge devices. The aforementioned framework has four levels to handle network, syntactic, semantic, and organisational aspects of interoperability. As part of the same project, this work focuses on the implementation of the syntactic and semantic levels of the aforementioned framework. This work involves tackling the implementation challenges, particularly considering key issues related to transmission latency and memory requirements. We have created profiles for edge devices and data formats to store their essential and extra information. Using the profiles, communications can be established and maintained seamlessly among edge devices. We have conducted a comparative analysis between InterEdge implementation and three other implementations of established open standards. 
The experimental results demonstrate that the syntactic and semantic levels of the implemented interoperability solution, InterEdge, significantly outperform the existing open standards in terms of standard benchmarking metrics such as code size, memory usage, and response latency. The contribution of this paper lies in these implementation results, which provide concrete evidence of the superior performance of our proposed solution, InterEdge, thereby validating its efficacy in real-world IoT scenarios.

  • Open Access
  • Research Article
  • 10.1007/s10623-025-01634-8
More on codes for combinatorial composite DNA
  • May 15, 2025
  • Designs, Codes and Cryptography
  • Zuo Ye + 4 more

Abstract In this paper, we focus on constructing unique-decodable and list-decodable codes for the recently studied (t, e)-composite-asymmetric error-correcting codes ((t, e)-CAECCs). Let X be an m × n binary matrix in which each row has Hamming weight w. If at most t rows of X contain errors, and in each erroneous row there are at most e occurrences of 1 → 0 errors, we say that a (t, e)-composite-asymmetric error occurs in X. For general values of m, n, w, t, and e, we propose new constructions of (t, e)-CAECCs with redundancy at most (t−1)log(m) + O(1), where O(1) is independent of the code length m. In particular, this yields a class of (2, e)-CAECCs that are optimal in terms of redundancy. When m is a prime power, the redundancy can be further reduced to (t−1)log(m) − O(log(m)). To further increase the code size, we introduce a combinatorial object called a weak B_e-set. When e = w, we present an efficient encoding and decoding method for our codes. Finally, we explore potential improvements by relaxing the requirement of unique decoding to list-decoding. We show that when the list size is t! or an exponential function of t, there exist list-decodable (t, e)-CAECCs with constant redundancy. When the list size is two, we construct list-decodable (3, 2)-CAECCs with redundancy log(m) + O(1).

  • 1
  • 2
  • 3
  • 4
  • 5
  • 6
  • .
  • .
  • .
  • 10
  • 1
  • 2
  • 3
  • 4
  • 5

Popular topics

  • Latest Artificial Intelligence papers
  • Latest Nursing papers
  • Latest Psychology Research papers
  • Latest Sociology Research papers
  • Latest Business Research papers
  • Latest Marketing Research papers
  • Latest Social Research papers
  • Latest Education Research papers
  • Latest Accounting Research papers
  • Latest Mental Health papers
  • Latest Economics papers
  • Latest Education Research papers
  • Latest Climate Change Research papers
  • Latest Mathematics Research papers

Most cited papers

  • Most cited Artificial Intelligence papers
  • Most cited Nursing papers
  • Most cited Psychology Research papers
  • Most cited Sociology Research papers
  • Most cited Business Research papers
  • Most cited Marketing Research papers
  • Most cited Social Research papers
  • Most cited Education Research papers
  • Most cited Accounting Research papers
  • Most cited Mental Health papers
  • Most cited Economics papers
  • Most cited Education Research papers
  • Most cited Climate Change Research papers
  • Most cited Mathematics Research papers

Latest papers from journals

  • Scientific Reports latest papers
  • PLOS ONE latest papers
  • Journal of Clinical Oncology latest papers
  • Nature Communications latest papers
  • BMC Geriatrics latest papers
  • Science of The Total Environment latest papers
  • Medical Physics latest papers
  • Cureus latest papers
  • Cancer Research latest papers
  • Chemosphere latest papers
  • International Journal of Advanced Research in Science latest papers
  • Communication and Technology latest papers

Latest papers from institutions

  • Latest research from French National Centre for Scientific Research
  • Latest research from Chinese Academy of Sciences
  • Latest research from Harvard University
  • Latest research from University of Toronto
  • Latest research from University of Michigan
  • Latest research from University College London
  • Latest research from Stanford University
  • Latest research from The University of Tokyo
  • Latest research from Johns Hopkins University
  • Latest research from University of Washington
  • Latest research from University of Oxford
  • Latest research from University of Cambridge

Popular Collections

  • Research on Reduced Inequalities
  • Research on No Poverty
  • Research on Gender Equality
  • Research on Peace Justice & Strong Institutions
  • Research on Affordable & Clean Energy
  • Research on Quality Education
  • Research on Clean Water & Sanitation
  • Research on COVID-19
  • Research on Monkeypox
  • Research on Medical Specialties
  • Research on Climate Justice
Discovery logo
FacebookTwitterLinkedinInstagram

Download the FREE App

  • Play store Link
  • App store Link
  • Scan QR code to download FREE App

    Scan to download FREE App

  • Google PlayApp Store
FacebookTwitterTwitterInstagram
  • Universities & Institutions
  • Publishers
  • R Discovery PrimeNew
  • Ask R Discovery
  • Blog
  • Accessibility
  • Topics
  • Journals
  • Open Access Papers
  • Year-wise Publications
  • Recently published papers
  • Pre prints
  • Questions
  • FAQs
  • Contact us
Lead the way for us

Your insights are needed to transform us into a better research content provider for researchers.

Share your feedback here.

FacebookTwitterLinkedinInstagram
Cactus Communications logo

Copyright 2026 Cactus Communications. All rights reserved.
