Articles published on Coding theory
Authors
Select Authors
Journals
Select Journals
Duration
Select Duration
2079 Search results
Sort by Recency
- Research Article
- 10.1080/14643154.2026.2635251
- Mar 4, 2026
- Deafness & Education International
- Lalithavinodini Kunnath Chalil + 1 more
ABSTRACT Vocabulary acquisition remains a major challenge for students who are deaf or hard of hearing (DHH), affecting literacy development and academic success. Although recent advances in the Global South have improved receptive vocabulary, a critical transfer gap remains: larger vocabularies do not automatically produce fluent reading or comprehension. This study examined the effectiveness of a pedagogical intervention grounded in Dual Coding Theory and the Social Model of Disability, interpreted through a translanguaging lens. Using a quasi-experimental one-group pre-test-post-test design, we worked with 30 DHH students in grades 1-5 from two specialised schools in Kerala, India. Over three weeks, the intervention used Explicit Chaining and Sandwiching strategies, implemented through flashcards and blackboard instruction, to build direct non-phonological links between 11 Dolch sight words and their meanings through Indian Sign Language (ISL). Repeated-measures ANOVA showed significant improvement in vocabulary scores from pre-test (M = 2.10) to intermediate test (M = 4.77) and post-test (M = 9.43), F(0.73, 50.22) = 183.47, p < .001, with a large effect size (partial η² = .86; d ≈ 4.96). The findings suggest that visually mediated orthographic mapping may support vocabulary learning in DHH learners. The study supports bilingual-bimodal chaining strategies as an effective approach for building foundational English vocabulary and promoting academic inclusion in low-resource contexts.
- Research Article
- 10.1177/10776958251407389
- Feb 14, 2026
- Journalism & Mass Communication Educator
- Cindy Royal
This study explicates and explores the concept of “vibe coding,” or the use of AI platforms to teach relevant programming skills in a communication context. By integrating flow theory, the research presents a student-centered model for AI-augmented coding education, applying flow traits to achieve desired outcomes. Using student feedback from a mobile application development course, the study demonstrates how AI tools can enhance coding education by providing code samples, explanations, and troubleshooting assistance. The findings suggest that vibe coding can make coding more accessible and useful to a broader audience by applying the model in digital development education and practices.
- Research Article
- 10.55606/jurribah.v5i1.8330
- Feb 3, 2026
- Jurnal Riset Rumpun Ilmu Bahasa
- Gadis Artika + 3 more
Data were collected through the listening method with the recording technique and analyzed using mixed-code theory from sociolinguistic studies to reveal language usage patterns in the context of digital media. The results show that code mixing serves as a multidimensional communicative strategy. First, code mixing is used to express emotions more effectively and authentically. Second, it clarifies the meaning of psychological concepts that have no precise equivalent in Indonesian. Third, it builds closeness with an audience that has a bilingual background. Fourth, it constructs a bilingual identity for speakers that reflects the social reality of Indonesian urban society. This research contributes to the understanding of the language practices of the Indonesian bilingual community in the digital era, especially in the discussion of personal and sensitive issues such as mental health, which require a flexible and relatable communication strategy for millennial and Generation Z audiences.
- Research Article
- 10.11591/ijai.v15.i1.pp547-558
- Feb 1, 2026
- IAES International Journal of Artificial Intelligence (IJ-AI)
- Imrane Chemseddine Idrissi + 4 more
The decoding of error-correcting codes (ECCs) is a critical aspect of communication systems, yet traditional decoding techniques can often be computationally demanding or ineffective for certain codes, necessitating innovative approaches. In this study, we introduce a hybrid approach that combines machine learning and automorphism techniques to optimize the decoding process. Specifically, we train multilayer perceptron (MLP) models to learn the mapping between error syndromes and their corresponding errors. While these models exhibit robust learning capabilities, their performance sometimes does not reach 100%. To mitigate this limitation, we exploit the automorphism group of the code—a set of structure-preserving transformations—to convert the errors that the MLP struggles to decode into ones it can process more effectively. We use a minimal number p of permutations, pre-calculating and storing all possible automorphisms to ensure computational efficiency. Our experimental results reveal that this hybrid approach substantially enhances the decoding performance of the MLP model, presenting a promising avenue for decoding ECCs. Importantly, this approach is not limited to MLP models and can be applied to any machine learning model with a learning score less than 100%, broadening its applicability and impact. By integrating machine learning with traditional algebraic coding theory, we propose a new paradigm that holds the potential to revolutionize the design of decoding systems, making them more efficient and effective.
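The syndrome-to-error mapping that the abstract describes can be sketched concretely. The following is a minimal illustration of my own (the paper's codes and models are not specified here), using the standard binary Hamming(7,4) code, where the mapping the authors train an MLP to approximate is small enough to build as an exact lookup table:

```python
# Hedged sketch (illustrative, not the paper's code): classical syndrome
# decoding for the binary Hamming(7,4) code. The syndrome -> error lookup
# table built here is exactly the function an MLP would be trained to learn.
H = [[1, 0, 1, 0, 1, 0, 1],   # parity-check matrix; column i is the
     [0, 1, 1, 0, 0, 1, 1],   # binary representation of i+1, so each
     [0, 0, 0, 1, 1, 1, 1]]   # single-bit error has a unique syndrome

def syndrome(word):
    """s = H · word^T over GF(2)."""
    return tuple(sum(h * w for h, w in zip(row, word)) % 2 for row in H)

# Enumerate the correctable errors (weight <= 1), indexed by syndrome.
errors = [[0] * 7] + [[int(i == j) for j in range(7)] for i in range(7)]
table = {syndrome(e): e for e in errors}

def decode(received):
    """Correct a single-bit error by adding the table's error pattern."""
    e = table[syndrome(received)]
    return [(r + b) % 2 for r, b in zip(received, e)]

noisy = [0, 0, 0, 0, 1, 0, 0]          # all-zero codeword with bit 5 flipped
print(decode(noisy))                    # -> [0, 0, 0, 0, 0, 0, 0]
```

For longer codes the table becomes infeasible, which is the motivation for replacing it with a trained model and using code automorphisms to reshape the error patterns the model handles poorly.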
- Research Article
- 10.1016/j.radi.2025.103304
- Feb 1, 2026
- Radiography (London, England : 1995)
- V Daries + 2 more
A critique of cervical spine radiographs among diagnostic radiography students through the lens of Semantics, a dimension of Legitimation Code Theory.
- Research Article
- 10.30574/gjeta.2026.26.1.0008
- Jan 31, 2026
- Global Journal of Engineering and Technology Advances
- Elrhouat Oussama
The shift to online learning has presented a formidable challenge for experimental science education, especially the quality of lab-based instruction. Standard videos place students in a passive role and weaken inquiry and hands-on reasoning. This article studies interactive videos as a way to support active experimental learning online. Interactive elements include short tasks, clickable notes, and instant feedback during viewing. The study draws on Cognitive Load Theory, Dual Coding Theory, and Constructivism.
- It introduces the DIVE Model (Designing Interactive Videos for Experimentation).
- The model follows six clear phases for design, production, and classroom use.
- It turns video viewing into a guided exploration that reflects real scientific practice.
- The DIVE Model supports strong experimental learning in online settings at scale.
- Research Article
- 10.1080/1463922x.2026.2622021
- Jan 28, 2026
- Theoretical Issues in Ergonomics Science
- Eloise Minder + 4 more
Virtual Reality (VR) has the potential to support high-quality social interactions. The literature explores the notions of Social Presence and Co-presence, which we propose to extend with the definition of the Koinos concept, naming the perceived quality of social interactions in a mediated environment. This definition derives from Predictive Coding Theory and Qualia Theory, which take into account an individual's own subjective feelings. We then propose the Koinos model, which represents the dynamics of social interactions in VR and can be used as a tool to predict the Koinos generated by a specific VR configuration. The model reflects the mediated message transmitted during social interactions in VR, taking into account the common ground between interlocutors.
- Research Article
- 10.4108/eetpht.11.11044
- Jan 13, 2026
- EAI Endorsed Transactions on Pervasive Health and Technology
- Caihong He + 3 more
INTRODUCTION: Insomnia Disorder is a global public health problem. Cognitive behavioral therapy for insomnia (CBT-I), as the gold standard for combating insomnia, still has limitations such as low patient adherence and inability to directly intervene in physiological hyperarousal. Traditional sensory interventions lack precise, mechanism-driven designs, making it difficult to effectively suppress this hyperarousal. This study aims to address these limitations by developing a non-pharmacological intervention based on predictive coding theory (PCT) and multi-sensory integration. OBJECTIVES: This study developed a Cross-Modal Digital Health System (CMDH-I) that combines CBT-I principles with personalized, synchronized auditory, visual, and vibro-tactile stimulation, and dynamically modulates the intervention process through a closed-loop control mechanism driven by real-time heart rate variability (HRV) biofeedback. The primary objectives include evaluating the clinical efficacy of CMDH-I combined with CBT-I on objective sleep latency (SL) and subjective sleep quality (PSQI). Furthermore, the study aims to explore the underlying neurophysiological mechanisms, particularly the regulatory role of heart rate variability (HRV-RMSSD) and changes in electroencephalogram (EEG) power spectral density. METHODS: A 6-week, double-blind, three-arm randomized controlled trial (RCT) was conducted on 90 patients with primary insomnia. Participants were randomly assigned to one of three groups: (1) CBT-I + True CMDH-I; (2) CBT-I + Sham CMDH-I (stimulus asynchrony); and (3) CBT-I standard control group. The primary outcomes were objective sleep latency (SL) and subjective sleep quality (PSQI). Secondary outcomes included neurophysiological parameters: electroencephalogram power spectral density (δ/σ wave) and heart rate variability (HRV-RMSSD). 
RESULTS: The reduction in SL and PSQI scores in the True CMDH-I group was significantly greater than that in the other two groups, exceeding the minimal clinically important difference (MCID) (p < 0.001). More importantly, mediation analysis showed that the improvement in HRV-RMSSD was one of the main mechanisms by which CMDH-I improved sleep quality, accounting for 58.1% of the total effect. In addition, the increase in frontal-lobe EEG delta-wave power was closely associated with the increase in HRV-RMSSD (r = 0.68), which supports the vagus-thalamus-cortex pathway hypothesis proposed in this study. CONCLUSION: CMDH-I is a closed-loop, cross-modal digital health system based on PCT. As a non-pharmacological intervention for sleep disorders, this system outperforms standard CBT-I in clinical efficacy. The results provide empirical evidence that the system's therapeutic effect is achieved through enhanced parasympathetic activity (increased HRV-RMSSD), validating its precise neurophysiological mechanism of sleep regulation. This study establishes a clearly defined digital treatment system, providing objective physiological indicators for personalized sleep medicine and representing a significant advancement.
- Research Article
- 10.3269/1970-5492.2018.13.33
- Jan 12, 2026
- EuroMediterranean Biomedical Journal
- Giuseppe Giglia + 1 more
Perception is a complex neural mechanism that requires organization and interpretation of input meaning, and it has been a key topic in medicine, neuroscience, and philosophy for centuries. Gestalt psychology proposed that the underlying mechanism is a constructive process that depends on both the input of stimuli and the sensory-motor state of the agent. The Bayesian Brain hypothesis reframed it as probabilistic inference over prior beliefs, which are revised to accommodate new information. Predictive Coding Theory proposes that this process is implemented through a top-down cascade of cortical predictions of lower-level input and the concurrent propagation of a bottom-up prediction error aimed at revising higher-level expectations. The 'Active Inference' theory explains both perception and action, generalising the prediction-error minimisation process. In this focused review we provide a historical overview of the topic and an intuitive approach to the new computational models.
- Research Article
- 10.64898/2026.01.07.698143
- Jan 8, 2026
- bioRxiv
- Darcy S Peterka + 8 more
Context modulates sensory processing in the cerebral cortex by suppressing responses to expected stimuli and enhancing responses to unexpected ones. Recent proposals argue that early sensory areas such as primary visual cortex (V1) are shaped only by local context, including recent stimulus history, whereas modulation by global context, such as learned temporal structure, is present exclusively in higher cortical areas. This view is incompatible with predictive coding theories. To directly dissociate local and global contextual influences, we used a global/local oddball paradigm in which mice viewed five-item sequences. Across conditions, sequence structure was held constant while stimulus identity and predictability were selectively manipulated, allowing the isolation of response modulations due to local deviance, global expectation, and stimulus repetition independently. In the canonical sequence (AAAA–B), B is locally deviant but globally predictable. Using two-photon calcium imaging and LFP recordings in mouse V1, we found that global predictability abolished context modulation: responses to B were equivalent to those evoked by a random sequence control (e.g., CDEAB). This effect emerged rapidly, after only <10 sequence repetitions, demonstrating fast learning of global structure. When the stimulus was globally deviant, either by replacing B with a novel stimulus (AAAA–C) or by presenting B unpredictably in a standard oddball paradigm, V1 exhibited robust response enhancement. These effects required feedback from anterior cingulate area (ACa), establishing a causal role for higher cortical circuits in conveying global predictions to V1. Strikingly, when an additional A replaced B (AAAA–A), responses were strongly suppressed despite global deviance, indicating that stimulus-specific adaptation may constrain the expression of global prediction error signals in early sensory cortex.
- Research Article
- 10.64898/2026.01.06.697993
- Jan 6, 2026
- bioRxiv
- Adam Hockley + 3 more
SUMMARY: Context modulates neural processing of sensory stimuli. Neural responses are suppressed to stimuli that are typical in their context and augmented to stimuli that deviate from their context. The latter has been conceptualized as a “prediction error”, which can serve to enhance the salience, direct attention, or support learning about behaviorally relevant events. Predictive coding theories posit that prediction errors act to signal the difference between internal predictions and actual sensory input, yet most paradigms simultaneously alter both predictions and input, so cannot test for a true difference signal. Increased neural responses to deviants could, instead, encode generalized surprise or augmented bottom-up signaling. Here we compare neural responses to auditory stimuli across oddball paradigm variants. We found that responses of putative excitatory neurons in primary auditory cortex (A1) to auditory deviants contain frequency change information and a memory trace of contextual information. Interestingly, in a fixed-deviant oddball paradigm where predictions are altered but deviant input remains constant, neural response patterns encoded standard-to-deviant frequency difference. These results support the interpretation that A1 deviance detection can be interpreted as a sensory prediction error that represents the difference between prediction and sensory input, a corollary of the predictive coding framework.
- Research Article
- 10.1111/scs.70176
- Jan 6, 2026
- Scandinavian Journal of Caring Sciences
- Anette Lykke Hindhede + 2 more
ABSTRACT Aims and Objectives: This study aimed to investigate how supervision models may cultivate or constrain different ways of knowing and learning in primary care. Methodological Design and Justification: The research employed a qualitative methodological design, grounded in Legitimation Code Theory, to gain an in-depth understanding of the dynamics at play within various supervision models. It aligns with the QRSR guidelines. Ethical Issues and Approval: Ethical considerations were thoroughly addressed, and approval was obtained prior to initiating the study, ensuring participant confidentiality and informed consent. Research Methods, Instruments, and Interventions: The study utilised qualitative interviews as the primary research method, conducting 18 interviews with a diverse range of healthcare professionals, including leaders, nurses, nursing assistants, physiotherapists, and both internal and external supervision consultants. Outcome Measures: The analysis focused on identifying how different supervision models influenced reflective practice and shaped the participants' perceptions regarding the effectiveness and utility of these models. Results: Findings illustrated the complex interplay of cultivated, social, and trained gazes within healthcare settings, highlighting how different forms of legitimation shape what counts as meaningful understanding and reflective practice. Study Limitations: While the study provides valuable insights, it is important to acknowledge limitations related to the heterogeneous nature of the material across the interventions. Conclusions: The concept of gaze not only elucidates the presuppositions underlying different supervision models but also shows how the usefulness of different supervision models is legitimated within practice.
- Research Article
- 10.3934/amc.2025032
- Jan 1, 2026
- Advances in Mathematics of Communications
- Steven T Dougherty
The fundamental alphabets for codes over rings are finite Frobenius rings, largely because their generating character produces MacWilliams relations. We prove that the following are equivalent statements for a finite commutative ring $ R $: (1) $ R $ is Frobenius; (2) $ | {\mathfrak{a}}| | {\mathfrak{a}}^\perp| = |R| $ for all ideals $ {\mathfrak{a}} $ in $ R $; and (3) $ ( {\mathfrak{a}}^\perp)^\perp = {\mathfrak{a}} $ for all ideals $ {\mathfrak{a}} $ in $ R $ and give an algorithmic way of producing the generating character.
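The two ideal-theoretic conditions in the abstract can be checked by hand in a small ring. The following is an illustration of my own (the paper concerns general finite commutative rings): Z_12 is a finite Frobenius ring, so every ideal should satisfy both the cardinality identity and the double-annihilator identity.

```python
# Small numerical check (my illustration, not from the paper): Z_12 is
# Frobenius, so every ideal a should satisfy |a| * |a^⊥| = |R| and
# (a^⊥)^⊥ = a, where a^⊥ is the annihilator ideal.
n = 12

def ideal(g):
    """The principal ideal (g) in Z_n; every ideal of Z_n has this form."""
    return {(g * k) % n for k in range(n)}

def ann(I):
    """Annihilator a^⊥ = {r : r·a ≡ 0 (mod n) for all a in I}."""
    return {r for r in range(n) if all((r * a) % n == 0 for a in I)}

for g in [0, 1, 2, 3, 4, 6]:           # generators of all ideals of Z_12
    a = ideal(g)
    assert len(a) * len(ann(a)) == n   # |a| · |a^⊥| = |R|
    assert ann(ann(a)) == a            # double annihilator recovers a
print("both Frobenius identities hold for every ideal of Z_12")
```

For example, the ideal (2) has 6 elements and its annihilator (6) has 2, with 6 · 2 = 12 = |R|; a non-Frobenius ring would violate these identities for some ideal.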
- Research Article
- 10.1016/j.neures.2025.104990
- Jan 1, 2026
- Neuroscience research
- Yukiko Matsumoto + 2 more
Schizophrenia is characterized by profound semantic impairments that manifest as disrupted language and thought. We provide empirical support for the hypothesis that predictive coding forms a unifying framework for understanding these deficits by reinforcing theoretical ideas with quantitative neuroimaging evidence. According to predictive coding theory, the brain continuously generates predictions about incoming information, and prediction errors drive model updates when expectations diverge from sensory input. This review synthesizes evidence from cognitive neuroscience, computational psychiatry, and neurolinguistics to demonstrate how aberrant prediction error signaling disrupts hierarchical semantic processing in schizophrenia. Behavioral studies have revealed atypical semantic processing in priming and fluency tasks. Electrophysiological studies have shown altered neural responses to semantic incongruence, particularly reduced N400 effects. Furthermore, we have used voxel-wise modeling, graph theory, and topological analysis to demonstrate fundamentally disorganized semantic networks in schizophrenia, characterized by reduced small-worldness, excessive homogenization, and diminished representational variability. These converging findings are consistent with a neurocomputational account wherein semantic deficits reflect disrupted predictive mechanisms. This theoretical framework suggests that miscalibrated precision weighting of prediction errors leads to either over-activation of irrelevant semantic associations or impoverished semantic processing. This perspective offers insights into schizophrenia pathophysiology and guidance for targeted interventions to restore predictive coding function.
- Research Article
- 10.1109/mbits.2025.3633197
- Jan 1, 2026
- IEEE BITS the Information Theory Magazine
- Joshua Brakensiek + 1 more
In the modern era of large-scale computing systems, a crucial use of error correcting codes is to judiciously introduce redundancy to ensure recoverability from failure. To get the most out of every byte, practitioners and theorists have introduced the framework of maximal recoverability (MR) to study optimal error-correcting codes in various architectures. In this survey, we dive into the study of two families of MR codes: MR locally recoverable codes (LRCs) (also known as partial MDS codes) and grid codes (GCs). For both families of codes, we discuss the primary recoverability guarantees of each class of codes as well as what is known concerning optimal constructions of each class of codes. Along the way, we discuss many surprising connections between MR codes and broader questions in computer science and mathematics. For MR LRCs, the use of skew polynomial codes has unified many previous constructions. For MR GCs, the theory of higher order MDS codes shows that MR GCs can be used to construct optimal list-decodable codes. Furthermore, the optimally recoverable patterns of MR GCs have close ties to long-standing problems on the structural rigidity of graphs.
- Research Article
- 10.17507/jltr.1701.28
- Jan 1, 2026
- Journal of Language Teaching and Research
- Karima Almazroui
Decoding breakdowns are often misinterpreted as deficits rather than framed as opportunities for strategic growth. The challenge is acute in opaque orthographies such as Arabic, where diglossia, morphological density, and diacritic omission impose heavy cognitive load. This study introduces The Metacognitive Reading Keys, a low-tech scaffold that externalizes expert-reader strategies into six student-owned prompts (Sound It Out, Look at the Picture, Imagine It, Break It Apart, Reread the Sentence, Skip and Return Later). Unlike scripted phonics programs, the Keys integrate phonological decoding, morphological parsing, imagery, contextual repair, and emotional regulation into a portable, recursive system privileging learner agency. Grounded in Cognitive Load Theory, Dual Coding Theory, and the Cognitive-Strategic Literacy Development framework, the Keys were refined through a five-year design-based research study in U.S. and UAE classrooms (N = 326, Grades 1–3). Mixed-methods analyses showed significant gains in decoding accuracy (+14.3%, p < .001, d = 0.65–0.88), with over 70% of students applying strategies independently within four weeks. Teachers reported less passivity, more self-correction, and greater engagement, particularly in multilingual, low-resource contexts. Findings demonstrate that decoding resilience is teachable, and that culturally responsive, low-cost scaffolds can bridge theory and practice while reframing literacy equity as cognitive dignity.
- Research Article
- 10.1007/s10936-025-10190-0
- Jan 1, 2026
- Journal of Psycholinguistic Research
- Wilma Bucci + 1 more
This issue is based on a conference that was held in July 2023 and that focused on new advances in the theory and measures of the Referential Process (RP). The concept of the RP was developed in the context of the Multiple Code Theory (MCT) and concerns the complex process by which people are able to connect all manner of experiences, including bodily and emotional experience, to words, in both spoken and written language. A previous issue of this journal, published in 2021, outlined the theory of the referential process, and included empirical and clinical applications. The research studies featured measures of language style based on the Discourse Attributes Analysis Program (DAAP), a computerized text analysis system that was developed by Bernard Maskit in the context of MCT. The 2023 conference covered new developments in the DAAP research program and also featured Dr. Maskit’s presentation of the new TDAAP (Time-based DAAP) which analyzes spoken language on the basis of time flow rather than word count. This new program takes a major step towards examining the role of sensory and bodily experience in the verbal communication of experience. The conference was sponsored by the Pacella Research Center of the New York Psychoanalytic Society and Institute. It included both in-person and online participants across the U.S. and in Canada, Israel, and Italy, and represented the combined efforts of researchers and clinicians. At the conclusion of the conference, the editor of this journal, Dr. Rafael Javier, invited submission of the papers presented at the meetings for a special issue of the journal. In this introduction, we will briefly outline the theory that provides the conceptual framework for our projects. The next section will provide a brief introduction to the papers that are included in this issue.
- Research Article
- 10.53469/jrve.2025.7(12).06
- Dec 30, 2025
- Journal of Research in Vocational Education
- Mai Hathal Al-Zuriqat + 1 more
This study investigates the intersection of algebraic geometry and coding theory, specifically focusing on the application of algebraic curves in the advancement of error-correcting codes. Algebraic curves, as mathematical objects, offer profound implications in the design and analysis of error-correcting codes, providing robust solutions to the challenges of data transmission and storage. This paper delves into the theoretical foundations of algebraic curves, their role in constructing powerful error-correcting codes, and the practical applications of these codes in various technological domains.
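The connection between curves and codes can be illustrated in a few lines. The following sketch uses parameters of my own choosing, not from the paper: the simplest codes from algebraic curves come from the projective line (genus 0) and are Reed–Solomon codes, where a message is a polynomial of degree less than k and the codeword is its evaluation at distinct points of a finite field.

```python
# Hedged sketch (illustrative parameters, not from the paper): a
# Reed–Solomon encoder, the genus-0 case of a code from an algebraic curve.
p, k = 7, 3                        # field GF(7), code dimension k = 3

def encode(message):
    """Evaluate the message polynomial at every point of GF(p)."""
    assert len(message) == k
    return [sum(c * pow(x, i, p) for i, c in enumerate(message)) % p
            for x in range(p)]

codeword = encode([2, 5, 1])       # the polynomial 2 + 5x + x^2
print(codeword)                    # -> [2, 1, 2, 5, 3, 3, 5]
# Two distinct degree-<3 polynomials agree on at most 2 points, so distinct
# codewords differ in at least n - k + 1 = 5 positions: this code corrects
# up to 2 symbol errors. Codes from higher-genus curves extend this idea,
# trading more evaluation points for a controlled loss in distance.
```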
- Research Article
- 10.59257/turkbilig.1553800
- Dec 30, 2025
- Türkbilig
- Mehmet Akkuş
This paper explores the intersection between Johanson’s Code Copying framework and key principles of Cognitive Linguistics, with a focus on language contact in multilingual contexts. Johanson’s theory of code copying provides a comprehensive model for understanding how languages influence one another, emphasizing the selective and global copies of linguistic elements between a model and basic language. Cognitive Linguistics, which posits that language is deeply rooted in general cognitive abilities, offers insights into the mental processes guiding these transfers. This study highlights how both frameworks view language as a dynamic system shaped by cognitive factors like perception, memory, and usage. Johanson’s concept of selective copying aligns with Cognitive Linguistics' emphasis on meaning and conceptual structures, where speakers consciously or subconsciously adopt features that are cognitively compatible with their own language. The paper also explores the role of frequency, demonstrating how frequent exposure to linguistic elements in contact situations increases their likelihood of adoption, a principle central to both Johanson’s frequential copying and the usage-based approach in Cognitive Linguistics. By examining these intersections, the paper offers a deeper understanding of how cognitive processes underlie language change and adaptation in multilingual settings. Johanson’s work, particularly in relation to Turkic languages, serves as a foundational resource for analyzing linguistic convergence, while Cognitive Linguistics provides a broader cognitive perspective, bridging the gap between language contact phenomena and cognitive science.
- Research Article
- 10.62843/jrsr/2026.5a158
- Dec 30, 2025
- Journal of Regional Studies Review
- Hifsa Naveed + 1 more
Visuals and infographics provide pictorial representations of words used in dictionaries, making it easier for learners to learn words effectively. The aim of the study is to explore what types of visuals and infographics are used in English dictionaries. A qualitative research approach was employed. The Oxford English Picture Dictionary was selected through purposive sampling, as this dictionary contains many pictures. Dual Coding Theory was employed as a theoretical framework to explain how images and words work together to improve understanding of words. Thematic analysis was used to analyze the data. The findings revealed that images make meanings clear and that the percentage of pictures used for nouns in the Oxford English Picture Dictionary is higher than that for verbs and adjectives. The study suggests that lexicographers should include more visuals for verbs and adjectives, as adjectives are harder to understand.