
Discrete-time Markov Chain Research Articles

Overview
1580 Articles

Published in last 50 years

Related Topics

  • Continuous-time Markov Chain
  • Continuous-time Markov

Articles published on Discrete-time Markov Chain

1594 search results
A Markov-based Optimal Maintenance Policy for Production Process

This article considers a machine maintenance problem in which the machine fails after a stochastic period, reducing its capacity to a proportion of the nominal level. In this degraded capacity state, three maintenance and repair policies are available for evaluation: continue operating at 50% capacity, perform imperfect maintenance that increases the capacity of the machine to 80%, or perform a perfect replacement that restores the machine to its initial state. By modeling the system as a discrete-time Markov chain and analyzing the transition probability matrix between the system states, the cost associated with each state can be evaluated. The objective function, representing the average cost per unit time of production, is calculated to determine the optimal maintenance policy.
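As a toy illustration of this approach, the long-run average cost of a policy can be read off the chain's stationary distribution. The three states, transition probabilities, and per-state costs below are invented for the sketch and are not the article's model:

```python
import numpy as np

# Hypothetical 3-state machine: 0 = full capacity, 1 = degraded (50%), 2 = under repair.
# All probabilities and costs are illustrative only.
P = np.array([
    [0.90, 0.10, 0.00],   # full capacity may degrade
    [0.00, 0.60, 0.40],   # degraded state may enter repair
    [0.70, 0.30, 0.00],   # repair restores full or partial capacity
])
cost = np.array([1.0, 3.0, 8.0])  # cost per unit time incurred in each state

# Stationary distribution pi solves pi P = pi with sum(pi) = 1.
A = np.vstack([P.T - np.eye(3), np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

avg_cost = float(pi @ cost)  # long-run average cost per unit time
```

Comparing `avg_cost` across the candidate policies (each inducing its own transition matrix) is what selecting the optimal maintenance policy amounts to.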

  • Journal: Journal of Reliability and Statistical Studies
  • Publication Date: May 19, 2025
  • Author: Mohammad Hossein Kargar Shouroki + 1

Performance analysis of the shared memory system in stochastic process algebra dtsdPBC

Discrete time stochastic and deterministic Petri box calculus (dtsdPBC) is a parallel process algebra with stochastic and deterministic delays. To evaluate performance in dtsdPBC, semi-Markov chains (SMCs) and (reduced) discrete time Markov chains (DTMCs/RDTMCs) are analyzed. Stochastic bisimulation equivalence is used for quotienting the transition systems, SMCs and DTMCs/RDTMCs of the process expressions while preserving stationary behaviour and residence time. Our example of a generalized shared memory system with maintenance demonstrates modeling, performance analysis, and reduction by quotienting. The generalized system takes the probabilities and weights from the standard system's specification as variables, adjusted for performance optimization.

  • Journal: International Journal of Parallel, Emergent and Distributed Systems
  • Publication Date: Apr 22, 2025
  • Author: I V Tarasyuk

Computing the Matrix G of Multi-Dimensional Markov Chains of M/G/1 Type

We consider Md-M/G/1 processes, which are irreducible discrete-time Markov chains consisting of two components. The first component is a nonnegative integer vector, while the second component indicates the state (or phase) of the external environment. The level of a state is defined by the minimum value in its first component. The matrix G of the process represents the conditional probabilities that, starting from a given state of a certain level, the Markov chain will first reach a lower level in a specific state. This study aims to develop an effective algorithm for computing matrices G for Md-M/G/1 processes.
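A minimal sketch of the natural fixed-point iteration for such a matrix G, here for an ordinary M/G/1-type chain with scalar levels truncated to three blocks (a quasi-birth-death special case), not the multi-dimensional Md-M/G/1 setting of the article; the blocks are made up:

```python
import numpy as np

# Hypothetical transition blocks: A0 (level down one), A1 (level unchanged),
# A2 (level up one); A0 + A1 + A2 is stochastic. Values are illustrative only.
A0 = np.array([[0.3, 0.1], [0.2, 0.2]])
A1 = np.array([[0.2, 0.1], [0.1, 0.2]])
A2 = np.array([[0.2, 0.1], [0.1, 0.2]])

# Fixed-point iteration for G = A0 + A1 G + A2 G^2, starting from G = 0.
G = np.zeros_like(A0)
for _ in range(500):
    G_next = A0 + A1 @ G + A2 @ (G @ G)
    if np.max(np.abs(G_next - G)) < 1e-12:
        G = G_next
        break
    G = G_next
```

For a positive recurrent chain this G is stochastic: each row gives the distribution of the phase in which the lower level is first reached.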

  • Journal: Mathematics
  • Publication Date: Apr 8, 2025
  • Author: Valeriy Naumov + 1
  • Open Access

Critical Illness, Major Surgery, and Other Hospitalizations and Active and Disabled Life Expectancy

Estimates of active and disabled life expectancy, defined as the projected number of remaining years without and with disability in essential activities of daily living, are commonly used by policymakers to forecast the functional well-being of older persons. This study aimed to determine how estimates of active and disabled life expectancy differ based on exposure to intervening illnesses and injuries (or events).

This prospective cohort study was conducted in south-central Connecticut from March 1998 to December 2021 among 754 community-living persons aged 70 years or older who were not disabled. Data were analyzed from January 25 to September 18, 2024. Exposure to intervening events, which included critical illness, major elective and nonelective surgical procedures, and hospitalization for other reasons, was assessed each month. Disability in 4 essential activities of daily living (bathing, dressing, walking, and transferring) was ascertained each month. Active and disabled life expectancy were estimated using multistate life tables under a discrete-time Markov process assumption.

The study included 754 community-living older persons who were not disabled (mean [SD] age, 78.4 [5.3] years; 487 female [64.6%]; 67 Black [8.9%], 4 Hispanic [0.5%], 682 non-Hispanic White [90.5%], and 1 other race [0.1%]). Within 5-year age increments from 70 to 90 years, active life expectancy decreased monotonically as the number of admissions for critical illness and other hospitalization increased. For example, at age 70 years, sex-adjusted active life expectancy decreased from 14.6 years (95% CI, 13.9-15.2 years) in the absence of a critical illness admission to 11.3 years (95% CI, 10.3-12.2 years), 8.1 years (95% CI, 6.3-9.9 years), and 4.0 years (95% CI, 2.6-5.7 years) in the setting of 1, 2, or 3 or more critical illness admissions, respectively. Corresponding values for other hospitalization were 19.4 years (95% CI, 18.0-20.8 years), 13.5 years (95% CI, 12.2-14.7 years), 10.0 years (95% CI, 8.9-11.2 years), and 7.0 years (95% CI, 6.1-7.9 years), respectively.

Consistent monotonic reductions were observed for sex-adjusted estimates of active life expectancy for nonelective but not elective surgical procedures as the number of admissions increased. For example, at age 70 years, estimates of active life expectancy were 13.9 years (95% CI, 13.3-14.5 years), 11.7 years (95% CI, 10.5-12.8 years), and 9.2 years (95% CI, 7.4-11.0 years) for 0, 1, and 2 or more nonelective surgical admissions, respectively; corresponding values were 13.4 years (95% CI, 12.8-14.1 years), 14.6 years (95% CI, 13.5-15.5 years), and 12.6 years (95% CI, 11.5-13.8 years) for elective surgical admissions. Sex-adjusted disabled life expectancy decreased as the number of admissions increased for critical illness and other hospitalization but not for nonelective or elective surgical procedures; for example, at age 70 years, disabled life expectancy decreased from 4.4 years (95% CI, 3.5-5.8 years) in the absence of other hospitalization to 3.4 years (95% CI, 2.8-4.1 years), 3.4 years (95% CI, 2.7-4.2 years), and 2.3 years (95% CI, 1.9-2.8 years) in the setting of 1, 2, or 3 or more other hospitalizations, respectively.

This study found that active life expectancy among community-living older persons who were not disabled was considerably diminished in the setting of serious intervening illnesses and injuries. These findings suggest that prevention and more aggressive management of these events, together with restorative interventions, may be associated with improved functional well-being among older persons.
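The discrete-time Markov assumption behind multistate life tables can be sketched with an absorbing chain: with Q the transient (active/disabled) block of a monthly transition matrix, the fundamental matrix N = (I - Q)^-1 gives the expected number of months spent in each state before death. The probabilities below are illustrative, not the study's estimates:

```python
import numpy as np

# Illustrative monthly transitions between functional states
# (0 = active, 1 = disabled, 2 = dead); numbers are made up.
P = np.array([
    [0.985, 0.010, 0.005],
    [0.050, 0.930, 0.020],
    [0.000, 0.000, 1.000],  # death is absorbing
])

# Fundamental matrix of the absorbing chain: N = (I - Q)^-1,
# where Q is the transient-to-transient block.
Q = P[:2, :2]
N = np.linalg.inv(np.eye(2) - Q)

# Starting active, N[0, 0] months are expected in the active state and
# N[0, 1] months in the disabled state before absorption.
active_le_years = N[0, 0] / 12.0
disabled_le_years = N[0, 1] / 12.0
```

Event exposure enters by re-estimating P separately for each exposure stratum and comparing the resulting expectancies.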

  • Journal: JAMA Network Open
  • Publication Date: Apr 3, 2025
  • Author: Thomas M Gill + 6

A framework for analyzing the periodically-observed time-homogeneous Poisson process

When counting process data are collected from real-world systems, the arrival of each event is often reported in periodic time units (e.g., hour, day, week, month) and the precise arrival time is lost. This periodic reporting introduces discretization error into arrival time data, fundamentally changing the resulting interarrival distribution and inhibiting comparisons to continuous-time stochastic processes (e.g., Poisson process). This article formulates the periodically-observed time-homogeneous Poisson process (PTPP) to account for discretization due to periodic observation when the underlying system is a time-homogeneous Poisson process. In contrast with the analogous Poisson process, the PTPP is not a renewal process; however, its arrivals can be modeled by an infinite-state discrete-time Markov chain with two state variables: the recorded interarrival time and the order of the event within the current observation period. The marginal limiting distribution for the first variable (i.e., the limiting interarrival distribution) is derived along with its cumulative distribution, moment generating function, first two moments, and variance. This article shows, through a simulation-based experiment, that there exists a range of discretization levels for which neither the interarrival nor counting distribution can effectively identify a periodically-observed Poisson process through goodness-of-fit testing; the PTPP model bridges this gap.
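The discretization effect the article addresses is easy to reproduce: simulate a Poisson process and record each arrival only by its observation period. This sketch (with an arbitrary rate) only illustrates the phenomenon, not the PTPP model itself:

```python
import random

# Simulate a rate-`lam` Poisson process and record each arrival only by its
# observation period (floor of the true time), as a periodic reporter would.
random.seed(42)
lam, horizon = 1.7, 100000
t, recorded = 0.0, []
while t < horizon:
    t += random.expovariate(lam)  # exponential interarrival times
    if t < horizon:
        recorded.append(int(t))   # discretized arrival time

# Recorded interarrivals are integer period differences; unlike the underlying
# exponential interarrivals, zero gaps (same-period arrivals) now occur.
gaps = [b - a for a, b in zip(recorded, recorded[1:])]
share_zero = gaps.count(0) / len(gaps)
```

A large share of zero-length recorded gaps is exactly the kind of departure from the exponential shape that defeats naive goodness-of-fit comparisons.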

  • Journal: Journal of the Operational Research Society
  • Publication Date: Mar 13, 2025
  • Author: Zachary T Hornberger + 2

Embeddings Between State and Action Based Probabilistic Logics

This article defines embeddings between state-based and action-based probabilistic logics which can be used to support probabilistic model checking. First, we slightly modify the model embeddings proposed in the literature to allow invisible computation steps and the preservation of forward and backward bisimulation relations. Next, we propose the syntax and semantics of an action-based Probabilistic Computation Tree Logic (APCTL) and an action-based PCTL* (APCTL*) interpreted over action-labeled discrete-time Markov chains (ADTMCs). We show that both these logics are strictly more expressive than the probabilistic variant of Hennessy–Milner logic (prHML). We define an embedding aldl which can be used to construct APCTL* formulae from PCTL* formulae and an embedding sldl from APCTL* formulae to PCTL* formulae. Similarly, we define the embeddings \(aldl^{\prime }\) and \(sldl^{\prime }\) from PCTL to APCTL and APCTL to PCTL, respectively. We also define the reward-based variant of APCTL (APRCTL) interpreted over action-based Markov Reward Models (AMRM), and accordingly modify the logical embeddings \(aldl^{\prime }\) and \(sldl^{\prime }\) to take into account the notion of rewards. Additionally, we show that the idea of rewards can be used to reason about the bounded until operator in PCTL and APCTL. Finally, we prove that our logical embeddings combined with the model embeddings enable one to minimize, analyze, and verify probabilistic models in one domain using state-of-the-art tools and techniques developed for the other domain. To validate the efficacy of our theoretical framework, we apply it to two case studies using the probabilistic symbolic model checker (PRISM).

  • Journal: Formal Aspects of Computing
  • Publication Date: Mar 3, 2025
  • Author: Susmoy Das + 1

A stochastic model for affect dynamics: methodological insights from heart rate variability in an illustrative case of Anorexia Nervosa.

Affect dynamics, or variations in emotional experiences over time, are linked to psychological health and well-being, with moderate emotional variations indicating good psychophysical health. Given the impact of emotional state on cardiac variability, our objective was to develop a quantitative method to measure affect dynamics for better understanding emotion temporal management in Anorexia Nervosa (AN). The study proposed an experimental and methodological approach to evaluate physiological affect dynamics in clinical settings. It tested affective transitions and temporal changes using emotional images from the International Affective Picture System (IAPS), examining physiological characteristics of a patient with AN. The methodology involved calculating a heart rate variability index, e.g., RMSSD, and using it in a Discrete Time and Discrete Space Markov chain to define, quantify, and predict emotional fluctuations over time. The patient with Anorexia Nervosa showed a high likelihood of transitioning from positive to negative emotional states, particularly at lower arousal levels. The steady state matrix indicated a tendency to remain in highly activated pleasant states, reflecting difficulties in maintaining emotional balance. Employing Markov chains provided a quantitative and insightful approach for examining affect dynamics in a patient with AN. This methodology accurately measures emotional transitions and provides a clear and interpretable framework for clinicians and patients. By leveraging Markovian indexes, mental health professionals may gain a comprehensive understanding of emotional fluctuations' patterns. Moreover, graphical representations of emotional transitions may enhance the clinician-patient dialogue, facilitating a clearer emotional and physiological profile for the implementation of personalized treatment procedures.
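A rough sketch of the two ingredients described above: an RMSSD index computed from RR intervals, and an empirical transition matrix estimated from a sequence of discrete affective states. The state coding and data are invented for illustration:

```python
import numpy as np

def rmssd(rr_ms):
    """Root mean square of successive differences of RR intervals (ms)."""
    d = np.diff(np.asarray(rr_ms, dtype=float))
    return float(np.sqrt(np.mean(d ** 2)))

def transition_matrix(states, n_states):
    """Row-normalized counts of observed one-step transitions."""
    C = np.zeros((n_states, n_states))
    for a, b in zip(states, states[1:]):
        C[a, b] += 1
    rows = C.sum(axis=1, keepdims=True)
    return np.divide(C, rows, out=np.zeros_like(C), where=rows > 0)

# Toy sequence of affective states (0 = negative, 1 = neutral, 2 = positive).
seq = [0, 1, 1, 2, 2, 1, 0, 0, 1, 2, 1, 1]
P = transition_matrix(seq, 3)
```

Row i of `P` estimates the probability of moving from affective state i to each other state, which is what the clinical transition profile summarizes.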

  • Journal: Frontiers in psychiatry
  • Publication Date: Feb 25, 2025
  • Author: Francesca Borghesi + 11

Output Feedback Optimal Control for Discrete-Time Singular Systems Driven by Stochastic Disturbances and Markov Chains

This paper delves into the exploration of the indefinite linear quadratic optimal control (LQOC) problem for discrete-time stochastic singular systems driven by discrete-time Markov chains. Initially, the conversion of the indefinite LQOC problem mentioned above for stochastic singular systems into an equivalent problem of normal stochastic systems is executed through a sequence of transformations. Following this, the paper furnishes sufficient and necessary conditions for resolving the transformed LQOC problem with indefinite matrix parameters, alongside optimal control strategies ensuring system regularity and causality, thereby establishing the solvability of the optimal controller. Additionally, conditions are derived to verify the definiteness of the transformed LQOC problem and the uniqueness of solutions for the generalized Markov jumping algebraic Riccati equation (GMJARE). The study attains optimal controls and nonnegative cost values, guaranteeing system admissibility. The results of the finite horizon are extended to the infinite horizon. Furthermore, it introduces the design of an output feedback controller using the LMI method. Finally, a demonstrative example demonstrates the validity of the main findings.

  • Journal: Mathematics
  • Publication Date: Feb 14, 2025
  • Author: Jing Xie + 3
  • Open Access

Abstract WP90: Cost of Stroke Treatment: A Comparative Analysis of Mobile Stroke and Standard Treatment

Introduction: Over the past decade, Mobile Stroke Treatment Units (MSTU) have enhanced the quality of stroke care in the United States by bringing the hospital to the patient. While MSTUs improve stroke patient outcomes compared to standard hospital care, few units are in operation: implementing an MSTU requires considerable initial and long-term investment, limiting widespread programmatic formation. We evaluated an MSTU program in Florida between August 2023 and April 2024, comparing patient-associated out-of-pocket costs under MSTU and standard stroke care.

Methods: A discrete-time Markov Chain Monte Carlo (MCMC) model was used to estimate incremental cost savings associated with MSTU treatment compared to standard hospital care. The Markov model captured treatment costs for the care of patients at two functional levels as defined by the modified Rankin Scale (mRS). Potential cost savings were determined by comparing the estimated costs incurred by the MSTU cohort to a counterfactual scenario of standard care: Emergency Medical Services (EMS) transport to the Emergency Department (ED). Since the model focused on the cost of patient care, costs included only billed ED, inpatient, and outpatient hospital care and services provided in the baseline year, with the cost of care then estimated over the next four years. All values represent 2024 dollars ($), and a 3% discount rate was applied to years two through four.

Results: The MSTU treated 59 acute stroke patients with an average age of 71.86 (SD = 13.78). Overall, 76% (N = 45) were diagnosed with ischemic stroke, 9% with intracerebral hemorrhage (ICH), and 15% with transient ischemic attack (TIA). At discharge, 54% were independent and 46% dependent. In Year 1 (baseline), the out-of-pocket cost differential between MSTU patients and standard care was estimated to be $5,306 and $6,485 for independent and dependent patients, respectively. Projected future cost differentials in Years 2 to 4 were $4,571, $3,845, and $2,817 for the independent functioning cohort and $5,586, $4,700, and $4,188 for the dependent functioning cohort.

Conclusion: These results suggest that out-of-pocket costs for MSTU patients were significantly lower than for standard care, both at baseline and over the first four years post-stroke, making MSTU acute stroke management a better economic system of care for time metrics, long-term patient outcomes, and cost effectiveness.
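The discounting step in the methods can be sketched as follows, using the independent-cohort differentials quoted above. It is an assumption here that the quoted Year 2-4 figures are nominal (pre-discount) values:

```python
# Year-1 differential is taken at face value; Years 2-4 are discounted
# at 3% per year back to Year 1. Figures are the independent-cohort
# values quoted in the abstract (2024 dollars).
differentials = [5306, 4571, 3845, 2817]  # Years 1-4
rate = 0.03

total = differentials[0] + sum(
    d / (1 + rate) ** (year - 1)  # discount Years 2-4
    for year, d in enumerate(differentials[1:], start=2)
)
```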

  • Journal: Stroke
  • Publication Date: Feb 1, 2025
  • Author: Nicolle Davis + 3

Nonlinear Monte Carlo Methods with Polynomial Runtime for Bellman Equations of Discrete Time High-Dimensional Stochastic Optimal Control Problems

Discrete time stochastic optimal control problems and Markov decision processes (MDPs), respectively, serve as fundamental models for problems that involve sequential decision making under uncertainty and as such constitute the theoretical foundation of reinforcement learning. In this article we study the numerical approximation of MDPs with infinite time horizon, finite control set, and general state spaces. Our set-up in particular covers infinite-horizon optimal stopping problems of discrete time Markov processes. A key tool to solve MDPs are Bellman equations which characterize the value functions of the MDPs and determine the optimal control strategies. By combining ideas from the full-history recursive multilevel Picard approximation method, which was recently introduced to solve certain nonlinear partial differential equations, and ideas from Q-learning we introduce a class of suitable nonlinear Monte Carlo methods and prove that the proposed methods do not suffer from the curse of dimensionality in the numerical approximation of the solutions of Bellman equations and the associated discrete time stochastic optimal control problems.
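Bellman equations of the kind studied here are classically solved, in small state spaces, by value iteration. The two-state, two-action discounted MDP below is invented for illustration and has nothing to do with the article's high-dimensional setting or its Monte Carlo method:

```python
import numpy as np

# Tiny discounted MDP: P[a, s, s'] = transition probability under action a,
# R[a, s] = expected one-step reward. Dynamics and rewards are illustrative.
P = np.array([
    [[0.8, 0.2], [0.3, 0.7]],
    [[0.5, 0.5], [0.9, 0.1]],
])
R = np.array([
    [1.0, 0.0],
    [0.5, 2.0],
])
gamma = 0.9

V = np.zeros(2)
for _ in range(1000):
    # Bellman optimality update: V(s) = max_a [R(a,s) + gamma * E[V(s')]]
    V_new = np.max(R + gamma * (P @ V), axis=0)
    if np.max(np.abs(V_new - V)) < 1e-10:
        V = V_new
        break
    V = V_new
policy = np.argmax(R + gamma * (P @ V), axis=0)  # greedy optimal controls
```

The curse of dimensionality the article targets is precisely that this tabular sweep becomes infeasible when the state space is large or continuous.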

  • Journal: Applied Mathematics & Optimization
  • Publication Date: Feb 1, 2025
  • Author: Christian Beck + 3
  • Open Access

Multi-Dimensional Markov Chains of M/G/1 Type

We consider an irreducible discrete-time Markov process with states represented as (k, i) where k is an M-dimensional vector with non-negative integer entries, and i indicates the state (phase) of the external environment. The number n of phases may be either finite or infinite. One-step transitions of the process from a state (k, i) are limited to states (n, j) such that n ≥ k−1, where 1 represents the vector of all 1s. We assume that for a vector k ≥ 1, the one-step transition probability from a state (k, i) to a state (n, j) may depend on i, j, and n − k, but not on the specific values of k and n. This process can be classified as a Markov chain of M/G/1 type, where the minimum entry of the vector n defines the level of a state (n, j). It is shown that the first passage distribution matrix of such a process, also known as the matrix G, can be expressed through a family of nonnegative square matrices of order n, which is a solution to a system of nonlinear matrix equations.

  • Journal: Mathematics
  • Publication Date: Jan 9, 2025
  • Author: Valeriy Naumov + 1
  • Open Access

A variance reduction technique for Monte Carlo simulations of electrons and ions in electric and magnetic fields

A computationally efficient variance reduction technique for Monte Carlo simulations of electrons and ions in weakly ionized gases is proposed. The transport of charged particles under electric and magnetic fields is expressed as a discrete-time Markov process in a grid. This results in a significant reduction of the computational time and statistical fluctuations of the computed velocity distribution functions (VDFs). The results are presented for a model gas and different values of the Hall parameter. The method is then applied to simulations of electrons in D2 and H+ ions in H2 using state-of-the-art cross sections and different values of externally applied electric and magnetic fields. It is shown that this approach allows one to study the combined effects of electric and magnetic fields on charged particles transport in a notably simple way, without employing a spherical harmonic expansion of the VDF.

  • Journal: Physics of Plasmas
  • Publication Date: Jan 1, 2025
  • Author: Luca Vialetto + 2
  • Open Access

MODELLING STUDENTS’ ACADEMIC PERFORMANCE AND PROGRESS: A DISCRETE-TIME MARKOV CHAIN APPROACH

Predicting students' performance has become increasingly challenging due to the large volume of data in educational databases. Academic achievement reflects learning effectiveness and serves as a key indicator of teaching quality, institutional standards, and overall student development. Higher education systems operate hierarchically, with students progressing through academic levels annually or exiting as graduates or dropouts. Understanding and evaluating student progression is vital amidst evolving educational dynamics. This study models students’ academic performance and progression using a discrete-time Markov chain approach to predict future outcomes. Data on students’ enrollment and performance for five (5) sessions were collected from the Department of Statistics, Federal University of Technology, Minna. The Markov chain model was constructed for different academic levels and their absorbing states. Key metrics, including expected time spent at each level, absorption probabilities, and graduation or withdrawal likelihoods, were estimated. The findings show that 100-level entrants have an 80.5% chance of graduating and a 19.5% risk of withdrawal, with graduation likelihoods increasing with progression, reaching 99.4% at 500-level. The forecasts from the constructed Markov chain models showed that 100-level entrants are 99.4% likely to graduate after five sessions, 200-level entrants after three sessions, and 300-level entrants after one session. The study shows that while attrition rates are higher in the early stages, students advancing beyond the 200-level exhibit strong prospects for completion. These findings underscore the university’s effective programs and support systems, particularly in retaining and advancing students beyond the critical early stages.
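The absorbing-chain machinery used in such studies can be sketched directly: with Q the level-to-level block and R the block into the absorbing states (graduate, withdraw), the fundamental matrix N = (I - Q)^-1 gives expected sessions spent at each level and B = NR the absorption probabilities. All probabilities below are illustrative, not the study's estimates:

```python
import numpy as np

# Transient states: levels 100-500; absorbing states: graduate, withdraw.
# Each row of [Q | R] sums to 1. Numbers are made up for the sketch.
Q = np.array([  # repeat the level or pass to the next one
    [0.05, 0.90, 0.00, 0.00, 0.00],
    [0.00, 0.05, 0.90, 0.00, 0.00],
    [0.00, 0.00, 0.05, 0.92, 0.00],
    [0.00, 0.00, 0.00, 0.04, 0.94],
    [0.00, 0.00, 0.00, 0.00, 0.03],
])
R = np.array([  # (graduate, withdraw) from each level
    [0.00, 0.05],
    [0.00, 0.05],
    [0.00, 0.03],
    [0.00, 0.02],
    [0.96, 0.01],
])

N = np.linalg.inv(np.eye(5) - Q)  # expected visits to each level
B = N @ R                          # absorption probabilities per entry level
time_to_absorb = N.sum(axis=1)     # expected sessions until graduation/withdrawal
grad_prob_100 = B[0, 0]            # graduation probability for a 100-level entrant
```

With these illustrative numbers the 100-level graduation probability lands near 0.84, and, as in the study, it rises monotonically for entrants at higher levels.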

  • Journal: FUDMA JOURNAL OF SCIENCES
  • Publication Date: Dec 31, 2024
  • Author: Balikis Oluwakemi Yekeen + 2

Performance Analysis of Priority Medical Events in Healthcare IoT Networks Using 3‐Dimension Discrete Time Markov Chain

A reliable medical service via the Healthcare Internet‐of‐Things (H‐IoT) network employs wireless communications to periodically deliver medical data from in‐body patients to the data center. However, the high number of patients in an ultra‐dense hospital requires the installation of numerous medical sensors, incurring congested network traffic. Data collisions and non‐priority sessions may occur in the corresponding traffic, dropping the overall performance of H‐IoT networks. Thus, it is crucial to rigorously observe the degradation of wireless performance to analyze proper settings and limitations for ultra‐dense H‐IoT. Accordingly, this paper models the operation of a healthcare IoT network using a three‐dimensional discrete‐time Markov chain (3D‐DTMC) to quantitatively analyze the performance of H‐IoT networks, including expected throughput, expected discovery time consumption, and the probability of successful transmission. In addition, this paper also analyzes the impact of different failure factor mechanisms, either cooperative failure or independent failure, on the overall performance of H‐IoT networks.

  • Journal: Internet Technology Letters
  • Publication Date: Dec 14, 2024
  • Author: Gilang Raka Rayuda Dewa

A Stochastic Model for the Impact of Climate Change on Temperature and Precipitation

The variations from year to year of the monthly average temperatures are modelled as a discrete-time Markov chain. By computing the limiting probabilities of the Markov chain, we can see the impact of climate change on these temperatures. The same type of model is proposed for the variations of the monthly amounts of precipitation. An application to Jordan is presented.
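Computing limiting probabilities of this kind is straightforward for a regular chain, since the rows of P^n converge to the limiting distribution. The three temperature-anomaly states and transition probabilities below are invented for illustration:

```python
import numpy as np

# Illustrative states for a month's average temperature relative to its
# long-run norm: 0 = below, 1 = near, 2 = above. Probabilities are made up.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.5, 0.3],
    [0.1, 0.3, 0.6],
])

# For a regular chain, every row of P^n converges to the limiting distribution.
Pn = np.linalg.matrix_power(P, 200)
limiting = Pn[0]
```

A drift of the limiting probabilities toward the "above" state, re-estimated over successive decades, is the kind of climate-change signal the model is after.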

  • Journal: WSEAS TRANSACTIONS ON ENVIRONMENT AND DEVELOPMENT
  • Publication Date: Nov 28, 2024
  • Author: Mario Lefebvre

Joint Sampling and Transmission Policies for Minimizing Cost Under Age of Information Constraints.

In this work, we consider the problem of jointly minimizing the average cost of sampling and transmitting status updates by users over a wireless channel subject to average Age of Information (AoI) constraints. Errors may occur in transmission, and a policy has to decide whether the users sample a new packet or attempt to retransmit the previously sampled packet. The cost consists of both sampling and transmission costs; sampling a new packet after a failure imposes an additional cost on the system. We formulate a stochastic optimization problem with the average cost in the objective under average AoI constraints. To solve this problem, we propose three scheduling policies: (a) a dynamic policy, which is centralized and requires full knowledge of the state of the system, and (b) two stationary randomized policies that require no knowledge of the state of the system. We utilize tools from Lyapunov optimization theory and Discrete-Time Markov Chain (DTMC) analysis to derive the dynamic policy and the randomized ones, respectively. Simulation results show the importance of providing the option to transmit an old packet in order to minimize the total average cost.
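A stationary randomized policy of the kind proposed can be sketched in a few lines: in each slot the user attempts a sample-and-transmit with a fixed probability, and the age at the monitor resets on successful delivery. The parameters are illustrative only:

```python
import random

# Slot-by-slot AoI simulation under a stationary randomized policy:
# attempt with probability p, succeed with probability ps.
random.seed(7)
p, ps, slots = 0.4, 0.8, 200000
age, total_age = 1, 0
for _ in range(slots):
    total_age += age
    if random.random() < p and random.random() < ps:
        age = 1      # fresh update delivered this slot
    else:
        age += 1     # information at the monitor grows stale
avg_aoi = total_age / slots
```

With success probability q = p * ps per slot, the average AoI here is about 1/q; tuning p trades the sampling/transmission cost against the AoI constraint.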

  • Journal: Entropy (Basel, Switzerland)
  • Publication Date: Nov 25, 2024
  • Author: Emmanouil Fountoulakis + 3
  • Open Access

Performance Analysis of Wireless Sensor Networks Using Damped Oscillation Functions for the Packet Transmission Probability

Wireless sensor networks are composed of many nodes distributed in a region of interest to monitor different environments and physical variables. In many cases, access to nodes is not easy or feasible. As such, the system lifetime is a primary design parameter to consider in the design of these networks. In this regard, for some applications, it is preferable to extend the system lifetime by actively reducing the number of packet transmissions and, thus, the number of reports. The system administrator can be aware of such reporting reduction to distinguish this final phase from a malfunction of the system or even an attack. Given this, we explore different mathematical functions that drastically reduce the number of packet transmissions when the residual energy in the system is low but still allow for an adequate number of transmissions. Indeed, in previous works, where the negative exponential distribution is used, the system reaches the point of zero transmissions extremely fast. Hence, we propose different dampening functions with different decreasing rates that present oscillations to allow for packet transmissions even at the end of the system lifetime. We compare the system performance under these mathematical functions, which, to the best of our knowledge, have never been used before, to find the most adequate transmission scheme for packet transmissions and system lifetime. We develop an analytical model based on a discrete-time Markov chain to show that a moderately decreasing function provides the best results. We also develop a discrete event simulator to validate the analytical results.
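One hypothetical shape for such a dampening function is a decaying envelope modulated by an oscillation, so the transmission probability keeps returning to nonzero values near the end of the system lifetime. The functional form and parameters below are assumptions for illustration, not the paper's functions:

```python
import math

def damped_tx_prob(energy_frac, rate=3.0, freq=6.0):
    """Illustrative damped-oscillation transmission probability.

    Decays as residual energy drops but oscillates rather than collapsing
    to zero the way a plain negative exponential quickly does. `rate` and
    `freq` are assumed shape parameters, not values from the paper.
    """
    e = max(0.0, min(1.0, energy_frac))               # clamp to [0, 1]
    base = math.exp(-rate * (1.0 - e))                # decaying envelope
    wiggle = 0.5 * (1.0 + math.cos(freq * math.pi * (1.0 - e)))
    return max(0.0, min(1.0, base * wiggle))
```

In a DTMC performance model, a function like this replaces the constant per-slot transmission probability, with the chain's state tracking residual energy.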

  • Journal: Computers
  • Publication Date: Nov 4, 2024
  • Author: Izlian Y Orea-Flores + 4
  • Open Access

Embedding and elimination for performance analysis in stochastic process algebra dtsdPBC

dtsdPBC extends the well-known algebra of parallel processes, Petri box calculus (PBC), by incorporating discrete time stochastic and deterministic delays. To analyze performance in this extended calculus, the underlying semi-Markov chains, and the related (complete) and reduced discrete time Markov chains of the process expressions are built. The semi-Markov chains are extracted using the embedding method, which constructs the embedded discrete time Markov chains and calculates the sojourn time distributions in the states. The reductions of the discrete time Markov chains are obtained through the elimination method, which removes the vanishing states (those with zero sojourn times) and recalculates the transition probabilities among the tangible states (those with positive sojourn times). We prove that the reduced semi-Markov chain coincides with the reduced discrete time Markov chain, by demonstrating that an additional embedding into the reduced semi-Markov chain is needed for the reduced embedded discrete time Markov chain to match the embedded reduced discrete time Markov chain, and by comparing the respective sojourn times.
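The elimination step can be sketched with the stochastic complement: vanishing states are removed and the transition probabilities among tangible states are recalculated as P' = P_TT + P_TV (I - P_VV)^-1 P_VT. The small chain below is invented for illustration:

```python
import numpy as np

# Illustrative DTMC with tangible states {0, 1} and one vanishing state {2}
# (zero sojourn time). Transition probabilities are made up.
P = np.array([
    [0.5, 0.2, 0.3],
    [0.1, 0.6, 0.3],
    [0.4, 0.6, 0.0],
])
T, V = [0, 1], [2]

# Eliminate the vanishing states: P' = P_TT + P_TV (I - P_VV)^-1 P_VT.
P_TT = P[np.ix_(T, T)]
P_TV = P[np.ix_(T, V)]
P_VV = P[np.ix_(V, V)]
P_VT = P[np.ix_(V, T)]
P_red = P_TT + P_TV @ np.linalg.inv(np.eye(len(V)) - P_VV) @ P_VT
```

`P_red` is again stochastic: the probability mass that flowed through the vanishing state is redistributed among the tangible states it eventually reaches.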

  • Journal: International Journal of Parallel, Emergent and Distributed Systems
  • Publication Date: Oct 24, 2024
  • Author: I V Tarasyuk

Stability and stabilization of discrete-time linear compartmental switched systems via Markov chains

  • Journal: Automatica
  • Publication Date: Aug 21, 2024
  • Authors: Zhitao Li + 2

Markovian Maintenance Planning of Ship Propulsion System Accounting for CII and System Degradation

The study’s objective is to create a method for selecting the best maintenance action for each state of ship propulsion system degradation, considering both present and future costs and the associated carbon intensity indicator (CII) rates. The method accounts for the effects of wind and wave action together with hull fouling and ageing. Standard operating models define the ship resistance in calm, wave, and wind conditions and are used to estimate the required engine power, service speed, fuel consumption, generated CO2, CII, and the resulting maintenance costs. The maintenance plan accounts for profit loss due to missed opportunities and declining efficiency over time; every maintenance choice carries total costs that include extra fuel, upkeep, and lost opportunities. The ship’s propulsion system maintenance schedule is optimized using a discrete-time Markov chain: for each state of the chain, a specific maintenance measure is selected from several alternatives. The choice of optimal maintenance is formulated as a Markov decision process that weighs both current and future costs. The developed method can forecast the propulsion system’s future states and any required maintenance activities.
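The Markov-decision-process framing — pick, per degradation state, the action that minimises current plus discounted future cost — can be sketched with a small value iteration. The degradation states, transition matrices, and cost numbers below are hypothetical placeholders for illustration, not the study's data:

```python
import numpy as np

# Hypothetical 3-state degradation model: 0 = good, 1 = degraded, 2 = failed.
# Actions: 0 = do nothing, 1 = imperfect maintenance, 2 = full replacement.
# P[a][s, s'] and cost[a][s] are illustrative numbers only.
P = {
    0: np.array([[0.80, 0.20, 0.00],
                 [0.00, 0.70, 0.30],
                 [0.00, 0.00, 1.00]]),
    1: np.array([[0.90, 0.10, 0.00],
                 [0.60, 0.40, 0.00],
                 [0.00, 0.80, 0.20]]),
    2: np.array([[0.95, 0.05, 0.00],
                 [0.95, 0.05, 0.00],
                 [0.95, 0.05, 0.00]]),
}
cost = {0: np.array([0.0, 5.0, 50.0]),
        1: np.array([2.0, 8.0, 30.0]),
        2: np.array([20.0, 20.0, 20.0])}

def value_iteration(P, cost, gamma=0.95, tol=1e-8):
    """Minimise expected discounted cost; return values and greedy policy."""
    V = np.zeros(3)
    while True:
        # Q[a, s] = immediate cost + discounted expected future cost
        Q = np.array([cost[a] + gamma * P[a] @ V for a in (0, 1, 2)])
        V_new = Q.min(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmin(axis=0)
        V = V_new

V, policy = value_iteration(P, cost)
print(policy)  # optimal action for each degradation state
```

With these numbers, replacement is chosen in the failed state because doing nothing there accrues the high failure cost forever — the same current-versus-future trade-off the abstract describes.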

  • Journal: Energies
  • Publication Date: Aug 19, 2024
  • Authors: Yordan Garbatov + 1
