Abstract

Background

A salient purpose for studying gene regulatory networks is to derive intervention strategies, the goals being to identify potential drug targets and design gene-based therapeutic interventions. Optimal stochastic control based on the transition probability matrix of the underlying Markov chain has been studied extensively for probabilistic Boolean networks. Optimization is based on minimization of a cost function, and a key goal of control is to reduce the steady-state probability mass of undesirable network states. Owing to computational complexity, it is difficult to apply optimal control to large networks.

Results

In this paper, we propose three new greedy stationary control policies by directly investigating their effects on the long-run behavior of the network. Like the recently proposed mean-first-passage-time (MFPT) control policy, these policies do not depend on minimization of a cost function and avoid the computational burden of dynamic programming. They can be used to design stationary control policies that avoid the need for a user-defined cost function because they are based directly on long-run network behavior; they can serve as an alternative to dynamic programming algorithms when the latter are computationally prohibitive; and they can be used to predict the best control gene with reduced computational complexity, even when dynamic programming is employed to derive the final control policy. We compare the performance of these three greedy control policies and the MFPT policy on randomly generated probabilistic Boolean networks and give a preliminary example of intervening in a mammalian cell cycle network.

Conclusion

The newly proposed control policies generally perform better than the MFPT policy and, as indicated by the results on the mammalian cell cycle network, they can potentially serve as future gene therapeutic intervention strategies.
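To make the long-run objective concrete, the following sketch computes the steady-state distribution (SSD) of a small ergodic Markov chain and the probability mass it places on undesirable states. The transition matrix and the choice of undesirable states are illustrative assumptions, not values from the paper.

import numpy as np

def steady_state(P):
    """Solve pi @ P = pi with sum(pi) = 1 for an ergodic chain."""
    n = P.shape[0]
    # Stack the balance equations with the normalization constraint.
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Toy 4-state chain (two genes, states 00,01,10,11 coded as 0..3);
# states 2 and 3 are taken to be undesirable (an assumption).
P = np.array([[0.1, 0.6, 0.2, 0.1],
              [0.3, 0.3, 0.2, 0.2],
              [0.1, 0.2, 0.5, 0.2],
              [0.2, 0.2, 0.1, 0.5]])
pi = steady_state(P)
print("SSD:", pi.round(3))
print("undesirable mass:", pi[[2, 3]].sum().round(3))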

Highlights

  • A salient purpose for studying gene regulatory networks is to derive intervention strategies, the goals being to identify potential drug targets and design gene-based therapeutic interventions

  • Boolean networks (BNs), and more generally, probabilistic Boolean networks (PBNs) [1,2], have been used for finding beneficial interventions in gene regulatory networks through the study of network dynamics. Upon describing these dynamics via Markov chains, optimal stochastic control policies can be determined via dynamic programming [3,4,5] to change the long-run dynamics, which are characterized by the steady-state distribution (SSD) of the network (Markov chain), the purpose being to reduce the risk of entering aberrant states and thereby alter the extant cell behavior

  • We focus on intervention in binary PBNs in this paper, but these results extend directly to more general PBNs taking values in any finite discrete range, since the underlying models are always finite Markov chains (see the sketch below for how such a chain arises)
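As a concrete illustration of how a Boolean network with random gene perturbation induces a finite Markov chain, the sketch below builds the full transition matrix of a toy two-gene network. The regulatory rules and the perturbation probability are assumptions chosen for illustration, not taken from the paper.

import numpy as np
from itertools import product

p = 0.01          # per-gene perturbation probability (assumed)
n_genes = 2
states = list(product([0, 1], repeat=n_genes))

def next_state(x):
    # Assumed toy rules: gene0 <- x0 OR x1, gene1 <- NOT x0
    return (x[0] | x[1], 1 - x[0])

P = np.zeros((len(states), len(states)))
for i, x in enumerate(states):
    for j, z in enumerate(states):
        flips = sum(a != b for a, b in zip(x, z))
        if flips > 0:
            # Perturbation: each gene flips independently with prob p.
            P[i, j] += (p ** flips) * ((1 - p) ** (n_genes - flips))
    # With no perturbation, the network follows its Boolean rules.
    P[i, states.index(next_state(x))] += (1 - p) ** n_genes

print(P.round(4))
print("row sums:", P.sum(axis=1))  # each row sums to 1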


Summary

Introduction

A salient purpose for studying gene regulatory networks is to derive intervention strategies, the goals being to identify potential drug targets and design gene-based therapeutic interventions. Boolean networks (BNs), and more generally, probabilistic Boolean networks (PBNs) [1,2], have been used for finding beneficial interventions in gene regulatory networks through the study of network dynamics. Upon describing these dynamics via Markov chains, optimal stochastic control policies can be determined via dynamic programming [3,4,5] to change the long-run dynamics, which are characterized by the steady-state distribution (SSD) of the network (Markov chain), the purpose being to reduce the risk of entering aberrant states and thereby alter the extant cell behavior. Our purpose is to reduce the mass of the steady-state distribution corresponding to undesirable states and increase the mass corresponding to desirable states, and to do this directly without the mediating factor of a cost function.
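A minimal sketch of this SSD-shift criterion follows: for each candidate control gene, flip that gene in the undesirable states, recompute the steady-state distribution, and keep the gene that most reduces the undesirable mass. This illustrates the general idea only; it is not the paper's specific greedy algorithms, and the transition matrix, state coding, and undesirable set are assumptions.

import numpy as np

def steady_state(P):
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

def undesirable_mass(P, bad):
    return steady_state(P)[bad].sum()

def flip(state, gene, n_genes):
    # Flip one gene in the integer coding of a state (gene 0 = MSB).
    return state ^ (1 << (n_genes - 1 - gene))

n_genes, bad = 2, [2, 3]   # states 2,3 (binary 10,11) assumed undesirable
P = np.array([[0.1, 0.6, 0.2, 0.1],
              [0.3, 0.3, 0.2, 0.2],
              [0.1, 0.2, 0.5, 0.2],
              [0.2, 0.2, 0.1, 0.5]])

best = None
for g in range(n_genes):
    # Stationary policy: flip gene g in every undesirable state, which
    # remaps those rows to the flipped state's transition probabilities.
    Pc = P.copy()
    for s in bad:
        Pc[s, :] = P[flip(s, g, n_genes), :]
    shift = undesirable_mass(P, bad) - undesirable_mass(Pc, bad)
    if best is None or shift > best[1]:
        best = (g, shift)
print(f"best control gene: {best[0]}, SSD mass reduction: {best[1]:.3f}")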


