Abstract

Background: One of the most important goals of the mathematical modeling of gene regulatory networks is to alter their behavior toward desirable phenotypes. Therapeutic techniques for intervention are derived in the form of stationary control policies. In large networks, deriving an optimal control policy becomes computationally burdensome. To overcome this problem, greedy intervention approaches based on the concept of the Mean First Passage Time or on the steady-state probability mass of the network states were previously proposed. Another possible approach is to use reduction mappings to compress the network and develop control policies on its reduced version. However, such mappings lead to loss of information and require an induction step when designing the control policy for the original network.

Results: In this paper, we propose a novel solution, CoD-CP, for designing intervention policies for large Boolean networks. The new method utilizes the Coefficient of Determination (CoD) and the Steady-State Distribution (SSD) of the model. The main advantage of CoD-CP over previously proposed methods is that it does not require any compression of the original model and can thus be designed directly on large networks. Simulation studies on small synthetic networks show that CoD-CP performs comparably to previously proposed greedy policies induced from the compressed versions of the networks. Furthermore, on a large 17-gene gastrointestinal cancer network, CoD-CP outperforms the other two available greedy techniques, which is precisely the kind of case for which CoD-CP was developed. Finally, our experiments show that CoD-CP is robust with respect to the attractor structure of the model.

Conclusions: The newly proposed CoD-CP provides an attractive alternative for intervening in large networks, where other available greedy methods require size reduction of the network and an extra induction step before a control policy can be designed.
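The Coefficient of Determination at the heart of CoD-CP measures how much a set of predictor genes reduces the error of predicting a target gene relative to prediction with no observations. As a point of reference, the following is a minimal sketch of the empirical binary CoD, assuming the standard definition CoD = (e0 − eX)/e0 from the genomic signal processing literature; the function name and interface are illustrative and not the paper's code.

```python
import numpy as np

def cod(X, y):
    """Empirical Coefficient of Determination (CoD) of a binary target y
    given a binary predictor matrix X of shape (n_samples, n_predictors).

    CoD = (e0 - eX) / e0, where e0 is the error of the best predictor of y
    made without observations (predict the majority class) and eX is the
    error of the optimal predictor of y based on the joint state of X.
    """
    y = np.asarray(y)
    X = np.asarray(X)
    if X.ndim == 1:
        X = X[:, None]
    n = len(y)
    # Error of the best constant predictor: predict the majority class.
    p1 = y.mean()
    e0 = min(p1, 1.0 - p1)
    if e0 == 0.0:
        return 0.0  # y is constant; convention: no improvement is possible
    # Error of the optimal predictor on X: majority vote within each
    # observed joint predictor pattern.
    buckets = {}
    for row, label in zip(map(tuple, X), y):
        buckets.setdefault(row, []).append(label)
    errors = sum(min(sum(b), len(b) - sum(b)) for b in buckets.values())
    return (e0 - errors / n) / e0
```

A CoD near 1 indicates that the predictor genes almost fully determine the target, while a CoD near 0 indicates they add essentially no predictive power over the target's marginal distribution.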

Highlights

  • A key purpose of modeling gene regulation via gene regulatory networks (GRNs) is to derive strategies to shift long-run cell behavior towards desirable phenotypes

  • To apply the Mean First Passage Time (MFPT) control policy (MFPT-CP) and the Steady-State Distribution (SSD) control policy (SSD-CP), we reduce the network via the gene reduction method introduced in [8], deleting genes consecutively until only 10 genes are left in the network (a sketch of the MFPT computation follows this list)

  • In this paper we propose a new algorithm, coefficient of determination (CoD)-CP, for designing a greedy stationary control policy that beneficially alters the dynamics of large gene regulatory networks

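As background for the MFPT-based policy referenced above, the sketch below shows how mean first passage times into a set of states can be computed from a Markov chain's transition matrix. It is a minimal illustration of the quantity MFPT-CP ranks states by, not the authors' implementation, and the function name is ours.

```python
import numpy as np

def mean_first_passage_times(P, target_states):
    """Mean first passage times into `target_states` for a Markov chain
    with row-stochastic transition matrix P.

    For states i outside the target set, the MFPT vector k satisfies
        k_i = 1 + sum_j P[i, j] * k_j,  with k_j = 0 for target states j,
    i.e. the linear system (I - Q) k = 1, where Q restricts P to the
    non-target states.
    """
    n = P.shape[0]
    target = np.zeros(n, dtype=bool)
    target[list(target_states)] = True
    rest = ~target
    Q = P[np.ix_(rest, rest)]  # transitions among non-target states only
    k_rest = np.linalg.solve(np.eye(Q.shape[0]) - Q, np.ones(Q.shape[0]))
    k = np.zeros(n)
    k[rest] = k_rest
    return k
```

Roughly, an MFPT-based greedy policy favors interventions that shorten the expected passage time into desirable states (or lengthen it into undesirable ones), which is why this quantity suffices as a ranking criterion without solving the full dynamic program.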

Introduction

A key purpose of modeling gene regulation via gene regulatory networks (GRNs) is to derive strategies to shift long-run cell behavior towards desirable phenotypes. Assuming random gene perturbation in a probabilistic Boolean network (PBN), the associated Markov chain is ergodic and possesses a steady-state distribution (SSD); from a theoretical standpoint, one can therefore always change the long-run behavior using an optimal control policy derived via dynamic programming [2,3]. In large networks, however, deriving an optimal control policy becomes computationally burdensome. To overcome this problem, greedy intervention approaches based on the concept of the Mean First Passage Time or on the steady-state probability mass of the network states were previously proposed. Another possible approach is to use reduction mappings to compress the network and develop control policies on its reduced version. The underlying model of a Boolean network with perturbation (BNp) is a finite Markov chain, and its dynamics are completely described by its 2^n × 2^n state transition matrix, P.
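To make these quantities concrete, the sketch below builds the 2^n × 2^n transition matrix of a small BNp and computes its SSD. It assumes the common perturbation model in which each gene flips independently with probability p and, when no gene flips, the deterministic Boolean update applies; the helper names are ours, not the paper's.

```python
import numpy as np
from itertools import product

def bnp_transition_matrix(update_fns, p):
    """State transition matrix of a Boolean network with perturbation (BNp).

    `update_fns` is a list of n functions, one per gene, each mapping the
    current state tuple to that gene's next value; `p` is the per-gene
    perturbation probability. With probability (1-p)^n no gene flips and
    the deterministic update applies; otherwise each gene flips
    independently with probability p.
    """
    n = len(update_fns)
    states = list(product([0, 1], repeat=n))  # all 2^n states
    index = {s: i for i, s in enumerate(states)}
    P = np.zeros((len(states), len(states)))
    for i, s in enumerate(states):
        det_next = tuple(f(s) for f in update_fns)
        for j, t in enumerate(states):
            flips = sum(a != b for a, b in zip(s, t))
            if flips > 0:  # reached via perturbation of `flips` genes
                P[i, j] = (p ** flips) * ((1 - p) ** (n - flips))
        P[i, index[det_next]] += (1 - p) ** n  # no perturbation: follow f
    return P

def steady_state(P):
    """SSD as the normalized left eigenvector of P for eigenvalue 1."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return v / v.sum()
```

For example, for a two-gene network with update functions f1(s) = s[1] and f2(s) = s[0] and p = 0.01, steady_state(bnp_transition_matrix([lambda s: s[1], lambda s: s[0]], 0.01)) returns the SSD over the four states (0,0), (0,1), (1,0), (1,1). With p > 0 every state communicates with every other, so the chain is ergodic and the SSD is unique, which is exactly the property the paragraph above relies on.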
