Abstract

Gene regulatory networks can be successfully modeled as Boolean networks. A much-discussed hypothesis states that such model networks best reproduce empirical findings when they are tuned to operate at criticality, i.e. at the borderline between their ordered and disordered phases. Critical networks have been argued to provide a number of functional advantages, such as maximal dynamical range, maximal sensitivity to environmental changes, and an excellent tradeoff between stability and flexibility. Here, we study the effect of noise within the context of Boolean networks trained to learn complex tasks under supervision. We verify that quasi-critical networks are the ones that learn fastest, even under asynchronous updating rules, and that the larger the task complexity, the smaller the distance to criticality. On the other hand, when additional sources of intrinsic noise in the network states and/or in its wiring pattern are introduced, the optimally performing networks become clearly subcritical. These results suggest that, in order to compensate for inherent stochasticity, regulatory and other types of biological networks may become subcritical rather than critical, all the more so if the task to be performed has limited complexity.
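For orientation, the standard mean-field (annealed) result for random Boolean networks locates the critical line: if every node reads K inputs through a random Boolean function of bias p (the probability of outputting 1), the average sensitivity is 2Kp(1-p), which equals 1 at criticality, is below 1 in the ordered (subcritical) phase, and is above 1 in the disordered phase. The snippet below is a minimal Python sketch of such an ensemble and of this criterion, not the authors' actual model; the function names and the parameter choices (n, k, p) are illustrative.

    import numpy as np

    def build_rbn(n, k, p, rng):
        """Random Boolean network: each node reads k randomly chosen inputs
        and applies a random Boolean function with bias p (P[output = 1] = p)."""
        inputs = np.array([rng.choice(n, size=k, replace=False) for _ in range(n)])
        tables = (rng.random((n, 2 ** k)) < p).astype(int)  # one truth table per node
        return inputs, tables

    def sensitivity(k, p):
        """Annealed-approximation sensitivity: 1 at criticality,
        < 1 ordered/subcritical, > 1 disordered/chaotic."""
        return 2.0 * k * p * (1.0 - p)

    rng = np.random.default_rng(0)
    inputs, tables = build_rbn(n=100, k=2, p=0.5, rng=rng)
    print(sensitivity(k=2, p=0.5))  # 1.0 -> critical for k = 2, p = 1/2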

Highlights

  • In the ordered phase, networks are robust to perturbations, while in the disordered or chaotic phase perturbations rapidly propagate all through the network, hindering the existence of truly stable states

  • Even in the absence of explicit noise sources, the dynamics based on asynchronous updating (the one we adopt here) has a stochastic component, which could be more adequate for representing real genetic networks than synchronously updated Random Boolean networks (RBNs), as it avoids spurious effects associated with perfectly synchronous updating [43]

  • B is computed in the ensemble of networks that have learned (and not in the Erdős-Rényi ensemble), and Hamming distance measurements are restricted to the network core (see the sketch after this list)
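As an illustration of the last point, and assuming that B denotes a branching-like (one-step perturbation-spreading) parameter, it can be estimated by flipping a single bit of a random state, advancing the dynamics one step, and counting how many nodes of a designated core subset end up differing. The sketch below continues from the build_rbn sketch above; the helper names (step, estimate_B) and the synchronous single step are illustrative assumptions, not the paper's actual procedure.

    def step(state, inputs, tables):
        """One synchronous update of the Boolean network (illustrative;
        `inputs` and `tables` as returned by build_rbn above)."""
        idx = np.zeros(len(state), dtype=int)
        for j in range(inputs.shape[1]):          # pack the input bits into a table index
            idx = (idx << 1) | state[inputs[:, j]]
        return tables[np.arange(len(state)), idx]

    def estimate_B(inputs, tables, core, rng, trials=500):
        """Average one-step spread of a single-bit perturbation, with the
        Hamming distance restricted to the nodes listed in `core`."""
        n = inputs.shape[0]
        spread = 0.0
        for _ in range(trials):
            s = rng.integers(0, 2, n)
            s_pert = s.copy()
            s_pert[rng.integers(n)] ^= 1          # flip one randomly chosen node
            spread += np.sum(step(s, inputs, tables)[core]
                             != step(s_pert, inputs, tables)[core])
        return spread / trials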

Introduction

In the ordered phase, networks are robust to perturbations, while in the disordered or chaotic phase (typically obtained for densely connected networks) perturbations rapidly propagate all through the network, hindering the existence of truly stable states. Goudarzi et al. [31] considered an ensemble of RBNs able to experience “mutations” in their topological structure and employed a genetic algorithm to select those able to perform a given computational task (see Fig. 1); i.e. networks that have learnt have a larger fitness than those that have not. Under these conditions the ensemble converges to a state in which all networks operate close to criticality. Our approach differs from the previous one in three main aspects: (i) we consider asynchronous updating [42,43,44] rather than the usual deterministic one, introducing the effect of stochasticity in the update timings; (ii) both the structure and the dynamics of the networks are subjected to noise (be it intrinsic or external); and (iii) we do not use an evolutionary algorithm to search for the best possible network connectivity, but rather work in a constant-connectivity ensemble and explore how the network performance depends on the connectivity, i.e. on the network's dynamical state.
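To make points (i) and (ii) concrete, one possible implementation (continuing from the sketches above) of an asynchronous, random-sequential sweep with two additional noise sources is shown below: with probability eta_s the updated node's state is flipped (noise in the states), and with probability eta_w one of its inputs is rewired at random (noise in the wiring pattern). The update scheme and the parameter names eta_s and eta_w are assumptions made for illustration and need not coincide with the paper's exact implementation.

    def async_step(state, inputs, tables, rng, eta_s=0.0, eta_w=0.0):
        """One asynchronous sweep: nodes are updated one at a time in random
        order, each reading the current (partially updated) state.
        eta_s: probability of flipping a node's state right after its update.
        eta_w: probability of rewiring one of a node's inputs to a random node."""
        n = len(state)
        for i in rng.permutation(n):
            if eta_w > 0.0 and rng.random() < eta_w:            # wiring noise
                inputs[i, rng.integers(inputs.shape[1])] = rng.integers(n)
            idx = 0
            for j in inputs[i]:                                 # read current inputs
                idx = (idx << 1) | int(state[j])
            state[i] = int(tables[i, idx])
            if eta_s > 0.0 and rng.random() < eta_s:            # state noise
                state[i] ^= 1
        return state

With eta_s = eta_w = 0 the sweep reduces to plain asynchronous updating, whose only stochastic ingredient is the random update order, which is precisely the residual stochasticity referred to in point (i).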
