Abstract

About 40 years ago, seminal work by S. Kauffman (1969) and R. Thomas (1973) paved the way to the establishment of a coarse-grained, “logical” modelling of gene regulatory networks. This gave rise to an increasingly active field of research, ranging from theoretical studies to models of networks controlling a variety of cellular processes (Bornholdt 2008; Glass and Siegelmann 2010). Briefly, in these models, genes (or regulatory components) are assigned discrete values that account for their functional levels of expression (or activity). A regulatory function defines the evolution of each gene's level, depending on the levels of its regulators. This abstracted representation of molecular mechanisms is very convenient for handling large networks for which precise kinetic data are lacking. Within the logical framework, one can distinguish several approaches that differ in how a model and its behaviour are defined. In random Boolean networks, first introduced by S. Kauffman, the components are randomly assigned a set of regulators and a regulatory function that drives their evolution, depending on these regulators (Kauffman 1969). In threshold Boolean networks, the function is derived from a given regulatory structure as a sum of the input signals, possibly compared against a threshold (Bornholdt 2008; Li et al. 2004). In the generalised logical approach introduced by R. Thomas, the logical functions can be arbitrary, but are constrained by the regulatory graph. Whenever necessary or useful, the formalism supports multi-valued variables.
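To make the modelling styles mentioned above concrete, the following is a minimal sketch, not taken from the paper, of a synchronous Boolean network update in Python. The three-gene network, its regulatory rules, and the weights in the threshold-style rule are all illustrative assumptions chosen only to show how per-gene logical functions and signed-sum threshold functions can be evaluated.

```python
from typing import Callable, Dict

State = Dict[str, int]  # gene name -> Boolean level (0 or 1)

# Hypothetical regulatory functions: C represses A, A activates B, C needs A and B.
rules: Dict[str, Callable[[State], int]] = {
    "A": lambda s: 1 - s["C"],              # A is on unless repressed by C
    "B": lambda s: s["A"],                  # B follows its activator A
    "C": lambda s: int(s["A"] and s["B"]),  # C requires both A and B
}

def step(state: State) -> State:
    """Synchronous update: every gene evaluates its rule on the same current state."""
    return {gene: rule(state) for gene, rule in rules.items()}

def threshold_rule(state: State, weights: Dict[str, int], threshold: int = 0) -> int:
    """Threshold-style rule: the gene turns on if the signed sum of its inputs exceeds a threshold."""
    return int(sum(w * state[g] for g, w in weights.items()) > threshold)

if __name__ == "__main__":
    s = {"A": 1, "B": 0, "C": 0}
    for _ in range(4):
        print(s)
        s = step(s)
    # The same kind of target expressed as a threshold rule: activated by A (+1), repressed by C (-1).
    print(threshold_rule({"A": 1, "B": 0, "C": 0}, {"A": 1, "C": -1}))
```

In the generalised logical approach, the per-gene rules above could instead take any form consistent with the regulatory graph, and the 0/1 levels could be replaced by multi-valued ones where needed.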
