The expansive research field of computational intelligence combines various nature-inspired computational methodologies and draws on rigorous quantitative approaches across computer science, mathematics, physics, and the life sciences. Some of its research topics, such as artificial neural networks, fuzzy logic, evolutionary computation, and swarm intelligence, are traditional to computational intelligence. Other areas have established their relevance to the field fairly recently: embodied intelligence (Pfeifer and Bongard, 2006; Der, 2014), information theory of cognitive systems (Lungarella and Sporns, 2006; Polani et al., 2007; Ay et al., 2008), guided self-organization (Prokopenko, 2009; Der and Martius, 2012), and evolutionary game theory (Vincent and Brown, 2005).

The intelligence phenomenon continues to fascinate scientists and engineers, remaining an elusive moving target. Following numerous past observations (e.g., Hofstadter, 1985, p. 585), it can be pointed out that several attempts to construct “artificial intelligence” have turned to designing programs with discriminative power: programs that would allow computers to discern between the meaningful and the meaningless in ways similar to how humans perform this task. Interestingly, as noted by de Looze (2006) among others, such discrimination is reflected in the etymology of “intellect”, derived from the Latin “intellego” (inter-lego): to choose between, or to perceive/read (a core message) between (alternatives). In terms of computational intelligence, this ability to read between the lines, extracting some new essence, corresponds to mechanisms capable of generating computational novelty and choice, coupled with active perception, learning, prediction, and post-diction. When a robot demonstrates stable control in the presence of a priori unknown environmental perturbations, it exhibits intelligence.
When a software agent generates and learns new behaviors in a self-organizing rather than a predefined way, it seems to be curiosity-driven. When an algorithm rapidly solves a hard computational problem by efficiently exploring its search space, it appears intelligent. In short, innovation and creativity, shown within a rich space shaped by diverse “entropic” forces, appeal to us as cognitive traits (Wissner-Gross and Freer, 2013). Can this intuition be formalized within rigorous and generic computational frameworks? What are the crucial obstacles on such a path?

Intuitively, intelligent behavior is expected to be predictable and stable, yet sensitive to change. Attempts to formalize this duality date back at least to cybernetics. For example, Ashby’s well-known Law of Requisite Variety states that an active controller requires at least as much variety (number of states) as the controlled system in order to keep it stable (Ashby, 1956). In order to explain the generation of behavior and learning in machines and living systems, Ashby also linked the concepts of ultrastability and homeostatic adaptation (Di Paolo, 2000; Fernandez et al., 2014). The balance between robustness and adaptivity is often attained near “the edge of chaos” (Langton, 1990), and the corresponding phase transitions are typically detected via high sensitivities to underlying control parameters (thermodynamic variables) (Prokopenko et al., 2011). Stability in self-organizing systems can generally be related to negentropy: the entropy that the system exports (dissipates) in order to keep its own entropy low (Schrödinger, 1944). Despite significant advances in this direction, the fundamental question of whether stability, within processes developing far from equilibrium, necessitates specific entropy dynamics remains unanswered. Clarifying the connections between entropy dynamics and stable but adaptive behavior is one of the grand challenges for computational intelligence.
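In its information-theoretic form, the Law of Requisite Variety bounds the remaining uncertainty of outcomes from below: H(outcome) ≥ H(disturbance) − H(regulator). The following minimal sketch illustrates this bound (an illustrative example, not taken from the cited works, assuming uniformly distributed disturbance and regulator states):

```python
import math

def requisite_variety_bound(disturbance_states: int, regulator_states: int) -> float:
    """Lower bound (in bits) on the outcome entropy achievable by any regulator,
    per Ashby's Law of Requisite Variety:
        H(outcome) >= H(disturbance) - H(regulator).
    Illustrative assumption: states are equiprobable, so H = log2(#states)."""
    h_disturbance = math.log2(disturbance_states)
    h_regulator = math.log2(regulator_states)
    # Only variety in the regulator can absorb variety in the disturbance.
    return max(0.0, h_disturbance - h_regulator)

# A 2-state regulator facing 8 equiprobable disturbances leaves
# at least log2(8) - log2(2) = 2 bits of residual outcome uncertainty.
print(requisite_variety_bound(8, 2))  # 2.0
# Matching variety (8 states each) can, in principle, drive it to zero.
print(requisite_variety_bound(8, 8))  # 0.0
```

The `max(0.0, ...)` clamp reflects that a regulator richer than the disturbance cannot push outcome entropy below zero; it simply has variety to spare.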
Put simply, we need to know whether learning and self-organization necessitate phase transitions in certain spaces, in terms of some order parameters. Is it possible to characterize the richness of self-generated choice, intrinsic to intelligent behavior, with respect to generic thermodynamic principles?

The notion of generating and actively exploiting new behaviors that adequately match the environment highlights that to be intelligent is to be complex in creating innovations. Consequently, a mechanism producing computational novelty needs to exceed some threshold of complexity. To be truly impressive in generating endogenous innovation, it needs to be capable of universal computation, or to approach this capability in finite implementations (Casti, 1994; Markose, 2004). In other words, computational novelty may be fundamentally related to undecidability. Again, serious advances have been made in this foundational area of computer science. For example, Casti (1991) analyzed deep interconnections between dynamical systems, Turing Machines, and formal logic systems: in particular, the complex, class IV cellular automata were related to formal systems with undecidable statements (Gödel’s incompleteness theorem) and the Halting Problem. Nevertheless, the question of whether universal computation is the ultimate innovation-generator is still unresolved, offering another grand challenge: how is computational intelligence, including the mechanisms producing richness of choice and novelty, related to universal computation and undecidability?
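As a concrete handle on class IV behavior near “the edge of chaos”, Langton’s λ parameter measures the fraction of non-quiescent entries in a cellular automaton’s rule table; complex rules such as the computationally universal rule 110 sit at intermediate λ. A toy sketch for elementary (binary, radius-1) cellular automata, given as an illustration rather than as part of the cited analyses:

```python
def langton_lambda(rule: int) -> float:
    """Langton's lambda for an elementary (binary, radius-1) CA rule:
    the fraction of the 8 neighborhood configurations that map to the
    non-quiescent state 1, read off the bits of the Wolfram rule number."""
    table = [(rule >> i) & 1 for i in range(8)]  # the 8-entry rule table
    return sum(table) / 8

# Rule 110 (class IV, computationally universal) lies at intermediate lambda.
print(langton_lambda(110))  # 0.625  (5 of 8 entries are non-quiescent)
print(langton_lambda(0))    # 0.0    (everything maps to quiescence: maximal order)
print(langton_lambda(255))  # 1.0    (no quiescent transitions at all)
```

Low λ corresponds to frozen, ordered dynamics and high λ to chaotic dynamics; the intermediate regime is where Langton (1990) located the complex, potentially computation-supporting rules.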