Neuromorphic complexity theory: computational models and complexity classes for spiking neural networks
Abstract: This paper extends prior work on the computational power and complexity of spiking neural networks (SNNs) by introducing a refined theoretical framework. We present three models for neuromorphic computation — spike-input, pre-processing, and interactive — and formalise how these models relate to complexity classes defined by time, space, and energy constraints. We show that spike waves can be reduced to singleton spike trains with only linear overhead and establish structural results connecting spiking machines to Boolean circuits and non-uniform complexity classes. The paper concludes with completeness results for neuromorphic complexity classes and outlines open problems regarding energy complexity and resource trade-offs.
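As an illustration of the spike-wave reduction mentioned in the abstract, the sketch below serializes a wave of simultaneous spikes into a train of single spikes over consecutive time steps, which costs only linearly many extra steps in the number of spikes. This is a minimal, hypothetical reading of the reduction, not the paper's formal construction; the encoding and function names are assumptions.

```python
# Hypothetical illustration: turning a "spike wave" (several neurons spiking
# in the same time step) into a singleton spike train (at most one spike per
# time step). Encoding and names are assumptions, not the paper's construction.

def serialize_spike_wave(wave):
    """wave: set of neuron indices that fire simultaneously.
    Returns a list of time steps, each containing at most one spike."""
    return [{neuron} for neuron in sorted(wave)]

def serialize_run(waves):
    """waves: list of spike waves, one per original time step.
    The serialized run has at most sum(len(w) for w in waves) non-empty steps,
    i.e. the overhead is linear in the total number of spikes."""
    train = []
    for wave in waves:
        train.extend(serialize_spike_wave(wave) or [set()])  # keep empty steps
    return train

if __name__ == "__main__":
    run = [{0, 2, 3}, set(), {1}]
    print(serialize_run(run))  # [{0}, {2}, {3}, set(), {1}]
```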
- Conference Article
18
- 10.1145/3381755.3381760
- Mar 17, 2020
The last decade has seen the rise of neuromorphic architectures based on artificial spiking neural networks, such as the SpiNNaker, TrueNorth, and Loihi systems. The massive parallelism and co-location of computation and memory in these architectures potentially allow for energy usage that is orders of magnitude lower than that of traditional von Neumann architectures. However, to date a comparison with more traditional computational architectures (particularly with respect to energy usage) is hampered by the lack of a formal machine model and a computational complexity theory for neuromorphic computation. In this paper we take the first steps towards such a theory. We introduce spiking neural networks as a machine model in which, in contrast to the familiar Turing machine, information and the manipulation thereof are co-located in the machine. We introduce canonical problems, define hierarchies of complexity classes, and provide some first completeness results.
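To make the machine-model idea concrete, the sketch below steps a small network of discrete-time threshold neurons, where the weight matrix acts both as memory and as the object being manipulated. It is a minimal illustration under assumed conventions (binary spikes, synchronous updates, leaky integration), not the formal model defined in the paper.

```python
import numpy as np

# Minimal, hypothetical sketch of a discrete-time spiking network step:
# binary spikes, synchronous update, fixed threshold. The conventions are
# assumptions for illustration, not the paper's formal machine model.

def step(weights, spikes, potentials, threshold=1.0, leak=0.9):
    """Advance the network by one time step.

    weights:    (n, n) synaptic weight matrix (the machine's 'memory')
    spikes:     (n,) binary vector of spikes emitted at the previous step
    potentials: (n,) membrane potentials carried over between steps
    """
    potentials = leak * potentials + weights @ spikes      # integrate input
    new_spikes = (potentials >= threshold).astype(float)   # fire on threshold
    potentials = potentials * (1.0 - new_spikes)            # reset after a spike
    return new_spikes, potentials

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 5
    w = rng.normal(scale=0.8, size=(n, n))
    s = np.zeros(n); s[0] = 1.0          # inject a single input spike
    v = np.zeros(n)
    for t in range(4):
        s, v = step(w, s, v)
        print(t, s)
```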
- Book Chapter
4
- 10.1007/978-3-030-13435-8_10
- Jan 1, 2019
The state complexity of a finite(-state) automaton intuitively measures the size of the description of the automaton. Sakoda and Sipser [STOC 1978, pp. 275–286] were concerned with nonuniform families of finite automata, and they discussed the behaviors of nonuniform complexity classes defined by families of such finite automata having polynomial-size state complexity. In a similar fashion, we introduce nonuniform state complexity classes using families of quantum finite automata. Our primary concern is one-way quantum finite automata empowered by garbage tapes. We show inclusion and separation relationships among nonuniform state complexity classes of various one-way finite automata, including deterministic, nondeterministic, probabilistic, and quantum finite automata of polynomial size. For two-way quantum finite automata equipped with garbage tapes, we discover a close relationship between the nonuniform state complexity of such a polynomial-size quantum finite automata family and the parameterized complexity class induced by quantum logarithmic-space computation assisted by polynomial-size advice.
- Research Article
7
- 10.1007/s11023-020-09545-4
- Nov 9, 2020
- Minds and Machines
In computational complexity theory, decision problems are divided into complexity classes based on the amount of computational resources it takes for algorithms to solve them. In theoretical computer science, it is commonly accepted that only problems in the complexity class P, solvable by a deterministic Turing machine in polynomial time, are tractable. In cognitive science and philosophy, this tractability result has been used to argue that only functions in P can feasibly work as computational models of human cognitive capacities. One interesting area of computational complexity theory is descriptive complexity, which connects the expressive strength of systems of logic with computational complexity classes. In descriptive complexity theory, it is established that only first-order (classical) systems are connected to P or one of its subclasses. Consequently, second-order systems of logic are considered to be computationally intractable, and may therefore seem unfit to model human cognitive capacities. This is problematic when we think of the role of logic as the foundations of mathematics: in order to express many important mathematical concepts and systematically prove theorems involving them, we need a system of logic stronger than classical first-order logic. But if such a system is considered intractable, the logical foundation of mathematics can be prohibitively complex for human cognition. In this paper I argue, however, that this problem is the result of an unjustified direct use of computational complexity classes in cognitive modelling. Placing my account in the recent literature on the topic, I argue that the problem can be solved by considering computational complexity for humanly relevant problem-solving algorithms and input sizes.
- Book Chapter
7
- 10.1007/978-3-030-19311-9_20
- Jan 1, 2019
Theory of relativization provides profound insights into the structural properties of various collections of mathematical problems by way of constructing desirable oracles that meet numerous requirements of the problems. This is a meaningful way to tackle unsolved questions on relationships among computational complexity classes induced by machine-based computations that can relativize. Slightly different from an early study on relativizations of uniform models of finite automata in [Tadaki, Yamakami, and Li (2010); Yamakami (2014)], we intend to discuss relativizations of state complexity classes (particularly, 1BQ and 2BQ) defined in terms of nonuniform families of time-unbounded quantum finite automata with polynomially many inner states. We create various relativized worlds where certain nonuniform state complexity classes become equal or different. By taking a nonuniform family of promise decision problems as an oracle, we can define a Turing reduction witnessed by a certain nonuniform finite automata family. We demonstrate closure properties of certain nonuniform state complexity classes under such reductions. Turing reducibility further enables us to define a hierarchy of nonuniform nondeterministic state complexity classes.
- Research Article
18
- 10.1007/s11225-010-9254-6
- May 16, 2010
- Studia Logica
Earlier, we have studied computations possible by physical systems and by algorithms combined with physical systems. In particular, we have analysed the idea of using an experiment as an oracle to an abstract computational device, such as the Turing machine. The theory of composite machines of this kind can be used to understand (a) a Turing machine receiving extra computational power from a physical process, or (b) an experimenter modelled as a Turing machine performing a test of a known physical theory T. Our earlier work was based upon experiments in Newtonian mechanics. Here we extend the scope of the theory of experimental oracles beyond Newtonian mechanics to electrical theory. First, we specify an experiment that measures resistance using a Wheatstone bridge and start to classify the computational power of this experimental oracle using non-uniform complexity classes. Second, we show that modelling an experimenter and experimental procedure algorithmically imposes a limit on our ability to measure resistance by the Wheatstone bridge. The connection between the algorithm and the physical test is mediated by a protocol controlling each query, especially the physical time taken by the experimenter. In our studies we find that physical experiments have an exponential-time protocol; we formulate this as a general conjecture. Our theory proposes that measurability in physics is subject to laws which are collateral effects of the limits of computability and computational complexity.
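The oracle-query protocol in this line of work can be pictured as a bisection search driven by a Turing machine, where each physical comparison query is charged more time as the requested precision grows. The sketch below is a hypothetical illustration of such an exponential-time protocol, not the authors' specific Wheatstone-bridge procedure; the function names and cost model are assumptions.

```python
# Hypothetical sketch of a physical-oracle protocol: a machine estimates an
# unknown quantity (e.g. a resistance) by bisection, and each comparison
# query to the "experiment" is charged exponentially more time as the
# interval shrinks. Names and cost model are assumptions for illustration.

def measure_by_bisection(compare, lo, hi, bits):
    """compare(x) -> True if the unknown value is above x (the physical query).
    Returns (estimate, total_protocol_time)."""
    total_time = 0
    for k in range(bits):
        mid = (lo + hi) / 2
        total_time += 2 ** k          # assumed exponential-time query protocol
        if compare(mid):
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2, total_time

if __name__ == "__main__":
    true_resistance = 123.456  # hidden from the machine, known to the "experiment"
    estimate, t = measure_by_bisection(lambda x: true_resistance > x, 0.0, 1000.0, bits=20)
    print(f"estimate={estimate:.3f}, protocol time={t}")
```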
- Research Article
16
- 10.1098/rsta.2011.0427
- Jul 28, 2012
- Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences
We developed earlier a theory of combining algorithms with physical systems, on the basis of using physical experiments as oracles to algorithms. Although our concepts and methods are general, each physical oracle requires its own analysis, on the basis of some fragment of physical theory that specifies the equipment and its behaviour. For specific examples of physical systems (mechanical, optical, electrical), the computational power has been characterized using non-uniform complexity classes. The powers of the known examples vary according to assumptions on precision and timing but seem to lead to the same complexity classes, namely P/log* and BPP//log*. In this study, we develop sets of axioms for the interface between physical equipment and algorithms that allow us to prove general characterizations, in terms of P/log* and BPP//log*, for large classes of physical oracles, in a uniform way. Sufficient conditions on physical equipment are given that ensure a physical system satisfies the axioms.
- Research Article
11
- 10.3390/s23187701
- Sep 6, 2023
- Sensors (Basel, Switzerland)
Over the past decade, the artificial neural networks domain has seen considerable adoption of deep neural networks across many applications. However, deep neural networks are typically computationally complex and consume high power, hindering their applicability for resource-constrained applications such as self-driving vehicles, drones, and robotics. Spiking neural networks, often employed to bridge the gap between machine learning and neuroscience, are considered a promising solution for resource-constrained applications. Since deploying spiking neural networks on traditional von Neumann architectures requires significant processing time and high power, neuromorphic hardware is typically created to execute spiking neural networks. The objective of neuromorphic devices is to mimic the distinctive functionalities of the human brain in terms of energy efficiency, computational power, and robust learning. Furthermore, natural language processing, a machine learning technique, has been widely utilized to aid machines in comprehending human language. However, natural language processing techniques likewise cannot be deployed efficiently on traditional computing platforms. In this work, we strive to enhance natural language processing abilities by harnessing and integrating SNN traits, and to deploy the integrated solution efficiently and effectively on neuromorphic hardware. To this end, we propose a novel and efficient sentiment analysis model created as a large-scale SNN on SpiNNaker neuromorphic hardware that responds to user inputs. SpiNNaker neuromorphic hardware can simulate large spiking neural networks in real time while consuming low power. We initially create an artificial neural network model and train it on the Internet Movie Database (IMDB) dataset. Next, the pre-trained artificial neural network model is converted into our proposed spiking neural network model, called the spiking sentiment analysis (SSA) model. Our SSA model on SpiNNaker, called SSA-SpiNNaker, is designed to respond to user inputs with a positive or negative response. The proposed SSA-SpiNNaker model achieves 100% accuracy and consumes only 3970 joules of energy while processing around 10,000 words and predicting a positive/negative review. Our experimental results and analysis demonstrate that, by leveraging the parallel and distributed capabilities of SpiNNaker, the proposed SSA-SpiNNaker model achieves better performance than artificial neural network models. Our investigation into existing works revealed no similar models in the published literature, demonstrating the uniqueness of the proposed model. This work offers a synergy between SNNs and NLP within the neuromorphic computing domain, addressing challenges such as computational complexity and power consumption. The proposed model not only enhances the capabilities of sentiment analysis but also contributes to the advancement of brain-inspired computing, and could be utilized in other resource-constrained, low-power applications such as robotics, autonomous systems, and smart systems.
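The conversion step described above (a trained ANN turned into an SNN) is commonly done by mapping ReLU activations to firing rates. The sketch below shows that rate-based idea in plain NumPy under assumed conventions; it is not the authors' SpiNNaker pipeline, and the layer sizes, names, and parameters are hypothetical.

```python
import numpy as np

# Hypothetical sketch of rate-based ANN-to-SNN conversion: a ReLU unit's
# activation is interpreted as a firing rate, and inference is repeated over
# T time steps with integrate-and-fire neurons. This illustrates the general
# technique, not the specific SSA-SpiNNaker pipeline from the paper.

def snn_forward(weights, x, timesteps=100, threshold=1.0):
    """weights: list of (W, b) pairs from a trained ReLU network.
    x: input vector in [0, 1], interpreted as Poisson firing probabilities.
    Returns spike counts of the output layer (roughly proportional to ANN outputs)."""
    rng = np.random.default_rng(0)
    potentials = [np.zeros(b.shape) for _, b in weights]
    out_counts = np.zeros(weights[-1][1].shape)
    for _ in range(timesteps):
        # Input layer: Bernoulli spikes with rate proportional to the input.
        spikes = (rng.random(x.shape) < np.clip(x, 0, 1)).astype(float)
        for i, (W, b) in enumerate(weights):
            potentials[i] += W @ spikes + b / timesteps   # integrate
            spikes = (potentials[i] >= threshold).astype(float)
            potentials[i] -= spikes * threshold           # soft reset
        out_counts += spikes
    return out_counts

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    weights = [(rng.normal(size=(8, 4)), np.zeros(8)),
               (rng.normal(size=(2, 8)), np.zeros(2))]
    print(snn_forward(weights, rng.random(4)))
```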
- Conference Article
5
- 10.1109/sct.1990.113979
- Jul 8, 1990
A novel concept of nonuniform complexity classes is presented, from which a uniform way of describing known complexity classes is obtained. The usual nonuniform complexity classes introduced by R.M. Karp and R.J. Lipton are modified in two ways: first, advice functions are not only restricted by pure length bounds, but are also limited in their complexity. In a second step, the advice functions are extended to be functions of the input itself and not only of the length of the input. With this concept, certain known language classes are characterized in terms of NP with the advice of certain optimization functions in OptP or of the n-ary characteristic function of SAT.
- Research Article
14
- 10.1016/0304-3975(91)90185-5
- Aug 1, 1991
- Theoretical Computer Science
Nonuniform proof systems: a new framework to describe nonuniform and probabilistic complexity classes
- Book Chapter
8
- 10.1007/3-540-50517-2_81
- Jan 1, 1988
The concept of non-uniform proof systems is introduced. This notion allows a uniform description of non-uniform complexity classes [10], probabilistic classes (e.g. BPP [8,15,27,32], AM [2,3]) and language classes defined by simultaneous non-uniform and nondeterministic time bounds. Non-uniform proof systems provide a better understanding of many results concerning these classes, particularly their connections to uniform complexity measures. We give a uniform approach to lowness results [19,20] for various complexity classes. For instance, we show that co-NP/Poly ∩ NP is contained in the third level of the low hierarchy and that NP ⊆ (NP ∩ co-NP)/Poly implies that the polynomial-time hierarchy collapses to its second level (see also [1]). Finally, some evidence is given that the low hierarchy cannot be extended beyond its third level by the current techniques.
- Book Chapter
9
- 10.1007/11839132_4
- Jan 1, 2006
This work concerns the computational complexity of a model of computation that is inspired by optical computers. The model is called the continuous space machine and operates in discrete timesteps over a number of two-dimensional images of fixed size and arbitrary spatial resolution. The (constant time) operations on images include Fourier transformation, multiplication, addition, thresholding, copying and scaling. We survey some of the work to date on the continuous space machine. This includes a characterisation of the power of an important discrete restriction of the model. Parallel time corresponds, within a polynomial, to sequential space on Turing machines, thus satisfying the parallel computation thesis. A characterisation of the complexity class NC in terms of the model is also given. Thus the model has computational power that is (polynomially) equivalent to that of many well-known parallel models. Such characterisations give a method to translate parallel algorithms to optical algorithms and facilitate the application of the complexity theory toolbox to optical computers. In the present work we improve on these results. Specifically we tighten a lower bound and present some new resource trade-offs.
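To give a feel for the model's constant-time primitives, the sketch below applies the listed image operations (Fourier transform, pointwise multiplication, addition, thresholding, scaling) with NumPy on finite-resolution grids. It is only a finite-resolution approximation of the continuous space machine, with assumed names and parameters, not the model's formal definition.

```python
import numpy as np

# Hypothetical, finite-resolution illustration of the continuous space
# machine's primitive image operations. Each operation acts on a whole 2D
# image at once; in the model these count as single (constant-time) steps.

def fourier(img):            return np.fft.fft2(img)           # Fourier transform
def multiply(a, b):          return a * b                       # pointwise product
def add(a, b):               return a + b                       # pointwise sum
def threshold(img, lo, hi):  return np.clip(img.real, lo, hi)   # amplitude thresholding
def scale(img, factor):      # spatial rescaling by nearest-neighbour resampling
    n = img.shape[0]
    idx = (np.arange(n) / factor).astype(int) % n
    return img[np.ix_(idx, idx)]

if __name__ == "__main__":
    img = np.zeros((8, 8)); img[2:6, 2:6] = 1.0
    spectrum = fourier(img)
    filtered = multiply(spectrum, np.conj(spectrum))   # e.g. a correlation step
    print(threshold(np.fft.ifft2(filtered), 0.0, 1.0).round(2))
```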
- Dissertation
- 10.6092/unibo/amsdottorato/5573
- Apr 8, 2013
The thesis applies ICC (Implicit Computational Complexity) techniques to probabilistic polynomial complexity classes in order to obtain an implicit characterization of them. The main contribution lies in the implicit characterization of the class PP (Probabilistic Polynomial Time), giving a syntactical characterisation of PP and a static complexity analyser able to recognise whether an imperative program computes in probabilistic polynomial time. The thesis is divided in two parts. The first part focuses on solving the problem by creating a prototype functional language (a probabilistic variation of lambda calculus with bounded recursion) that is sound and complete with respect to Probabilistic Polynomial Time. The second part reverses the problem and develops a feasible way to verify whether a program, written in a prototype imperative programming language, runs in probabilistic polynomial time. This thesis represents one of the first steps for Implicit Computational Complexity over probabilistic classes. There remain hard open problems to investigate, and many theoretical aspects are strongly connected with these topics; I expect wide attention to ICC and probabilistic classes in the future.
- Book Chapter
2
- 10.1007/978-3-642-13962-8_35
- Jan 1, 2010
Within the framework of membrane systems, distributed parallel computing models inspired by the functioning of the living cell, various computational complexity classes have been defined, which can be compared against the computational complexity classes defined for Turing machines. Here some issues and results concerning the computational complexity of membrane systems are discussed. In particular, we focus our attention on the comparison among complexity classes for membrane systems with active membranes (where new membranes can be created by division of membranes existing in the system at a given moment) and the classes PSPACE, EXP, and EXPSPACE.
- Book Chapter
9
- 10.1007/978-3-540-73420-8_18
- Jan 1, 2007
We study a fundamental result of Impagliazzo (FOCS'95) known as the hard-core set lemma. Consider any function \(f:\{0,1\}^n \to \{0,1\}\) which is "mildly hard", in the sense that any circuit of size s must disagree with f on at least a δ fraction of inputs. Then the hard-core set lemma says that f must have a hard-core set H of density δ on which it is "extremely hard", in the sense that any circuit of size \(s' = O(s/(\frac{1}{\varepsilon^2}\log\frac{1}{\varepsilon\delta}))\) must disagree with f on at least a (1 − ε)/2 fraction of inputs from H. There are three issues of the lemma which we would like to address: the loss of circuit size, the need for non-uniformity, and its inapplicability to low-level complexity classes. We introduce two models of hard-core set constructions, a strongly black-box one and a weakly black-box one, and show that those issues are unavoidable in such models. First, we show that in any strongly black-box construction, one can only prove the hardness of a hard-core set for smaller circuits of size at most \(s' = O(s/(\frac{1}{\varepsilon^2}\log\frac{1}{\delta}))\). Next, we show that any weakly black-box construction must be inherently non-uniform: to have a hard-core set for a class G of functions, we need to start from the assumption that f is hard against a non-uniform complexity class with \(\Omega(\frac{1}{\varepsilon}\log|G|)\) bits of advice. Finally, we show that weakly black-box constructions in general cannot be realized in a low-level complexity class such as \(\mathrm{AC}^0[p]\): the assumption that f is hard for \(\mathrm{AC}^0[p]\) is not sufficient to guarantee the existence of a hard-core set.
- Book Chapter
8
- 10.1007/3-540-52753-2_38
- Jan 1, 1990
A logical framework is introduced which captures the behaviour of oracle machines and gives logical descriptions of complexity classes that are defined by oracle machines. Using this technique, the notion of first-order self-reducibility is investigated and applied to obtain a structural result about non-uniform complexity classes below P.