Abstract

Theoretical models of computation (automata, Turing machines, random access machines, etc.) define not only what is computable but also how much time and/or space a computation requires. The parallel extensions of the sequential models usually offer significant speedup in algorithm execution, although not all algorithms can be parallelized. Real parallel computer architectures based on these parallel computation models should exhibit the same algorithmic speedup. A complexity hierarchy can be defined for parallel computation models based on the space and/or time speedup of certain classes of algorithms. In particular, the parallel shared memory models are among the more powerful parallel computation models. Their close resemblance to actual computer architectures provides guidance for real architectural design, and their massive interconnection requirements are well suited to implementation in optical computing architectures. In addition, the shared memory models have recently been shown to simulate deterministic Boltzmann machines and threshold-logic Turing machines. Thus optical machines based on these parallel models can potentially exhibit the computational power of the shared memory models. These parallel computation models and their implications for optical computing are discussed.
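As an illustrative sketch (not taken from the paper), the algorithmic speedup offered by a shared memory model such as the PRAM can be seen in parallel reduction: summing n values takes O(n) steps sequentially but only O(log n) parallel steps on an EREW PRAM with n processors. The simulation below is hypothetical example code; each pass of the inner loop stands for one parallel step in which all active processors read and write disjoint shared-memory cells.

```python
def pram_parallel_sum(values):
    """Simulate an O(log n)-step EREW PRAM reduction.

    `values` is assumed to have a power-of-two length. At step d,
    the "processor" responsible for cell i combines cells i and
    i + 2^d; the per-step processor work is simulated sequentially.
    Returns the sum and the number of parallel steps taken.
    """
    cells = list(values)            # the shared memory
    n = len(cells)
    stride = 1
    steps = 0
    while stride < n:
        # One parallel step: all active processors touch
        # disjoint cells, as the EREW (exclusive read,
        # exclusive write) model requires.
        for i in range(0, n, 2 * stride):
            cells[i] += cells[i + stride]
        stride *= 2
        steps += 1
    return cells[0], steps

total, steps = pram_parallel_sum(range(8))   # 0+1+...+7 = 28 in 3 steps
```

The logarithmic step count is what a faithful hardware realization, optical or electronic, would have to preserve to deliver the model's predicted speedup.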
