Abstract

The inability of conventional electronic architectures to efficiently solve large combinatorial problems motivates the development of novel computational hardware. There has been much effort toward developing application-specific hardware across many different fields of engineering, such as integrated circuits, memristors, and photonics. However, unleashing the potential of such architectures requires the development of algorithms that optimally exploit their fundamental properties. Here, we present the Photonic Recurrent Ising Sampler (PRIS), a heuristic method tailored for parallel architectures that allows fast and efficient sampling from the distributions of arbitrary Ising problems. Since the PRIS relies on vector-to-fixed-matrix multiplications, we suggest implementing the PRIS in photonic parallel networks, which realize these operations at unprecedented speed. The PRIS provides sample solutions to the ground state of Ising models by converging in probability to their associated Gibbs distribution. The PRIS also relies on intrinsic dynamic noise and eigenvalue dropout to find ground states more efficiently. Our work suggests speedups in heuristic methods via photonic implementations of the PRIS.

Highlights

  • The inability of conventional electronic architectures to efficiently solve large combinatorial problems motivates the development of novel computational hardware

  • This photonic network can map arbitrary Ising Hamiltonians described by Eq (1), with K_ii = 0

  • The spin state at time step t, encoded in the phase and amplitude of N parallel photonic signals S(t) ∈ {0, 1}^N, first goes through a linear symmetric transformation decomposed in its eigenvalue form 2J = U Sq_α(D) U†, where Sq_α(D) is a diagonal matrix derived from D, whose design will be discussed in the following paragraphs
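The iteration sketched in the highlights above can be illustrated in NumPy. The function name `pris_step` and the specific form of the eigenvalue dropout (zeroing eigenvalues whose magnitude falls below a fraction `alpha` of the largest) are assumptions for illustration; the paper's exact Sq_α may differ. Only the overall structure — eigendecomposition of 2J, dropout on the eigenvalues, dynamic noise, and a nonlinear threshold back to {0, 1} — follows the description:

```python
import numpy as np

def pris_step(S, J, alpha=0.1, noise_std=0.3, rng=None):
    """One PRIS-style iteration (illustrative sketch, not the paper's exact scheme).

    S : current spin state in {0, 1}^N
    J : symmetric coupling matrix with zero diagonal
    alpha : eigenvalue-dropout threshold (assumed form)
    noise_std : standard deviation of the injected dynamic noise
    """
    rng = np.random.default_rng() if rng is None else rng
    # Eigendecomposition of the scaled coupling matrix: 2J = U D U^T.
    D, U = np.linalg.eigh(2 * J)
    # Eigenvalue dropout: zero out eigenvalues whose magnitude is below a
    # fraction alpha of the largest magnitude (one plausible Sq_alpha).
    D_kept = np.where(np.abs(D) >= alpha * np.abs(D).max(), D, 0.0)
    # Linear symmetric transformation of the spin state, plus dynamic noise.
    x = U @ (D_kept * (U.T @ S)) + rng.normal(0.0, noise_std, size=S.shape)
    # Nonlinear threshold back to {0, 1}.
    return (x > 0).astype(int)
```

Repeatedly applying `pris_step` yields a sequence of spin states; the noise level plays the role of an effective temperature, so the sequence samples low-energy configurations of the Ising Hamiltonian rather than descending deterministically.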



Introduction

The inability of conventional electronic architectures to efficiently solve large combinatorial problems motivates the development of novel computational hardware. Broad classes of problems in statistical physics, such as growth patterns in clusters[3], percolation[4], heterogeneity in lipid membranes[5], and complex networks[6], can be described by heuristic methods. These methods have proven instrumental for predicting phase transitions and the critical exponents of various universality classes – families of physical systems exhibiting similar scaling properties near their critical temperature[1]. Half a century before the contemporary Machine Learning Renaissance[13], the Little[14] and the Hopfield[15,16] networks were considered as early architectures of recurrent neural networks (RNNs). The latter was suggested as an algorithm to solve combinatorially hard problems, as it was shown to deterministically converge to local minima of arbitrary quadratic Hamiltonians of the form given in Eq (1). This analogy between statistical physics and computer science has nurtured a great variety of concepts in both fields[18], for instance, the analogy between neural networks and spin glasses[15,19]
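The deterministic convergence of the Hopfield network mentioned above can be sketched in a few lines. The function name `hopfield_descend` and the random asynchronous update order are illustrative choices; the key property is that each single-spin update can only lower the quadratic energy H(s) = −½ sᵀJs (for symmetric J with zero diagonal), so the dynamics reach a fixed point that is a local minimum:

```python
import numpy as np

def hopfield_descend(J, s, rng=None, max_sweeps=100):
    """Asynchronous Hopfield updates on spins s in {-1, +1}^N (illustrative sketch).

    Each accepted flip lowers the quadratic energy H(s) = -1/2 s^T J s
    (J symmetric, zero diagonal), so the dynamics converge deterministically
    to a local minimum of H.
    """
    rng = np.random.default_rng() if rng is None else rng
    s = s.copy()
    for _ in range(max_sweeps):
        changed = False
        for i in rng.permutation(len(s)):
            h = J[i] @ s              # local field acting on spin i
            new = 1 if h >= 0 else -1
            if new != s[i]:
                s[i] = new            # flip aligns spin with its field
                changed = True
        if not changed:               # fixed point: local minimum reached
            return s
    return s
```

This deterministic descent is exactly why the Hopfield network gets trapped in local minima, which motivates the stochastic, noise-driven sampling strategy of the PRIS described in the abstract.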
