Abstract

In this paper we present a concrete design for a probabilistic (p-) computer based on a network of p-bits: robust classical entities that fluctuate between -1 and +1 with probabilities controlled by an input constructed from the outputs of other p-bits. The architecture of this probabilistic computer resembles a stochastic neural network, with the p-bit playing the role of a binary stochastic neuron, but with one key difference: there is no sequencer enforcing an ordering of p-bit updates, as is typically required. Instead, we explore sequencerless designs in which all p-bits are allowed to flip autonomously, and we demonstrate that such designs allow ultrafast operation unconstrained by available clock speeds without compromising solution fidelity. Based on experimental results from a hardware benchmark of the autonomous design and benchmarked device models, we project that a nanomagnetic implementation can scale to petaflips per second with millions of neurons. A key contribution of this paper is the focus on a hardware metric, flips per second, as a problem- and substrate-independent figure of merit for an emerging class of hardware annealers known as Ising Machines. Much like the shrinking feature sizes of transistors that have continually driven Moore's Law, we believe that flips per second can be continually improved in later technology generations of a wide class of probabilistic, domain-specific hardware.
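As a concrete illustration of the p-bit described above, the sketch below implements the standard binary-stochastic-neuron update rule, where each p-bit takes the value +1 with probability (1 + tanh(β I_i))/2 and its input I_i = Σ_j J_ij m_j + h_i is built from the outputs of the other p-bits; all p-bits are allowed to flip in the same step to mimic sequencerless operation. The function names, the coupling matrix J, and the bias vector h are illustrative assumptions, not the paper's hardware implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def autonomous_step(m, J, h, beta=1.0):
    """One 'autonomous' sweep: every p-bit recomputes its input from the
    current state of the others and flips independently, with no sequencer
    enforcing an update order (unlike a sequential Gibbs sweep)."""
    I = J @ m + h                                   # input to each p-bit
    p_up = 0.5 * (1.0 + np.tanh(beta * I))          # P(m_i = +1)
    return np.where(rng.random(len(m)) < p_up, 1, -1)

# Usage: two p-bits coupled ferromagnetically tend to align at large beta.
J = np.array([[0.0, 1.0],
              [1.0, 0.0]])
h = np.zeros(2)
m = np.array([1, -1])
for _ in range(20):
    m = autonomous_step(m, J, h, beta=2.0)
print(m)   # typically [1, 1] or [-1, -1]
```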

Highlights

  • Stochastic artificial neural networks (ANNs) have broad utility in optimization and machine learning (ML) tasks such as inference and learning [1]

  • EMULATION FRAMEWORK: Having established a model for autonomous p-bit operation, we describe the design and implementation of an FPGA-based framework to explore the performance, scalability, and other characteristics of an autonomous p-computer (ApC); an illustrative throughput sketch follows this list

  • This comparison imposes an artificial constraint on the Janus II design: namely, that all spins must be resident in the device simultaneously. This is a requirement of unclocked autonomous designs which, for the sake of comparison, we also applied to the sequenced design
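To make the flips-per-second figure of merit concrete, the sketch below counts attempted p-bit updates per wall-clock second in a small software emulation of a fully parallel network. It only illustrates how the metric is defined (attempted flips per step times steps per second); it is not the paper's FPGA framework, and the dense random couplings are an assumption made for the example.

```python
import time
import numpy as np

def flips_per_second(n_pbits=1024, n_steps=1000, beta=1.0, seed=0):
    """Estimate flip throughput of a fully parallel software p-bit network.
    Every step attempts one flip per p-bit, so
    throughput = n_pbits * n_steps / elapsed_seconds."""
    rng = np.random.default_rng(seed)
    J = rng.normal(scale=1.0 / np.sqrt(n_pbits), size=(n_pbits, n_pbits))
    J = 0.5 * (J + J.T)                 # symmetric couplings
    np.fill_diagonal(J, 0.0)            # no self-coupling
    h = np.zeros(n_pbits)
    m = rng.choice([-1, 1], size=n_pbits)

    start = time.perf_counter()
    for _ in range(n_steps):
        p_up = 0.5 * (1.0 + np.tanh(beta * (J @ m + h)))
        m = np.where(rng.random(n_pbits) < p_up, 1, -1)
    elapsed = time.perf_counter() - start
    return n_pbits * n_steps / elapsed

print(f"{flips_per_second():.3e} flips/s")
```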

Summary

Introduction

Stochastic artificial neural networks (ANNs) have broad utility in optimization and machine learning (ML) tasks such as inference and learning [1]. Ising Machines designed to solve hard problems in combinatorial optimization continue to emerge, built on a wide range of underlying technologies. Solvers for such problems have been explored using quantum effects, optical approaches, digital logic, and magnetic technologies [5]–[18]. These systems map a given optimization problem onto hardware whose operation is guided by a cost function [19], [20].
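The cost function in question is typically the Ising energy. In the p-bit notation used in the abstract (states $m_i \in \{-1, +1\}$, couplings $J_{ij}$, biases $h_i$), it takes the standard form below, so that hardware which drives the network toward low-energy configurations is effectively sampling good solutions of the mapped optimization problem:

$$ E(\mathbf{m}) = -\sum_{i<j} J_{ij}\, m_i m_j - \sum_i h_i\, m_i $$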

