Abstract

Combinatorial optimization problems (COPs) are ubiquitous in daily life, but many are NP-hard and intractable to solve with conventional computing. Ising model-based annealers have gained increasing attention for their efficiency in finding fast, approximate solutions. However, such solvers suffer from a scalability problem because the number of spins grows quadratically, and few hardware designs have been proposed to support large-scale COPs. In this paper, we propose scalable in-memory annealers that solve the large-scale travelling salesman problem (TSP) with crossbar arrays of FinFET-based charge-trap transistors. The intrinsic temporal noise of the drain current, caused by charge trapping/detrapping, is used to realize the annealing process. We apply two hardware implementations to map the Ising model: 1) a Hopfield neural network (HNN)-based design, which converts the graph network into a regular neural network to exploit crossbar memory arrays; and 2) an index-based design, which uses an index to record the visiting order and thus avoids the quadratic growth of spins needed to represent all permutations. For both implementations, hierarchical clustering algorithms are adopted to enable the solving of large-scale TSP instances and overcome the scalability challenge. We further speed up system convergence by updating non-adjacent clusters in parallel with equal-size k-means clustering. Finally, three hardware designs (HNN with conventional clustering, HNN with equal-size clustering, and the index method with conventional clustering) are tested on problems with up to 1060 cities and benchmarked through system-level simulation to show their trade-offs. All three converge quickly and consume ultra-low energy compared with state-of-the-art annealers, and each offers respective advantages in accuracy, chip area, or latency/energy consumption.
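To make the HNN-based Ising mapping concrete, the sketch below shows the standard Hopfield-style TSP energy (one binary spin per city/position pair, hence the quadratic spin count the abstract refers to) together with a toy Metropolis annealer in which random spin flips accepted under a cooled noise level loosely stand in for the intrinsic device noise. This is a minimal software illustration of the general formulation, not the paper's hardware design; the penalty weight `A`, the schedule, and the step counts are hypothetical choices.

```python
import math
import random

def tsp_energy(x, dist, A=2.0):
    """Hopfield/Ising-style TSP energy (standard formulation).
    x[i][j] = 1 means city i is visited at tour position j.
    A is a hypothetical penalty weight, not a value from the paper."""
    n = len(dist)
    # Constraint penalties: each city appears exactly once,
    # and each tour position holds exactly one city.
    pen = sum((sum(row) - 1) ** 2 for row in x)
    pen += sum((sum(x[i][j] for i in range(n)) - 1) ** 2 for j in range(n))
    # Tour length between cities at consecutive (cyclic) positions.
    length = sum(dist[i][k] * x[i][j] * x[k][(j + 1) % n]
                 for i in range(n) for k in range(n) for j in range(n))
    return length + A * pen

def anneal(dist, steps=2000, t0=2.0, seed=0):
    """Toy Metropolis annealer: single-spin flips accepted with a
    temperature-dependent probability, loosely mimicking how intrinsic
    drain-current noise would perturb spins in an in-memory annealer."""
    rng = random.Random(seed)
    n = len(dist)
    # Start from the identity permutation (a valid tour).
    x = [[1 if j == i else 0 for j in range(n)] for i in range(n)]
    e = tsp_energy(x, dist)
    for s in range(steps):
        t = t0 * (1.0 - s / steps) + 1e-3  # linearly cooled "noise" level
        i, j = rng.randrange(n), rng.randrange(n)
        x[i][j] ^= 1  # trial flip
        e_new = tsp_energy(x, dist)
        if e_new <= e or rng.random() < math.exp((e - e_new) / t):
            e = e_new          # accept
        else:
            x[i][j] ^= 1       # reject, undo flip
    return x, e
```

A valid permutation matrix incurs zero penalty, so its energy is exactly the tour length; for instance, with three cities and the identity assignment, the energy reduces to the cyclic tour 0→1→2→0.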
