Abstract

In this work, we present two parallel algorithms for large-scale discrete Fourier transform (DFT) computation on Tensor Processing Unit (TPU) clusters. The two algorithms correspond to two DFT formulations: one, denoted KDFT, is based on the Kronecker product; the other, denoted FFT, is based on the well-known Cooley-Tukey algorithm with phase adjustment. Both formulations take full advantage of the TPU's strength in matrix multiplication. The KDFT formulation additionally allows nonuniform inputs to be used directly, without an additional step. Both parallel algorithms apply the same data-decomposition strategy to the input: the decomposition keeps the dense matrix multiplications in KDFT and FFT local to individual TPU cores, so they can be performed entirely in parallel. Communication among TPU cores is handled by a one-shuffle scheme, in which sending and receiving take place simultaneously between neighboring cores and along the same direction on the interconnect network. The one-shuffle scheme is designed for the interconnect topology of TPU clusters and minimizes inter-core communication time. Both KDFT and FFT are implemented in TensorFlow. On a three-dimensional complex DFT of dimension 8192 × 8192 × 8192 run on a full TPU Pod, KDFT takes 12.66 seconds and FFT takes 8.3 seconds. A scaling analysis demonstrates the high parallel efficiency of the two DFT implementations on TPUs.
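To make the second formulation concrete, the following NumPy sketch performs one Cooley-Tukey radix split of a 1D DFT with an explicit phase adjustment (the twiddle factors) and checks the result against `np.fft.fft`. This is an illustrative single-machine sketch, not the paper's TPU implementation; the function name and the choice of radix are ours.

```python
import numpy as np

def fft_one_split(x, n1):
    # One Cooley-Tukey radix split: a length-N DFT becomes n1-point DFTs,
    # a phase adjustment, then n2-point DFTs, with N = n1 * n2.
    N = x.size
    n2 = N // n1
    a = x.reshape(n1, n2)                    # a[j1, j2] = x[j1*n2 + j2]
    b = np.fft.fft(a, axis=0)                # length-n1 DFTs down the columns
    k1 = np.arange(n1)[:, None]
    j2 = np.arange(n2)[None, :]
    b = b * np.exp(-2j * np.pi * k1 * j2 / N)  # phase adjustment (twiddles)
    c = np.fft.fft(b, axis=1)                # length-n2 DFTs along the rows
    return c.T.ravel()                       # output index k2*n1 + k1

x = np.random.rand(24) + 1j * np.random.rand(24)
assert np.allclose(fft_one_split(x, 4), np.fft.fft(x))
```

On TPUs, the small DFTs inside such a split are themselves dense matrix multiplications, which is what makes this formulation a good fit for the hardware.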

Highlights

  • The discrete Fourier transform (DFT) is critical in many scientific and engineering applications, including time series and waveform analyses, convolution and correlation computations, solutions to partial differential equations, density functional theory in first-principles calculations, spectrum analysis, synthetic aperture radar, computed tomography, magnetic resonance imaging, and derivatives pricing [1]–[4]

  • The one-shuffle scheme is designed for the interconnect topology of Tensor Processing Unit (TPU) clusters and minimizes the communication time among TPU cores

  • In addition to the fast algorithms, the performance of hardware accelerators has been steadily driving the efficiency enhancement of DFT computation: the first implementation of the fast Fourier transform (FFT) algorithm was realized on the ILLIAC IV parallel computer [9], [10]; over the years, DFT computation has been adapted to both shared-memory [11], [12] and distributed-memory architectures [13]–[17]
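The one-shuffle scheme mentioned in the highlights can be pictured as a ring rotation of data blocks. The sketch below is a toy single-process simulation under our own assumptions (names, block sizes, and core count are illustrative): each simulated "core" holds one block of a decomposed array, and in one shuffle step every core sends its block to the next core while receiving from the previous one, all along the same ring direction. On real TPU hardware this step would be a collective permute over the interconnect, not a list rotation.

```python
import numpy as np

# Four simulated "cores" on a ring, each holding one block of a decomposed array.
num_cores = 4
blocks = [np.full(3, core_id, dtype=float) for core_id in range(num_cores)]

def one_shuffle(blocks):
    # Core c receives the block previously held by core (c - 1) mod num_cores;
    # every send and receive happens between neighbors, in the same direction.
    return [blocks[(c - 1) % len(blocks)] for c in range(len(blocks))]

shifted = one_shuffle(blocks)
assert np.array_equal(shifted[1], blocks[0])

# After num_cores shuffles, every block has visited every core exactly once
# and returned home, so each core has seen the entire array over time.
state = blocks
for _ in range(num_cores):
    state = one_shuffle(state)
assert all(np.array_equal(s, b) for s, b in zip(state, blocks))
```

Because each step involves only neighbor-to-neighbor transfers in a fixed direction, the pattern maps directly onto the toroidal interconnect topology of a TPU cluster.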


Summary

INTRODUCTION

The discrete Fourier transform (DFT) is critical in many scientific and engineering applications, including time series and waveform analyses, convolution and correlation computations, solutions to partial differential equations, density functional theory in first-principles calculations, spectrum analysis, synthetic aperture radar, computed tomography, magnetic resonance imaging, and derivatives pricing [1]–[4]. Having witnessed how DFT computation benefits from the development of hardware accelerators, it is tempting to ask whether the TPU can empower large-scale DFT computation. This is plausible for the following four reasons: (1) the TPU is a machine-learning application-specific integrated circuit (ASIC) devised for neural networks (NNs); NNs require massive numbers of multiplications and additions between data and parameters, which the TPU handles very efficiently as matrix multiplications [29], and the DFT can be formulated as matrix multiplications between the input data and the Vandermonde matrix; (2) TPU chips are connected directly to each other with dedicated, high-speed, low-latency interconnects, bypassing the host CPU and any networking resources, so a large-scale DFT computation can be distributed among multiple TPUs with minimal communication time and very high parallel efficiency; (3) the large capacity of the TPU's in-package memory makes it possible to handle large-scale DFTs efficiently; and (4) the TPU is programmable with software front ends such as TensorFlow [30] and PyTorch [31], both of which make it straightforward to implement parallel DFT algorithms on TPUs. All four reasons have been verified in high-performance Monte Carlo simulations on TPUs [32], [33].
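Reason (1) above, the DFT as a matrix multiplication with the Vandermonde matrix, and the Kronecker-product structure behind the KDFT formulation can both be checked in a few lines of NumPy. This is a minimal sketch under our own naming, not the paper's TensorFlow implementation: the DFT matrix is the Vandermonde matrix of the roots of unity, a 1D DFT is one dense matmul with it, and a 2D DFT is either two matmuls or, equivalently, a single Kronecker-product matmul on the flattened input.

```python
import numpy as np

def dft_matrix(n):
    # Vandermonde matrix of the n-th roots of unity: W[j, k] = exp(-2*pi*i*j*k/n).
    jk = np.outer(np.arange(n), np.arange(n))
    return np.exp(-2j * np.pi * jk / n)

# 1D DFT as a single dense matrix multiplication.
x = np.random.rand(16) + 1j * np.random.rand(16)
assert np.allclose(dft_matrix(16) @ x, np.fft.fft(x))

# 2D DFT as two dense matmuls (the DFT matrix is symmetric, so no transposes
# appear), and equivalently in Kronecker form:
#   vec(F_m X F_n) = (F_n kron F_m) vec(X)   for column-major vec.
X = np.random.rand(4, 6) + 1j * np.random.rand(4, 6)
two_matmuls = dft_matrix(4) @ X @ dft_matrix(6)
kron_form = (np.kron(dft_matrix(6), dft_matrix(4)) @ X.ravel(order="F"))
kron_form = kron_form.reshape(4, 6, order="F")
assert np.allclose(two_matmuls, np.fft.fft2(X))
assert np.allclose(kron_form, np.fft.fft2(X))
```

The Kronecker form is what lets a multidimensional DFT be decomposed into per-axis dense matmuls that each stay local to a TPU core after the data decomposition.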

TPU SYSTEM ARCHITECTURE
KDFT FORMULATION
FFT FORMULATION
ONE-SHUFFLE SCHEME
IMPLEMENTATION OF THE PARALLEL ALGORITHM FOR KDFT
IMPLEMENTATION OF THE PARALLEL ALGORITHM FOR FFT
STRONG SCALING ANALYSIS OF 2D KDFT
STRONG SCALING ANALYSIS OF 3D KDFT
STRONG SCALING ANALYSIS OF 3D FFT
CONCLUSION AND DISCUSSION