Abstract

Several multimedia applications have recently been implemented on mobile devices, including digital image compression, video compression, and audio processing. Furthermore, Artificial Intelligence (AI) processing has grown in popularity, requiring mobile devices to process large amounts of data. Therefore, the processing core in a mobile device requires high performance, programmability, and versatility. Multimedia applications for mobile devices typically comprise repeated arithmetic and table‐lookup coding operations. A Content Addressable Memory‐based massive‐parallel SIMD matriX core (CAMX) is presented to increase the processing speed of both kinds of operations. The CAMX serves as a CPU core accelerator for mobile devices. It supports highly parallel processing and is equipped with two CAM modules for high‐speed repeated arithmetic and table‐lookup coding operations. Because it can handle logical, arithmetic, search, and shift operations in parallel, the CAMX offers high performance, programmability, and versatility on mobile devices. This paper shows that the CAMX can process repeated arithmetic and table‐lookup coding operations in parallel; single‐precision floating‐point addition over 1024 entries is computed in 5613 clock cycles without embedding a dedicated floating‐point arithmetic unit. This cycle count, obtained with a two's‐complement instruction‐reduced floating‐point addition, is 59% lower than that of a straightforward floating‐point addition implementation; the straightforward implementation is refined into the two's‐complement instruction‐reduced algorithm. To this end, this paper proposes an instruction reduction architecture that modifies the CAMX to directly access the data in the left and right CAM modules from the preserve register. The CAMX thus achieves high performance, programmability, and versatility without embedding a dedicated processing unit.
Moreover, assuming the CAMX operates at 0.1, 0.5, 1.0, or 1.5 GHz, it outperforms an ARM core using NEON and Vector Floating‐Point (VFP) on floating‐point addition once more than approximately 4500 data are processed in parallel. In addition, the CAMX is compared, at the same operating frequency, with related works that execute floating‐point addition with software instructions, a dedicated floating‐point arithmetic unit, or both. The comparison shows that a CAMX with 128‐bit, 1024‐entry CAM modules achieves higher performance than the related works that use only software instructions and those that combine software instructions with a dedicated floating‐point arithmetic unit. © 2023 Institute of Electrical Engineers of Japan. Published by Wiley Periodicals LLC.
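The abstract does not detail the CAMX's instruction sequence, but the general idea behind a two's‐complement floating‐point addition can be illustrated in software: representing the decoded mantissas as signed (two's‐complement) integers collapses the separate add and magnitude‐subtract paths of a straightforward sign‐magnitude implementation into a single signed addition, removing compare‐and‐swap steps. The sketch below is illustrative only (truncating instead of rounding, ignoring NaN/infinity/subnormal handling) and is not the paper's CAMX algorithm; all function names are assumptions.

```python
import struct

def f32_bits(x: float) -> int:
    """IEEE-754 single-precision bit pattern of x."""
    return struct.unpack('<I', struct.pack('<f', x))[0]

def bits_f32(b: int) -> float:
    """float value of a 32-bit IEEE-754 pattern."""
    return struct.unpack('<f', struct.pack('<I', b & 0xFFFFFFFF))[0]

def fadd_twos_complement(a: float, b: float) -> float:
    """Sketch of single-precision addition with signed mantissas.

    A negative operand becomes a negative integer mantissa, so one
    signed add replaces the sign-magnitude compare/subtract branches.
    """
    def decode(x: int):
        s = (x >> 31) & 1
        e = (x >> 23) & 0xFF
        m = x & 0x7FFFFF
        if e != 0:              # restore the hidden leading 1
            m |= 1 << 23
        return s, e, m

    sa, ea, ma = decode(f32_bits(a))
    sb, eb, mb = decode(f32_bits(b))

    # Two's-complement mantissas: sign is folded into the integer.
    ia = -ma if sa else ma
    ib = -mb if sb else mb

    # Align to the larger exponent; G guard bits limit alignment loss.
    G = 3
    e = max(ea, eb)
    ia = (ia << G) >> min(e - ea, 31)
    ib = (ib << G) >> min(e - eb, 31)

    isum = ia + ib              # the single signed addition
    if isum == 0:
        return 0.0
    s = 1 if isum < 0 else 0
    mag = -isum if s else isum

    # Renormalize the magnitude to a 24-bit mantissa.
    while mag >= (1 << (24 + G)):
        mag >>= 1
        e += 1
    while mag < (1 << (23 + G)):
        mag <<= 1
        e -= 1
    mag >>= G                   # drop guard bits (truncation, no rounding)
    return bits_f32((s << 31) | ((e & 0xFF) << 23) | (mag & 0x7FFFFF))
```

In a SIMD/CAM setting the attraction of this formulation is that every entry executes the same straight-line instruction stream regardless of operand signs, which is what allows the instruction count per parallel addition to drop.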
