Computer-generated holography (CGH) is a promising technology for augmented reality displays, such as head-mounted or head-up displays. However, its high computational demand makes it difficult to implement in practice. Recent efforts to integrate neural networks into CGH have successfully accelerated computing speed, demonstrating the potential to overcome the trade-off between computational cost and image quality. Nevertheless, deploying neural-network-based CGH algorithms on computationally limited embedded systems requires more efficient models with lower computational cost, memory footprint, and power consumption. In this study, we developed a lightweight model for complex hologram generation by introducing neural network quantization. Specifically, we built a model based on tensor holography and quantized it from 32-bit floating-point precision (FP32) to 8-bit integer precision (INT8). Our performance evaluation shows that the proposed INT8 model achieves hologram quality comparable to that of the FP32 model while reducing the model size by approximately 70% and increasing the speed fourfold. Additionally, we implemented the INT8 model on a system-on-module, demonstrating its deployability on embedded platforms and its high power efficiency.
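To illustrate the kind of FP32-to-INT8 conversion the abstract describes, the sketch below applies post-training static quantization to a toy convolutional network using PyTorch's eager-mode quantization API. This is a minimal, hypothetical example: `TinyCGHNet`, its layer sizes, and the calibration data are assumptions for illustration only and do not reproduce the authors' tensor-holography model or their quantization pipeline.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in network; NOT the authors' tensor-holography model.
class TinyCGHNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.quant = torch.ao.quantization.QuantStub()      # FP32 -> INT8 entry point
        self.dequant = torch.ao.quantization.DeQuantStub()  # INT8 -> FP32 exit point
        self.body = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(8, 2, 3, padding=1),  # 2 channels, e.g. real/imaginary parts of a complex hologram
        )

    def forward(self, x):
        x = self.quant(x)
        x = self.body(x)
        return self.dequant(x)

model_fp32 = TinyCGHNet().eval()

# Post-training static quantization: attach an INT8 qconfig, insert observers,
# calibrate on representative inputs, then convert weights and activations to INT8.
model_fp32.qconfig = torch.ao.quantization.get_default_qconfig("fbgemm")
prepared = torch.ao.quantization.prepare(model_fp32)

with torch.no_grad():
    for _ in range(8):                        # calibration pass with dummy data
        prepared(torch.randn(1, 1, 64, 64))

model_int8 = torch.ao.quantization.convert(prepared)

# The INT8 model runs the same forward pass with quantized kernels.
out = model_int8(torch.randn(1, 1, 64, 64))
print(out.shape)
```

In practice, the model-size reduction comes from storing 8-bit weights instead of 32-bit ones, and the speedup from integer convolution kernels; whether the roughly 70% size reduction and fourfold speedup reported in the abstract carry over depends on the target hardware and runtime.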