As 3D content becomes popular, its data size grows, which may cause a critical bandwidth problem in deploying 3D broadcast services. View synthesis using depth maps can play a key role in avoiding this problem. In this paper, we propose a universal view synthesis unit (UVSU), which performs fast depth image-based view synthesis through parallel processing on a programmable graphics processing unit (GPU). Assuming that a few stereo images and their corresponding disparity maps are given, we synthesize multiple virtual viewpoint images in real time. Moreover, the proposed UVSU can freely adjust various requirements, such as the number of virtual viewpoints, their positions, and the interval between adjacent viewpoints, depending on the 3D display device. The effectiveness of our approach is verified through experiments with various real images.
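The core operation described here, warping a reference image along the baseline using its disparity map, can be sketched roughly as follows. This is a hypothetical CPU sketch, not the paper's GPU implementation; the function name `synthesize_view`, the forward-warping scheme, and the depth-ordering trick are illustrative assumptions (a full depth image-based renderer would also handle hole filling and blending between two reference views).

```python
import numpy as np

def synthesize_view(left, disparity, alpha):
    """Warp `left` toward a virtual viewpoint at fraction `alpha`
    of the stereo baseline by shifting each pixel horizontally
    by alpha * disparity (assumed sketch, not the paper's method)."""
    h, w = disparity.shape
    out = np.zeros_like(left)
    filled = np.zeros((h, w), dtype=bool)
    # Process pixels in order of increasing disparity so that nearer
    # pixels (larger disparity) are written last and occlude farther
    # ones; NumPy fancy assignment keeps the last value for duplicates.
    order = np.argsort(disparity, axis=None, kind="stable")
    ys, xs = np.unravel_index(order, (h, w))
    xt = np.clip(np.round(xs - alpha * disparity[ys, xs]).astype(int), 0, w - 1)
    out[ys, xt] = left[ys, xs]
    filled[ys, xt] = True      # mark synthesized pixels; holes stay False
    return out, filled
```

In a GPU version of such a warp, each output pixel (or each source pixel) would map to one thread, which is what makes this kind of view synthesis well suited to parallel processing.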