Multifocal multiview (MFMV) is an emerging form of high-dimensional optical data that records richer scene information but yields huge data volumes. To unveil its imaging mechanism, we present an angular-focal-spatial representation model, which decomposes high-dimensional MFMV data into angular, spatial, and focal dimensions. To construct a comprehensive MFMV dataset, we leverage representative imaging prototypes, including digital camera imaging, emerging plenoptic refocusing, and synthesized Blender 3D creation. To our knowledge, it is the first MFMV dataset built from multiple acquisition methods. To efficiently compress MFMV data, we propose, to our knowledge, the first MFMV data compression scheme based on the angular-focal-spatial representation. It exploits inter-view, inter-stack, and intra-frame predictions to eliminate data redundancy in the angular, focal, and spatial dimensions, respectively. Experiments demonstrate that the proposed scheme outperforms the standard HEVC and MV-HEVC coding methods, achieving PSNR gains of up to 3.693 dB and bitrate savings of up to 64.22%.
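The angular-focal-spatial decomposition can be pictured as indexing each captured frame by its view position (angular), its focal-plane index (focal), and its pixel coordinates (spatial). Below is a minimal NumPy sketch of such a layout, together with one plausible traversal that groups frames for inter-view (angular), inter-stack (focal), and intra-frame (spatial) prediction. The array shape, the (u, v, f) indexing, and the helper names are illustrative assumptions, not the paper's actual data format or coding order.

```python
import numpy as np

# Hypothetical MFMV volume: U x V angular views, F focal planes per view,
# each frame H x W pixels (grayscale for simplicity). Shape and indexing
# are illustrative assumptions, not the dataset's actual layout.
U, V, F, H, W = 3, 3, 5, 64, 64
mfmv = np.random.randint(0, 256, size=(U, V, F, H, W), dtype=np.uint8)

def coding_order(center=(1, 1)):
    """Yield (frame index, reference index) pairs.

    One plausible ordering: code the central view's central focal slice
    as an intra frame (spatial prediction only), predict the other slices
    of that stack from it (inter-stack / focal), then predict every other
    view from the co-focal frame of the central view (inter-view / angular).
    """
    cu, cv = center
    cf = F // 2
    yield (cu, cv, cf), None                      # intra-frame anchor
    for f in range(F):
        if f != cf:
            yield (cu, cv, f), (cu, cv, cf)       # inter-stack prediction (focal)
    for u in range(U):
        for v in range(V):
            if (u, v) != (cu, cv):
                for f in range(F):
                    yield (u, v, f), (cu, cv, f)  # inter-view prediction (angular)

for frame_idx, ref_idx in coding_order():
    frame = mfmv[frame_idx]
    residual = frame.astype(np.int16) - (0 if ref_idx is None
                                         else mfmv[ref_idx].astype(np.int16))
    # A real codec would transform, quantize, and entropy-code the residual;
    # here we only report how much energy the prediction removes.
    print(frame_idx, ref_idx, int(np.abs(residual).sum()))
```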