With recent advances in LiDAR data processing pipelines integrated with signal processing techniques, it is now possible to rapidly acquire, process, and infer multidimensional information about 3D surroundings. This work proposes a novel compressive sensing (CS) framework, named MEMD-CSNet, based on Multivariate Empirical Mode Decomposition (MEMD) for adaptive 3D reconstruction of airborne urban LiDAR point clouds. Empirical Mode Decomposition (EMD) is a data-driven transform that decomposes an input signal into Intrinsic Mode Functions (IMFs), capturing implicit features as variations in spatial frequency. Our approach generates a new filtered 3D signal from an optimal subset of IMFs selected according to their statistical significance; the decomposition also hierarchically captures the overall shape of the signal in the last IMF (the residual function). The proposed MEMD-CSNet is a deep convolutional network that (1) computes a multivariate EMD on raw 3D LiDAR point clouds to generate a data-dependent, feature-rich filtered signal and (2) augments this signal with convolutional features, thereby learning a data-centric representation function (Ψ) that transforms the input into a low-dimensional sparse basis. We present extensive qualitative and quantitative analysis of MEMD-CSNet over a set of chosen CS measurement ratios. MEMD-CSNet achieves a maximum reconstruction PSNR of 108.22 dB (approx. 30 dB, or 1.36x, better than without MEMD) on selected urban LiDAR scenes. We envisage that MEMD-CSNet, as a data-centric framework, has high potential for generating data-driven sparse feature maps and reconstructing entire 3D LiDAR point clouds from a few sparse measurements.