Multimodal manifold modeling methods extend spectral geometry-aware data analysis to learning from multiple related and complementary modalities. However, most existing methods assume that each modality contains the same number of homogeneous samples and that partial correspondences between modalities are available as prior knowledge. This work introduces two new multimodal modeling methods. The first is a comprehensive framework for handling multimodal information in heterogeneous data without requiring such prior knowledge. To achieve this, we begin by extracting local descriptors using spectral graph wavelet signatures (SGWS) to identify manifold localities. We then propose a manifold regularization framework that incorporates functional mapping between SGWS descriptors (FMBSD) to determine pointwise correspondences. The second method, manifold-regularized multimodal classification based on pointwise correspondences (M2CPC), addresses multiclass classification of multimodal heterogeneous data by determining correspondences between modalities with the FMBSD method. Experimental results on three widely used cross-modal retrieval datasets (for FMBSD) and three benchmark multimodal multiclass classification datasets (for M2CPC) demonstrate the effectiveness of both methods and their superiority over state-of-the-art approaches.
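As a rough illustration of the first step only, the sketch below computes a spectral-graph-wavelet-style descriptor per vertex from one modality's affinity graph. The function name `sgws`, the band-pass kernel g(x) = x·exp(-x), and the toy scales are assumptions made for the example; they are not the paper's exact configuration or implementation.

```python
import numpy as np
from scipy.sparse.csgraph import laplacian

def sgws(adjacency, scales, kernel=lambda x: x * np.exp(-x)):
    """Illustrative per-vertex spectral graph wavelet signature (SGWS).

    adjacency : (n, n) symmetric affinity matrix of one modality's graph
    scales    : iterable of wavelet scales t
    kernel    : band-pass wavelet generator g; g(x) = x * exp(-x) is one common choice
    Returns an (n, len(scales)) descriptor matrix, one row per vertex.
    """
    L = laplacian(adjacency, normed=True)      # normalized graph Laplacian
    eigvals, eigvecs = np.linalg.eigh(L)       # L = Phi diag(lambda) Phi^T
    phi_sq = eigvecs ** 2                      # phi_k(i)^2 for every vertex i, mode k
    # W_t(i) = sum_k g(t * lambda_k) * phi_k(i)^2, one column per scale t
    return np.stack([phi_sq @ kernel(t * eigvals) for t in scales], axis=1)

# Toy usage on a small affinity matrix (hypothetical data)
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
descriptors = sgws(A, scales=[0.5, 1.0, 2.0, 4.0])
print(descriptors.shape)  # (4, 4): 4 vertices, 4 scales
```

Descriptors of this form serve only as the local geometric signatures on which a correspondence step such as FMBSD could be built; the functional mapping and manifold regularization stages are not shown here.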