This study compares the utility of multifrequency SAR and optical multispectral data for land-cover classification of Mumbai city and its nearby regions, with a special focus on water-body mapping. L-band ALOS-2 PALSAR-2, X-band TerraSAR-X, C-band RISAT-1, and Sentinel-2 datasets are used in this work, which serves as a retrospective study for the dual-frequency L- and S-band NASA-ISRO Synthetic Aperture Radar (NISAR) mission. The ALOS-2 PALSAR-2 data were pre-processed before the machine learning algorithms for image segmentation were applied: multi-looking was performed to generate square pixels of size 5.78 m, and target decomposition was then applied to generate a false-color composite RGB image. For the TerraSAR-X and RISAT-1 datasets, no multi-looking was performed; target decomposition was applied directly to generate false-color composite RGB images. Similarly, for the optical dataset, which has a resolution of 10 m, a true-color composite and a false-color composite RGB image were generated. For the comparative study between the ALOS-2 PALSAR-2 and Sentinel-2 datasets, the RGB images were divided into smaller chunks of 500 × 500 pixels each to create training and testing datasets. Ten image patches were taken from the large dataset; eight patches were used to train the machine learning models Random Forest (RF), K-Nearest Neighbor (KNN), and Support Vector Machine (SVM), and two patches were kept for testing and validation. For training the machine learning models, feature vectors were generated using the Gabor, Scharr, Gaussian, and Median filters.
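The filter-bank feature extraction and patch splitting described above can be sketched as follows. This is a minimal illustration using NumPy/SciPy; the kernel sizes, Gabor parameters, and the use of a single band are assumptions for illustration, not the study's exact settings.

```python
import numpy as np
from scipy import ndimage

def gabor_kernel(size=9, sigma=2.0, theta=0.0, lam=4.0):
    """Real part of a Gabor kernel (illustrative, assumed parameters)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def feature_stack(band):
    """Per-pixel feature vectors from one image band:
    raw intensity, Gabor response, Scharr edge magnitude,
    Gaussian-smoothed, and median-filtered values."""
    scharr_x = np.array([[3, 0, -3], [10, 0, -10], [3, 0, -3]], float)
    scharr_y = scharr_x.T
    feats = [
        band,
        ndimage.convolve(band, gabor_kernel()),      # texture (Gabor)
        np.hypot(ndimage.convolve(band, scharr_x),
                 ndimage.convolve(band, scharr_y)),  # edges (Scharr)
        ndimage.gaussian_filter(band, sigma=1.0),    # smoothing (Gaussian)
        ndimage.median_filter(band, size=3),         # speckle-robust (Median)
    ]
    return np.stack(feats, axis=-1)                  # shape (H, W, 5)

def split_patches(img, size=500):
    """Divide a composite image into non-overlapping size x size chunks."""
    h, w = img.shape[:2]
    return [img[i:i + size, j:j + size]
            for i in range(0, h - size + 1, size)
            for j in range(0, w - size + 1, size)]
```

The resulting (H, W, 5) stack can be reshaped to one feature vector per pixel and fed to the RF, KNN, or SVM classifiers.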
For patch 1, the mIOU for the true-color composite optical image varies from 0.2323 to 0.2866, and for the false-color composite optical image from 0.4130 to 0.4941, with the RF classifier performing best in both cases; for the ALOS-2 PALSAR-2 data, the mIOU varies from 0.4033 to 0.4663, with the RF classifier outperforming the KNN and SVM classifiers. For patch 2, the mIOU for the true-color composite optical data varies from 0.3451 to 0.4517, with KNN performing best, while the mIOU for the false-color composite optical image varies from 0.5156 to 0.5832 and for the ALOS-2 PALSAR-2 data from 0.4600 to 0.5178, with the RF classifier performing best in both cases. The gap between the ALOS-2 PALSAR-2 and Sentinel-2 results becomes apparent when the IOU of the water class (IOUw) is compared: the true-color composite optical image reaches a maximum IOUw of 0.2525 and the false-color composite optical image a maximum of 0.7366, whereas the ALOS-2 PALSAR-2 data achieve a maximum IOUw of 0.7948. The better performance of SAR data relative to the true-color composite optical data stems from misclassification of the ground and water classes as urban and forest in the true-color optical case, which can be attributed to the high similarity between the water and forest classes in true-color optical imagery; both classes are easily separable in SAR data. Using the false-color composite optical dataset resolves this issue, and it performs slightly better than the ALOS-2 PALSAR-2 data in the overall classification task. However, SAR data work best for water-body detection, as is evident from the high water-class IOU achieved with SAR data.
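The per-class IOU and mIOU reported above can be computed as below. This is a standard sketch, not the study's exact evaluation code; the class labels in the usage comment (water, urban, forest, ground) are the classes named in the text, encoded with assumed integer values.

```python
import numpy as np

def class_iou(pred, gt, cls):
    """Intersection-over-union for one class label."""
    p, g = (pred == cls), (gt == cls)
    union = np.logical_or(p, g).sum()
    return np.logical_and(p, g).sum() / union if union else float("nan")

def mean_iou(pred, gt, classes):
    """mIOU: mean of per-class IoUs, ignoring classes absent from both maps."""
    ious = {c: class_iou(pred, gt, c) for c in classes}
    return float(np.nanmean(list(ious.values()))), ious

# Usage with assumed label encoding: 0=water, 1=urban, 2=forest, 3=ground
# miou, per_class = mean_iou(predicted_map, ground_truth_map, [0, 1, 2, 3])
# per_class[0] is then the water-class IOUw discussed in the text.
```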
In addition to the comparative analysis between Sentinel-2 optical and ALOS-2 PALSAR-2 data, land-cover classification was performed on X-band TerraSAR-X and C-band RISAT-1 data on a single patch. The RF classifier again performed best, recording an mIOU of 0.5815 for the X-band TerraSAR-X data, 0.4031 for the C-band RISAT-1 data, and 0.6153 for the L-band ALOS-2 data.