Content-Based Image Retrieval (CBIR) is a technique for retrieving similar images from a large database by analysing the content features of a query image. The widespread use of digital platforms and devices has driven the adoption of CBIR and its allied technologies in computer vision and artificial intelligence. The process compares the representative features of the query image with those of the images in the dataset and ranks them for retrieval. Past research centred on handcrafted feature descriptors based on traditional visual features, but with the advent of deep learning, manual feature engineering has given way to automatic feature extraction. In this study, a cascaded network is utilised for CBIR. In the first stage, the model employs multi-modal features from variational autoencoders together with super-pixelated image characteristics to narrow the search space. In the second stage, an end-to-end deep learning network, a Convolutional Siamese Neural Network (CSNN), is used. Pseudo-labelling is incorporated to categorise images according to their affinity and similarity with the query image. Using this pseudo-supervised learning approach, the network evaluates the similarity between a query image and the available image samples. The Siamese network assigns a similarity score to each target image, and images whose scores surpass a predefined threshold are ranked and retrieved. The proposed CBIR system is evaluated on a widely recognised public benchmark, the Oxford dataset, and its performance is compared against state-of-the-art image retrieval methods. The results show substantial improvements in retrieval performance on several standard metrics, including average precision, average error rate, and average false positive rate, supporting the use of images from interconnected devices.
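The final retrieval step described above, scoring each candidate against the query and keeping those above a threshold, can be sketched as follows. This is a minimal illustration only: the abstract does not specify the CSNN's similarity head, so cosine similarity over embedding vectors is assumed here, and all names (`retrieve`, `cosine_similarity`, the toy 3-D embeddings) are hypothetical stand-ins for the network's actual outputs.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors (assumed metric;
    # the paper's CSNN may use a learned similarity head instead).
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query_emb, db_embs, threshold=0.5):
    """Score every database embedding against the query, keep those that
    surpass the threshold, and return (index, score) pairs ranked by score."""
    scores = [cosine_similarity(query_emb, e) for e in db_embs]
    kept = [(i, s) for i, s in enumerate(scores) if s >= threshold]
    kept.sort(key=lambda pair: pair[1], reverse=True)
    return kept

# Toy example: 3-D vectors standing in for CSNN embeddings.
query = np.array([1.0, 0.0, 0.0])
db = [np.array([0.9, 0.1, 0.0]),   # close to the query
      np.array([0.0, 1.0, 0.0]),   # orthogonal, filtered out by the threshold
      np.array([0.7, 0.7, 0.0])]   # moderately similar
ranked = retrieve(query, db, threshold=0.5)
```

In a full system the candidate set passed to `retrieve` would already be narrowed by the first-stage variational-autoencoder and super-pixel features, so the Siamese scoring runs over a reduced search space rather than the whole database.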