Abstract. In this contribution, we propose a versatile image-based methodology for the high-fidelity 3D reconstruction of underwater scenes and their integration into a virtual reality environment. Underwater images typically suffer from colour degradation (bluish casts) caused by the propagation of light through water, a medium far more absorbing than air, and by the scattering of light on suspended particles. Other factors, such as artificial lighting, also diminish image quality and, consequently, the quality of the image-based 3D reconstruction. Moreover, degraded images directly affect the user's perception of the virtual environment through geometric and visual degradations. Here, it is argued that these effects can be mitigated by image pre-processing algorithms and specialized filters. The impact of different filtering techniques is evaluated with the aim of eliminating colour degradation and mismatches within the image sequences. The proposed methodology consists of five sequential pre-processing steps: saturation enhancement, haze reduction, and Rayleigh distribution adaptation to de-haze the images; global histogram matching to minimize differences among the images of the dataset; and image sharpening to strengthen the edges of the scene. The 3D reconstruction of the models is based on open-source structure-from-motion software. The models are optimized for virtual reality through mesh simplification, the baking of physically based rendering (PBR) texture maps, and levels of detail (LOD). The results of the proposed methodology are qualitatively evaluated on image datasets captured on the seabed of Santorini island, Greece, using an ROV platform.
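For illustration only, the sketch below approximates the described pre-processing chain with standard OpenCV/NumPy operations; it is not the authors' implementation. The haze-reduction and Rayleigh-distribution-adaptation steps are folded into a single channel-wise Rayleigh stretch for brevity, and all parameter values (saturation gain, Rayleigh scale, sharpening amount) are placeholders.

```python
# Illustrative sketch of the pre-processing steps, assuming uint8 BGR images.
import cv2
import numpy as np


def enhance_saturation(img_bgr, gain=1.3):
    """Boost saturation in HSV space to counter underwater colour loss."""
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 1] = np.clip(hsv[..., 1] * gain, 0, 255)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)


def rayleigh_stretch(channel, sigma=0.4):
    """Map a channel's empirical CDF onto a Rayleigh distribution (de-hazing step)."""
    flat = channel.ravel().astype(np.float32)
    ranks = np.argsort(np.argsort(flat)) / (flat.size - 1)   # empirical CDF in [0, 1]
    stretched = sigma * np.sqrt(-2.0 * np.log(1.0 - np.clip(ranks, 0, 0.999)))
    stretched = stretched / stretched.max() * 255.0
    return stretched.reshape(channel.shape).astype(np.uint8)


def dehaze(img_bgr):
    """Apply the Rayleigh stretch per channel as a simple haze-reduction proxy."""
    return cv2.merge([rayleigh_stretch(c) for c in cv2.split(img_bgr)])


def match_histogram(img_bgr, reference_bgr):
    """Global histogram matching of each channel to a reference image of the dataset."""
    out = np.empty_like(img_bgr)
    for i in range(3):
        src, ref = img_bgr[..., i], reference_bgr[..., i]
        s_vals, s_counts = np.unique(src, return_counts=True)
        r_vals, r_counts = np.unique(ref, return_counts=True)
        s_cdf = np.cumsum(s_counts) / src.size
        r_cdf = np.cumsum(r_counts) / ref.size
        lut = np.interp(s_cdf, r_cdf, r_vals)
        out[..., i] = lut[np.searchsorted(s_vals, src)].astype(np.uint8)
    return out


def sharpen(img_bgr, amount=1.0):
    """Unsharp masking to strengthen scene edges before feature matching."""
    blurred = cv2.GaussianBlur(img_bgr, (0, 0), sigmaX=3)
    return cv2.addWeighted(img_bgr, 1.0 + amount, blurred, -amount, 0)


def preprocess(img_bgr, reference_bgr):
    """Run the full chain; the output would then feed the structure-from-motion step."""
    img = enhance_saturation(img_bgr)
    img = dehaze(img)
    img = match_histogram(img, reference_bgr)
    return sharpen(img)
```

The choice of a single reference image for histogram matching is an assumption made here to keep the example self-contained; in practice the reference could be any representative image of the sequence.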