Abstract

With the advancement of graphics engines, real-life structures can be digitized with more realistic representations than before. Virtual models obtained from LiDAR (Light Detection and Ranging) data can be inspected in real time in graphics engines without rendering a point cloud. Well-known proprietary software is used to convert LiDAR scans into triangle meshes, the representation best suited to graphics pipelines. However, proprietary software is usually expensive, hard to learn, and requires manual interaction. The proposed methodology generates virtual models from LiDAR data with little manual interaction, employing open-source software in an automated workflow for generic conversion. The point cloud is registered for geo-referencing, processed to build textured models, and implemented in Unreal Engine 5 for Virtual Reality (VR) deployment. Specific improvements were made for the selected case study, the Castro of Santa Trega. Visualization of the model is overall more realistic than rendering every point in the cloud. The average framerate improves by 229% when rendering optimized meshes instead of point clouds, yielding enriched visualization quality and reduced data size. A VR experience was implemented that runs at an average of 143 FPS, surpassing the 90 FPS commonly recommended to avoid motion sickness.
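To make the point-cloud-to-mesh step concrete, the toy sketch below triangulates a regularly gridded point cloud into the triangle-index representation that graphics pipelines consume. This is only an illustrative simplification, not the paper's actual workflow: the function name `grid_to_mesh` and the regular-grid assumption are ours, and a real pipeline would use open-source reconstruction tools on irregular LiDAR scans.

```python
# Toy illustration (not the paper's pipeline): turn a row-major grid of
# scanned points into a triangle mesh by emitting two triangles per cell.
from typing import List, Tuple

def grid_to_mesh(width: int, height: int) -> List[Tuple[int, int, int]]:
    """Return index triples into a row-major point array of size width*height,
    two triangles per grid cell."""
    tris = []
    for r in range(height - 1):
        for c in range(width - 1):
            i = r * width + c  # top-left corner of the cell
            tris.append((i, i + 1, i + width))              # upper triangle
            tris.append((i + 1, i + width + 1, i + width))  # lower triangle
    return tris

# A 3x3 grid of points has 4 cells, hence 8 triangles.
mesh = grid_to_mesh(3, 3)
```

The resulting index list, paired with the original point positions, is exactly the vertex/index-buffer layout a GPU expects, which is why meshes render so much faster than raw point clouds.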
