Mixed reality-information modeling integrated workflow and application for building construction progress monitoring
ABSTRACT Construction progress monitoring suffers from subjective reporting, a lack of integrated visualization, and poor quantification of physical progress. Digitizing as-built construction progress against the as-planned model through immersive visualization can support real-time progress measurement. Mixed Reality (MR), integrated with Building Information Modeling (BIM), can enable such digitization. The literature shows that a workflow for real-time progress measurement in an MR environment, combining visualization of as-built and remaining as-planned elements, is currently lacking. This study develops a workflow integrating MR and BIM for construction progress measurement, using actual geometric measurements in an MR environment rather than a binary built/non-built classification of elements. The Unity3D game engine was used to create an MR-based application, along with a Revit-based BIM, for real-time immersive visualization and progress measurement. The application overlays the remaining as-planned work over the as-built in real time, improving the user's ability to measure progress and identify errors or clashes. Progress data can be exported to a spreadsheet for comparison, aiding further analysis. Implementation of the developed workflow on a case-study building showed an accuracy of 98.67% with a maximum error of 4.07%, giving a 99% confidence interval for progress measurement. This workflow can enhance construction management efficiency through improved progress data collection, comparison, and immersive visualization.
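The abstract's geometric (rather than binary) progress measure can be illustrated with a minimal sketch: each element's progress is the ratio of its measured as-built quantity to the as-planned quantity taken from the BIM model. The element names and quantities below are hypothetical, not data from the study.

```python
# Hypothetical sketch of geometric progress measurement (not the paper's code):
# progress = measured as-built quantity / as-planned quantity per element,
# instead of a binary built / not-built flag.

as_planned = {"wall_A": 24.0, "slab_1": 150.0, "column_3": 3.2}  # planned quantities
as_built = {"wall_A": 24.0, "slab_1": 96.0, "column_3": 1.6}     # measured in MR

def element_progress(planned, built):
    """Per-element progress ratio, capped at 100% of the planned quantity."""
    return {eid: min(built.get(eid, 0.0) / qty, 1.0) for eid, qty in planned.items()}

progress = element_progress(as_planned, as_built)
overall = sum(as_built.values()) / sum(as_planned.values())  # quantity-weighted total
print(progress)
print(f"overall: {overall:.1%}")
```

Per-element ratios like these are what an MR measurement overlay could feed into a spreadsheet export for planned-versus-actual comparison.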
- Conference Article
25
- 10.1109/vr.2019.8798180
- Mar 1, 2019
Virtual reality (VR) provides a completely digital world of interaction which enables users to modify, edit, and transform digital elements in a responsive way. Mixed reality (MR), which is the result of blending the digital world and the physical world together, brings new advancements and challenges to human, computer and environment interactions. This paper focuses on adapting the already-existing methods and tools in architecture to both VR and MR environments in the sustainable architectural design domain. For this purpose, we benefit from the semantically enriched data platforms of Building Information Modelling (BIM) tools and the performance calculation functions of building energy simulation tools, transferring these data into VR and MR environments. In this way, we were able to merge these diverse data for the virtual design activity. Nine participants had already tested the initial prototype of the MR-only interaction environment in our previous study [1]. Based on this feedback, the user interface and interaction mechanisms were updated and the environment was also made accessible in VR. These updates made four types of interactions possible in MR and VR: 1) MR environment using HoloLens with gestures, 2) MR environment using HoloLens with a clicker, 3) VR environment using HTC Vive with two controllers, and 4) HoloLens emulator with a mouse. All these interaction cases were tested by 21 architecture students in an in-house workshop. In this workshop, we collected data on presence, usability, and technology acceptance of these cases. Our results show that interaction in a VR environment is the most natural interaction type and that participants were eager to use both MR and VR environments instead of an emulator. To the best of our knowledge, this is the first comparative study of a BIM-based architectural design medium in both VR and MR environments.
- Conference Article
4
- 10.24928/jc3-2017/0147
- Jul 4, 2017
In modern construction projects, architects, engineers, and designers use different methods of construction visualization to support the conceptualization and final appearance of design ideas. This includes the use of virtual Building Information Modelling (BIM) content, as well as physical mock-ups to support design visualization for decision-making prior to construction. Prior research has demonstrated a variety of benefits that BIM can provide for visualization. Mixed Reality (MR) may be able to offer some of the benefits of both purely physical mock-ups and purely virtual BIM walkthroughs. However, the prior studies used specific computing devices and MR applications for specific construction use-cases. The goal was either to solve a specific problem or to prove the concept that MR is feasible for various uses. Therefore, it was necessary to develop the exact same MR environment that could run on different computing devices, allowing identification of the differences between computing devices running the same MR environment. This paper presents a consistent methodology for leveraging existing BIM content to generate marker-based MR environments on various commercially available computing devices. This study tests the methodology for development and validates it by successfully building and running the same MR environment on various devices. Additionally, challenges associated with implementing this visualization mode in design and constructability review sessions were highlighted. The research questions addressed include: 1) What are the steps needed for developing MR visualization interfaces in design and constructability review sessions? and 2) What are the possible constraints that may influence MR performance on different mobile computers? The conclusions from this study will help researchers better understand the process for MR implementation and the limitations in using this visualization environment.
Additionally, it may help to expand the use of MR interfaces for different construction use-cases.
- Research Article
- 10.3390/app15179713
- Sep 4, 2025
- Applied Sciences
This study proposes a Two-points Spatial Alignment System (TSAS) for accurate positioning of Building Information Modeling (BIM) objects in Mixed Reality (MR) environments at construction sites. Conventional spatial alignment methods present limitations: marker-based approaches require precise marker installation and setup in predefined locations, while drag-based methods rely considerably on user manipulation skills. TSAS utilizes Y-axis rotation and vector-based scaling mechanisms to facilitate the alignment process. In a usability evaluation with 30 participants in MR environments, TSAS achieved a 50.3 mm alignment error, compared to the marker-based method (64.0 mm) and the drag method (199.7 mm). A one-way Analysis of Variance (ANOVA) confirmed that these differences in accuracy were statistically significant (p < 0.001). Notably, TSAS meets the Korean building regulation's tolerance while maintaining consistent accuracy in indoor environments. Although the marker method showed better efficiency in operation time, this evaluation excluded initial installation time requirements. The usability evaluation suggests this approach could be beneficial for BIM visualization and review processes in construction settings. Future research will focus on validating the system's performance in diverse construction environments, including larger buildings and complex sites.
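A two-point alignment of the kind the abstract describes can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not TSAS's published algorithm: given two model points and their two measured world correspondents, derive a Y-axis rotation and a uniform scale from the horizontal (x, z) components, plus a translation pinning the first pair together.

```python
import math

def two_point_align(m1, m2, w1, w2):
    """Hypothetical sketch (assumed, not TSAS's actual code): map BIM model
    points m1, m2 onto measured world points w1, w2 via a Y-axis rotation,
    a uniform scale, and a translation. Rotation and scale come from the
    horizontal (x, z) components only, matching the abstract's
    Y-axis-rotation + vector-based-scaling description."""
    vm = (m2[0] - m1[0], m2[2] - m1[2])          # model vector in the XZ plane
    vw = (w2[0] - w1[0], w2[2] - w1[2])          # world vector in the XZ plane
    theta = math.atan2(vw[1], vw[0]) - math.atan2(vm[1], vm[0])
    scale = math.hypot(*vw) / math.hypot(*vm)
    c, s = math.cos(theta), math.sin(theta)

    def transform(p):
        # shift to m1-origin, rotate about Y, scale, then shift onto w1
        dx, dy, dz = p[0] - m1[0], p[1] - m1[1], p[2] - m1[2]
        rx = dx * c - dz * s
        rz = dx * s + dz * c
        return (w1[0] + scale * rx, w1[1] + scale * dy, w1[2] + scale * rz)

    return transform

# anchor with two points, then place any other model point on site
t = two_point_align((0, 0, 0), (10, 0, 0), (5, 0, 5), (5, 0, 15))
print(t((10, 0, 0)))  # lands on (5, 0, 15) up to floating-point rounding
```

With only two points the vertical axis is pinned by the first point's height, which is one reason a method like this suits floor-level BIM placement rather than full 6-DOF registration.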
- Research Article
2
- 10.3390/aerospace9070340
- Jun 25, 2022
- Aerospace
In MR (mixed reality) environments, visual searches are often used for search and localization missions. Existing search and localization technologies suffer from problems such as a limited field of view and information overload, and cannot satisfy the need for rapid and precise localization of specific flying objects within a group of air and space targets under modern air and space situational requirements. The resulting inefficient interactions throughout the mission process degrade human decision-making and judgment. To address this problem, we carried out a multimodal optimization study on the use of an auditory-assisted visual search for localization in an MR environment. In the spatial–spherical coordinate system, the target flight object's position is uniquely determined by the height h, distance r, and azimuth θ. Therefore, there is an urgent need to study the cross-modal connections between auditory elements and these three coordinates based on a visual search. In this paper, an experiment was designed to study the correlation between auditory intuitive perception and vision and the cognitive induction mechanism. The experiment included the three cross-modal mappings of pitch–height, volume–distance, and vocal tract alternation–spatial direction. The research conclusions are as follows: (1) Visual cognition is induced by high, medium, and low pitches to be biased towards the high, medium, and low spatial regions of the visual space. (2) Visual cognition is induced by loud, medium, and low volumes to be biased towards the near, middle, and far spatial regions of the visual space. (3) Based on the HRTF application, the vocal tract alternation scheme is expected to significantly improve the efficiency of visual interactions.
Visual cognition is induced by left short sounds, right short sounds, left short and long sounds, and right short and long sounds to be biased towards the left, right, left-rear, and right-rear directions of visual space. (4) The cognitive load of search and localization technologies is significantly reduced by incorporating auditory factors. In addition, the efficiency and effect of the accurate search and positioning of space-flying objects have been greatly improved. The above findings can be applied to the research on various types of target search and localization technologies in an MR environment and can provide a theoretical basis for the subsequent study of spatial information perception and cognitive induction mechanisms in an MR environment with visual–auditory coupling.
- Research Article
34
- 10.1016/j.rcim.2022.102332
- Oct 1, 2022
- Robotics and Computer-Integrated Manufacturing
Mixed reality-integrated 3D/2D vision mapping for intuitive teleoperation of mobile manipulator
- Research Article
- 10.3389/fnins.2026.1713018
- Feb 27, 2026
- Frontiers in Neuroscience
Steady-state visually evoked potentials (SSVEP), owing to their high signal-to-noise ratio and low training cost, are widely regarded as an effective approach for constructing visually driven brain-computer interfaces (BCI), particularly in neurorehabilitation applications. However, the accommodation-vergence conflict (VAC) commonly present in mixed reality (MR) and virtual reality (VR) head-mounted displays may attenuate neural responses in the visual cortex, thereby compromising the long-term usability of such systems. This study aims to systematically evaluate the effects of MR and VR environments under different virtual depth conditions on SSVEP signal quality, classification performance, and visual comfort, providing parameter guidelines for the design of immersive visual BCIs in rehabilitation contexts. Green flickering stimuli at 7.5, 11.25, and 18 Hz were presented at three virtual depths of 0.4, 1.0, and 1.8 m. Feature extraction and classification were performed using canonical correlation analysis (CCA), Filter-Bank Canonical Correlation Analysis (FBCCA), and task-related component analysis (TRCA). The results showed a negative correlation between stimulus distance and SSVEP classification accuracy, with FBCCA achieving the highest accuracy at the 0.4 m depth (71.8% ± 33.8%). Overall, the signal-to-noise ratio (SNR) in the MR environment was higher than that in the VR environment, with the most pronounced difference observed under the 1.8 m condition, suggesting that MR is more effective in alleviating VAC and maintaining stable visual cortical responses. Among the three stimulation frequencies, 11.25 Hz elicited the highest SSVEP amplitude and SNR, indicating it as the optimal frequency band. Subjective visual fatigue assessments revealed higher scores for VR in terms of diplopia and fixation difficulty, with trends consistent with the observed SNR reduction.
This study elucidates the interactive modulation effects of virtual depth, display modality, and flicker frequency on SSVEP, and demonstrates that MR outperforms VR in terms of signal stability, visual comfort, and potential rehabilitation usability. The derived parameters provide experimentally validated optimization strategies for stimulus depth and frequency in vision-based attention training, spatial orientation training, upper-limb interactive tasks, and immersive feedback systems in neurorehabilitation, thereby contributing to improved long-term adherence and clinical translational value of future rehabilitation BCI.
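The CCA-based decoding step the abstract mentions is a standard SSVEP technique; a minimal generic sketch (assumed, not the study's actual pipeline; the sampling rate and noise levels below are invented for the demo) correlates an EEG epoch against sine/cosine reference signals at each candidate flicker frequency and picks the best match.

```python
import numpy as np

def cca_max_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def classify_ssvep(eeg, freqs, fs, n_harmonics=2):
    """Pick the candidate flicker frequency whose sin/cos reference set
    (fundamental plus harmonics) correlates best with the EEG epoch.
    eeg: (samples, channels) array; freqs: candidate frequencies in Hz."""
    t = np.arange(eeg.shape[0]) / fs
    scores = []
    for f in freqs:
        ref = np.column_stack(
            [fn(2 * np.pi * f * h * t)
             for h in range(1, n_harmonics + 1)
             for fn in (np.sin, np.cos)]
        )
        scores.append(cca_max_corr(eeg, ref))
    return freqs[int(np.argmax(scores))]

# synthetic check: a noisy 11.25 Hz response among the study's three frequencies
rng = np.random.default_rng(0)
fs, n = 250, 500                      # 2 s epoch at an assumed 250 Hz rate
t = np.arange(n) / fs
eeg = np.column_stack([
    np.sin(2 * np.pi * 11.25 * t) + 0.3 * rng.standard_normal(n),
    np.cos(2 * np.pi * 11.25 * t) + 0.3 * rng.standard_normal(n),
])
print(classify_ssvep(eeg, [7.5, 11.25, 18.0], fs))  # → 11.25
```

FBCCA extends this by band-pass filtering the EEG into sub-bands and combining their CCA scores, which is one reason it outperformed plain CCA in the study.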
- Research Article
4
- 10.1108/ils-10-2020-0235
- Jul 9, 2021
- Information and Learning Sciences
Purpose: This paper aims to show how collective embodiment with physical objects (i.e. props) supports young children's learning through the construction of liminal blends that merge physical, virtual and conceptual resources in a mixed-reality (MR) environment. Design/methodology/approach: Building on Science through Technology Enhanced Play (STEP), we apply the Learning in Embodied Activity Framework to further explore how liminal blends can help us understand learning within MR environments. Twenty-two students from a mixed first- and second-grade classroom participated in a seven-part activity sequence in the STEP environment. The authors applied interaction analysis to analyze how students' actions performed with the physical objects helped them to construct liminal blends that allowed key concepts to be made visible and shared for collective sensemaking. Findings: The authors found that conceptually productive liminal blends occurred when students constructed connections between the resources in the MR environment and coordinated their embodiment with props to represent new understandings. Originality/value: This study concludes with implications for how the design of an MR environment and teachers' facilitation within it support students in constructing liminal blends and their understanding of complex science phenomena.
- Research Article
- 10.3389/frvir.2025.1722248
- Jan 30, 2026
- Frontiers in Virtual Reality
In recent years, virtual, augmented, and mixed reality (MR) technologies have gained increasing attention in sports training for their potential to improve motor skills and team coordination. However, existing systems predominantly emphasize individual skills or small-scale settings, offering limited support for realistic multi-player, full-court tactical training. To address this gap, this paper proposes a large-scale mixed reality environment for training coordinated tactical plays in basketball. In this environment, players can practice various tactics such as ball passing and screens with virtual players and visual instructions overlaid in the real environment using a see-through head-mounted display. Two evaluation experiments were conducted. In Experiment 1, expert players performed pick-and-roll plays in both MR and real environments. The results showed no significant differences in execution time or movement trajectories, suggesting that the MR environment may offer spatial and temporal consistency comparable to real play. In Experiment 2, novice players trained with the MR system and with a conventional method in real space. The results showed higher improvements in both spatial positioning and timing in the MR environment, suggesting it could support the training of coordinated tactical plays. These findings suggest the potential of MR technology for skill training involving multiplayer coordination in realistic tactical scenarios.
- Research Article
73
- 10.1108/ci-04-2021-0069
- Sep 6, 2021
- Construction Innovation
Purpose: The purpose of this study is to develop a building information modelling (BIM)-based mixed reality (MR) application to enhance and facilitate the process of managing bridge inspection and maintenance works remotely from the office. It aims to address the ineffective decision-making on maintenance tasks under the conventional method, which relies on documents and 2D drawings from visual inspection. This study targets two key issues: creating a BIM-based model for bridge inspection and maintenance; and developing this model on an MR platform based on Microsoft HoloLens. Design/methodology/approach: A literature review is conducted to determine the limitations of MR technology in the construction industry and identify the gaps in integrating BIM and MR for bridge inspection works. A new framework for greater adoption of integrated BIM and HoloLens is proposed. It consists of a bridge information model for inspection and a newly developed HoloLens application named "HoloBridge". This application contains functional modules that allow users to check and update the progress of inspection and maintenance. The application has been implemented for an existing bridge in South Korea as the case study. Findings: The results from the pilot implementation show that inspection information management can be enhanced because the inspection database can be systematically captured, stored and managed through BIM-based models. Interpretation and visualization of inspection information in the MR environment have been improved through intuitively interactive 3D models in real-time simulation. Originality/value: The proposed framework, through the "HoloBridge" application, explores the potential of integrating BIM and MR technology using HoloLens. It provides new possibilities for remote inspection of bridge conditions.
- Book Chapter
2
- 10.1007/978-3-030-51295-8_83
- Jul 14, 2020
In recent decades, technological advancements have led to the introduction of wearable computing devices allowing visualization using virtual, augmented, and mixed reality. The Architecture, Engineering, and Construction industry has seen an increase in the use of such wearable technology, especially with the introduction of information modeling. Building Information Modeling (BIM) has been used extensively on vertical construction projects to better communicate information among project stakeholders and facilitate the construction process. More recently, infrastructure projects started using information modeling along with Geographic Information Systems (GIS) on local or cloud services. The integration of GIS and BIM has the potential to improve the construction process and aid in decision-making, especially when combined with newer visualization techniques. This paper presents a platform for GIS and BIM integration in Mixed Reality. The proposed application will allow a seamless transfer of data from BIM and GIS software into game engines, and the visualization of the BIM-GIS integration on Microsoft HoloLens 2. The application will allow users not only to visualize models, but to explore information in model elements, and make model changes in the mixed reality environment.
- Research Article
1
- 10.1016/j.vrih.2023.11.001
- Apr 1, 2024
- Virtual Reality & Intelligent Hardware
Effects of virtual agents on interaction efficiency and environmental immersion in MR environments
- Research Article
1
- 10.3390/s22228931
- Nov 18, 2022
- Sensors
In the mixed reality (MR) environment, the task of target motion perception is usually undertaken by vision. This approach suffers from poor discrimination and high cognitive load when tasks are complex, and cannot meet the needs of the air traffic control field for rapid capture and precise positioning of dynamic targets in the air. To address this problem, we conducted a multimodal optimization study on target motion perception judgment, controlling a hand tactile sensor so that tactile sensation assists vision in the MR environment. This allows the approach to adapt to the interactive tasks expected under a future mixed reality holographic aviation tower. Motion perception tasks are usually divided into urgency sensing for multiple targets and precise position tracking for single targets, according to the number of targets and the task division. Therefore, in this paper, we designed experiments to investigate the correlation between tactile intensity–velocity correspondence and target urgency, and the correlation between the PRS (position, rhythm, sequence) tactile indication scheme and position tracking. We also evaluated these through a comprehensive experiment. We obtained the following conclusions: (1) high, higher, medium, lower, and low tactile intensities bias human visual cognitive induction towards fast, faster, medium, slower, and slow moving targets, and this correspondence can significantly improve the efficiency of participants' judgment of target urgency; (2) under the PRS tactile indication scheme, position-based rhythm and sequence cues improve the tracking of a target's dynamic position, with rhythm cues being more effective; however, adding rhythm and sequence cues at the same time can cause clutter; (3) tactile-assisted vision substantially improves the comprehensive perception of dynamic target movement.
The above findings are useful for the study of target motion perception in MR environments and provide a theoretical basis for subsequent research on the cognitive mechanism and quantification of tactile indication in MR environments.
- Book Chapter
- 10.36253/979-12-215-0289-3.02
- Jan 1, 2023
Digitalization in the construction industry is increasingly striving to create digital twins in order to continuously exploit optimization potential in the management and utilization of existing buildings. Building Information Modeling (BIM)-based as-is or as-built documentation represents a promising basis in this context, which requires creating a geometric model, for example based on point clouds, as well as semantic enrichment in a Scan-to-BIM workflow. Conventionally, this is carried out manually by specialists on 2D screens and is often time-consuming and costly. The project "Building Inspector XR" addresses these issues and presents an intuitive solution for BIM-based as-is/as-built documentation using X-Reality (XR). In Virtual Reality (VR), BIM models are created off-site from point clouds and then verified in Mixed Reality (MR) on-site. By integrating (partially) automated methods and targeting user-friendliness in our solution, Scan-to-BIM can be realized more efficiently and intuitively. In this paper, the focus lies on the innovative aspects of our XR application, which encompass VR and MR environments, automation support, modeling schemes in compliance with BIM standards, and the registration of models in reality for MR. Additionally, the paper shows the interconnected toolchain that facilitates an efficient Scan-to-BIM workflow.
- Research Article
45
- 10.1016/j.cirp.2009.03.020
- Jan 1, 2009
- CIRP Annals
A mixed reality environment for collaborative product design and development