Abstract
The field of mixed reality has grown rapidly in recent years, with a notable rise in funding, driven by increasing recognition of the advantages of integrating virtual information into the physical environment. Most contemporary marker-based mixed reality applications rely on algorithms for local feature detection and tracking. This study aims to improve the accuracy of object recognition in complex environments and to enable real-time classification by introducing a lightweight and efficient YOLOv4 detection model. Computer vision is a valuable and widely applied branch of artificial intelligence (AI), dedicated to developing advanced systems that can handle complex elements of the human environment. In recent years, deep neural networks have become a crucial component across many sectors owing to their well-established capacity to process visual input. This study presents a methodology for classifying and identifying objects using the YOLOv4 object detection algorithm. Convolutional neural networks (CNNs) have shown exceptional efficacy in object tracking and in extracting features from images; accordingly, the enhanced network architecture improves both identification precision and inference speed. This research contributes to the development of mixed-reality simulation systems for object detection and tracking in collaborative environments that are accessible to everyone, including users in the architectural field. The model was evaluated against other object detection approaches.
Based on the empirical results, the YOLOv4 model achieved a mean average precision (mAP) of 0.988, surpassing both YOLOv3 and other object detection models.
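The abstract reports results in terms of mean average precision (mAP), the standard metric for comparing object detectors such as YOLOv3 and YOLOv4. The paper does not include its evaluation code, so the following is only an illustrative sketch of how per-class average precision is commonly computed: detections are matched greedily to ground-truth boxes by intersection-over-union (IoU), and precision is accumulated over the recall curve. The function names and the 0.5 IoU threshold are assumptions, not taken from the paper, and this continuous-form AP differs slightly from interpolated PASCAL VOC/COCO variants.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def average_precision(detections, gt_boxes, iou_thr=0.5):
    """AP for one class (hypothetical helper, not from the paper).

    detections: list of (confidence, box); gt_boxes: list of boxes.
    Detections are matched greedily in descending confidence order;
    AP is the area under the resulting precision-recall curve.
    """
    detections = sorted(detections, key=lambda d: -d[0])
    matched = [False] * len(gt_boxes)
    tp, fp = [], []
    for conf, box in detections:
        best, best_i = 0.0, -1
        for i, g in enumerate(gt_boxes):
            o = iou(box, g)
            if o > best:
                best, best_i = o, i
        # A detection is a true positive if it overlaps an unmatched
        # ground-truth box above the IoU threshold.
        if best >= iou_thr and not matched[best_i]:
            matched[best_i] = True
            tp.append(1); fp.append(0)
        else:
            tp.append(0); fp.append(1)
    ap, recall_prev, tp_cum, fp_cum = 0.0, 0.0, 0, 0
    for t, f in zip(tp, fp):
        tp_cum += t; fp_cum += f
        recall = tp_cum / len(gt_boxes)
        precision = tp_cum / (tp_cum + fp_cum)
        ap += precision * (recall - recall_prev)
        recall_prev = recall
    return ap
```

The reported mAP of 0.988 would then be the mean of such per-class AP values over all object classes in the evaluation set.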
International Journal on Recent and Innovation Trends in Computing and Communication