Abstract

The automatic creation of three-dimensional prototypes and digital copies of real-world objects is a revolutionary innovation that is actively used today in many areas of human activity, for example, for identification in smartphones and e-commerce applications, as well as in visualization and design systems. This trend has intensified now that additive manufacturing has become available to a wide range of users and large-scale repositories of three-dimensional objects are becoming increasingly popular and widespread. One of the tasks that a person solves every day at an unconscious level is the recognition of patterns: visual, auditory, tactile and others. Pattern recognition makes it possible to identify people by their external features and distinguish them from one another, to identify sounds, to classify different objects by shared properties, and to accurately estimate perceived characteristics of observed objects such as color, shape, volume and depth. Recognizing objects of the surrounding world and inferring their scale and volume from two-dimensional projections is one of the most relevant and widely studied problems addressed by computer vision methods. However, this class of tasks is difficult to formalize, which makes solutions time-consuming to develop and implement. The article describes the development of a software package that reconstructs three-dimensional scenes from their projections using neural network machine learning methods: the fundamentals of three-dimensional reconstruction are reviewed, a model of the overall architecture of the software package (PAK) is proposed, the architecture of the developed neural network is presented, and the results of training and test experiments are reported.
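To make the class of methods referred to above more concrete, the following is a minimal illustrative sketch of a single-view encoder-decoder network that maps a 2D projection to a coarse voxel occupancy grid. It is not the architecture developed in the article; the layer sizes, class name, input resolution, and the choice of PyTorch are all assumptions made purely for demonstration.

# Illustrative sketch only: a single-view encoder-decoder mapping a 2D
# projection (RGB image) to a coarse voxel occupancy grid. All layer sizes
# and names are assumptions; the paper's actual network is in the full text.
import torch
import torch.nn as nn

class SingleViewVoxelNet(nn.Module):
    def __init__(self):
        super().__init__()
        # 2D convolutional encoder: 64x64 RGB image -> latent vector
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),    # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),   # 32 -> 16
            nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1),  # 16 -> 8
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, 256),
        )
        # 3D transposed-convolutional decoder: latent vector -> 32^3 voxel grid
        self.decoder = nn.Sequential(
            nn.Linear(256, 128 * 4 * 4 * 4),
            nn.ReLU(),
            nn.Unflatten(1, (128, 4, 4, 4)),
            nn.ConvTranspose3d(128, 64, 4, stride=2, padding=1),  # 4 -> 8
            nn.ReLU(),
            nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1),   # 8 -> 16
            nn.ReLU(),
            nn.ConvTranspose3d(32, 1, 4, stride=2, padding=1),    # 16 -> 32
            nn.Sigmoid(),  # per-voxel occupancy probability
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(image))

# Usage: a batch of 64x64 projections yields a batch of 32^3 occupancy grids,
# which can be trained against ground-truth voxelizations with cross-entropy.
model = SingleViewVoxelNet()
images = torch.randn(2, 3, 64, 64)
voxels = model(images)  # shape: (2, 1, 32, 32, 32)
loss = nn.functional.binary_cross_entropy(voxels, torch.rand_like(voxels))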

