Abstract
Estimation of human poses and behaviour for different activities in virtual reality/augmented reality (VR/AR) could have numerous beneficial applications. Human fall monitoring is especially important for elderly people and for non-typical activities with VR/AR applications. Many approaches improve the fidelity of fall monitoring systems through novel sensors and deep learning architectures; however, there is still a lack of detailed and diverse datasets for training deep learning fall detectors on monocular images. Synthetic data generation based on digital human simulation was implemented and examined using the Unreal Engine. The proposed pipeline provides automatic “playback” of various scenarios for digital human behaviour simulation, and this paper demonstrates the results of the proposed modular pipeline for generating synthetic data of digital human interaction with 3D environments. We used the generated synthetic data to train Mask R-CNN-based segmentation of the falling person’s interaction area. It is shown that, by training the model with simulation data, it is possible to recognize a falling person with an accuracy of 97.6% and classify the type of the person’s interaction impact. The proposed approach also covers a variety of scenarios that can benefit the deep learning training stage in other human action estimation tasks in a VR/AR environment.
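To make the training step concrete, the following minimal sketch shows how a COCO-pretrained Mask R-CNN from torchvision could be adapted to segment classes such as a falling person and the interaction area. The class count and the way both prediction heads are replaced are assumptions for illustration, not the authors' published code.

```python
# Hedged sketch: adapting a torchvision Mask R-CNN to synthetic fall data.
# NUM_CLASSES and the class semantics are illustrative assumptions.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

NUM_CLASSES = 3  # assumption: background, falling person, interaction area

def build_model(num_classes: int):
    # Start from a COCO-pretrained Mask R-CNN and replace both heads
    # so the model predicts the synthetic-data classes instead.
    model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

    # Replace the box classification head.
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

    # Replace the mask prediction head.
    in_channels_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
    model.roi_heads.mask_predictor = MaskRCNNPredictor(in_channels_mask, 256, num_classes)
    return model

model = build_model(NUM_CLASSES)
```

From here the model would be fine-tuned on the rendered frames with their automatically generated instance masks, following the standard torchvision detection training loop.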
Highlights
With the rapid progress of deep learning models, gathering the necessary amount of training data is a challenging task [1]
There are large-scale urban datasets, including modelled natural areas and landscapes [14,15], and such datasets have been shown to benefit convolutional neural network (CNN) training
A physical model of a person is placed in a 3D environment in which the human model interacts with a 3D interior to obtain simulation data (Figure 2a)
Summary
With the rapid progress of deep learning models, gathering the necessary amount of training data is a challenging task [1]. Synthetic data are used in the neural network training process to reduce the cost of collecting a large and diverse dataset and to solve domain-adaptation problems in visual tasks [2]. In this context, developing and improving three-dimensional modelling and rendering software makes it possible to model synthetic data for non-standard network training problems. A physical model of a person is placed in a 3D environment in which the human model interacts with a 3D interior to obtain simulation data (Figure 2a). In the experiment, we used a 3D room of fixed size: the width and length were 8 m and the height was 3 m. Paintings were randomly placed on the walls and rugs on the floor as interior objects. A rendering material type was assigned to each type of environment model (floor, walls, etc.). Moreover, the following material parameters were varied: texture scale, texture blending, colour, normal coefficient and roughness.
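As a rough illustration of this material randomization, the sketch below samples one random parameter set per surface type. The parameter ranges, the dataclass, and the surface list are assumptions made for the example; the paper's actual pipeline runs inside the Unreal Engine.

```python
# Hedged sketch of the material-parameter randomization described above.
# Ranges and surface names are illustrative assumptions, not the paper's values.
import random
from dataclasses import dataclass

@dataclass
class MaterialParams:
    texture_scale: float
    texture_blend: float
    colour: tuple           # RGB, each channel in [0, 1]
    normal_coefficient: float
    roughness: float

def sample_material() -> MaterialParams:
    # Draw each parameter independently so every rendered scene
    # exposes the detector to a different surface appearance.
    return MaterialParams(
        texture_scale=random.uniform(0.5, 4.0),
        texture_blend=random.uniform(0.0, 1.0),
        colour=tuple(random.random() for _ in range(3)),
        normal_coefficient=random.uniform(0.0, 2.0),
        roughness=random.uniform(0.0, 1.0),
    )

# One randomized material per environment model type (floor, walls, etc.).
scene_materials = {surface: sample_material() for surface in ("floor", "walls", "ceiling")}
```

Randomizing appearance per scene in this way is a standard domain-randomization device: it widens the visual variety of the synthetic frames so the trained detector is less tied to any single rendered look.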