Performance capture of human beings has been used to animate 3D characters in movies and games for several decades. Traditional performance capture methods require a dedicated, costly setup, usually consisting of multiple sensors placed at a distance from the subject, and therefore demand a substantial budget and a large space to accommodate. This severely limits their feasibility and portability. Egocentric (first-person/wearable) cameras, however, are attached to the body and are therefore mobile. With the growing acceptance of wearable technology by the general public, wearable cameras have also become cheaper, and their high portability can be exploited in the performance capture domain. Working with egocentric images is nonetheless challenging: the views are severely distorted by the first-person perspective, and body parts farther from the camera are highly prone to occlusion. In this paper, we review the existing state-of-the-art methods for performance capture using egocentric views.