Abstract

Digital Human Models (DHM) are used in digital mockups to identify human-factor problems in design and assembly. Present applications are largely limited to posture, biomechanics, reach, and simple visibility analysis, and they are typically developed as independent simulations hand-crafted by designers. Our aim is to develop natural simulations that use vision as a feedback agent for performing postural simulations in a human-like manner. In this paper, the work presented is limited to demonstrating active vision-based feedback for a typical hand-reach task without using inverse kinematics. The proposed concept builds on previously developed vision and hand modules and describes an integration methodology so that both modules can work in tandem, providing feedback and feed-forward mechanisms. The scheme primarily relies on the vision module, which acts like human eyes by providing spatial information about the hand and the object in the workspace. Analogous to retinal projection, the workspace object and the model of the DHM hand are geometrically projected onto a grid, and their relative positions are computed in terms of grid cells. These relative positions are then used to compute a direction vector that is fed back to the hand module to guide it towards the object. The hand module is independently capable of natural grasping, and visual feedback is used for motion guidance. The implementation shown in this paper is limited to monocular vision and two-dimensional hand movement as a proof of concept. Using this scheme, we demonstrate a scenario in which the DHM successfully guides the hand and points it to a given object. The presented model thus establishes vision as a guiding agent for hand-reach simulations. It can be used for planning and placing workspace objects to enhance human task performance.
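The grid-based guidance loop the abstract describes (project hand and object onto a grid, compute their relative positions in grid cells, and feed a direction vector back to the hand) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the cell size, step length, and function names are assumptions introduced here.

```python
import numpy as np

def grid_cell(point, cell_size=1.0):
    """Map a 2-D workspace point to its grid-cell index (illustrative quantization)."""
    return np.floor(np.asarray(point, dtype=float) / cell_size).astype(int)

def guidance_vector(hand_pos, object_pos, cell_size=1.0):
    """Unit direction from the hand's cell toward the object's cell.

    Mimics the vision module's feedback: positions are compared in grid
    cells rather than raw coordinates, as in a coarse retinal projection.
    """
    delta = grid_cell(object_pos, cell_size) - grid_cell(hand_pos, cell_size)
    norm = np.linalg.norm(delta)
    if norm == 0:
        return np.zeros(2)  # hand already occupies the object's cell
    return delta / norm

# Feedback loop: step the hand along the vision-derived direction
# until both occupy the same grid cell (assumed step size 0.5).
hand = np.array([0.2, 0.3])
obj = np.array([5.6, 4.1])
step = 0.5
for _ in range(100):
    d = guidance_vector(hand, obj)
    if not d.any():
        break  # reach achieved
    hand = hand + step * d
```

After the loop, the hand's grid cell coincides with the object's, which is the stopping condition the vision feedback provides; the hand module's own grasping behaviour would take over from there.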
