Abstract

This work investigates whether estimates of ease of part handling and part insertion can be provided by multimodal simulation using virtual environment (VE) technology, rather than by conventional table-based methods such as Boothroyd and Dewhurst charts. To do this, a unified physically based model has been developed for modeling dynamic interactions among virtual objects and haptic interactions between the human designer and the virtual objects. This model is augmented with auditory events in a multimodal VE system called the "Virtual Environment for Design for Assembly" (VEDA). Currently these models are 2D in order to preserve interactive update rates, but we expect that these results will generalize to 3D models. VEDA has been used to evaluate the feasibility and advantages of using multimodal virtual environments as a design tool for manual assembly. The designer sees a visual representation of the objects and can interactively sense and manipulate virtual objects through haptic interface devices with force feedback. He or she can feel these objects and hear sounds when there are collisions among the objects. Objects can be interactively grasped and assembled with other parts of the assembly to prototype new designs and perform Design for Assembly analysis. Experiments have been conducted with human subjects to investigate whether multimodal virtual environments are able to replicate experiments linking increases in assembly time with increases in task difficulty. In particular, the effects of clearance, friction, chamfers, and distance of travel on handling and insertion time have been compared in real and virtual environments for a peg-in-hole assembly task. In addition, the effects of degrading or removing the different modes (visual, auditory, and haptic) on different phases of manual assembly have been examined.
