Effective and accurate modeling of human performance is a key technology in virtual/smart manufacturing systems, but acquiring sufficient data for such modeling remains a significant challenge. Virtual Reality (VR) offers a promising solution by making human manufacturing experiments more practical and accessible. In this paper, we present a novel framework that efficiently models human assembly duration by leveraging VR to prototype data-acquisition systems for assembly tasks. Central to the framework is an active learning model that selects the experimental conditions expected to yield the most informative results, reducing the number of experiments required; as a result, the system demands fewer experimental trials and operates automatically. In VR experiments involving throughput rate, the active model significantly reduced the data requirement, expediting both experimentation and modeling. While the framework demonstrates remarkable efficiency, it is sensitive to non-constant noise and may require prior data from similar assembly tasks to identify high-noise conditions. Notably, the proposed method extends beyond manufacturing, enabling the rapid generation of human performance models in virtual systems and improving experiment scalability across various fields. Our framework thus represents a promising avenue for advancing virtual/smart manufacturing systems and other related applications.
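The active-learning loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the VR trial is simulated by a hypothetical `run_vr_trial` function, and the acquisition rule shown here picks the untried condition farthest from any measured one as a simple proxy for "most informative" (the actual framework may use a model-based criterion such as predictive variance instead).

```python
import math
import random

def run_vr_trial(condition):
    # Stand-in for one VR assembly experiment: returns a noisy task
    # duration in seconds (a real system would measure a participant).
    random.seed(condition)
    return 10.0 + 2.0 * math.sin(condition) + random.gauss(0.0, 0.1)

def select_next_condition(candidates, tried):
    # Simple acquisition rule: choose the untried condition farthest
    # from every condition already measured, i.e. where the model is
    # least informed. (An actual active learner might maximize the
    # predictive variance of a surrogate model instead.)
    untried = [c for c in candidates if c not in tried]
    return max(untried, key=lambda c: min(abs(c - t) for t in tried))

# 31 candidate experimental conditions (e.g. throughput rates);
# the active loop samples only 7 of them instead of all 31.
candidates = [i * 0.1 for i in range(31)]
tried = [candidates[0], candidates[-1]]          # seed with the extremes
durations = {c: run_vr_trial(c) for c in tried}

for _ in range(5):
    nxt = select_next_condition(candidates, tried)
    tried.append(nxt)
    durations[nxt] = run_vr_trial(nxt)

print(len(tried))  # 7 conditions measured out of 31 candidates
```

The loop would terminate in practice when the duration model's cross-validated error stops improving, rather than after a fixed number of iterations.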