Abstract

Multi-modal haptic rendering is an important research direction for improving realism in haptic rendering: it produces diverse mechanical stimuli that render multiple perceptions, such as hardness and roughness. This paper proposes a multi-modal haptic rendering method based on a genetic algorithm (GA), which generates the force and vibration stimuli of haptic actuators according to the user's target hardness and roughness. The work uses a back-propagation (BP) neural network to implement the perception model f, which establishes the mapping I = f(G) from objective stimulus features G to perception intensities I. The perception model is used to design the fitness function of the GA, with physically achievable constraints imposed during fitness evaluation; in this way, the GA transforms the perception model into a force/vibration control model. Finally, we conducted realism evaluation experiments comparing real and virtual samples under single-mode and multi-modal haptic rendering, in which subjects rated realism from 0 to 100. The average score was 70.86 for multi-modal haptic rendering, compared with 57.81 for hardness rendering alone and 50.23 for roughness rendering alone, showing that multi-modal haptic rendering is more realistic than either single mode. The method can be extended to render objects along more perceptual dimensions, beyond hardness and roughness, and has significant implications for multi-modal haptic rendering.
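
The sketch below illustrates the pipeline the abstract describes: a perception model f maps stimulus features G to perception intensities I, and a GA searches for the stimulus features whose predicted intensities best match a target, under actuator bounds. This is a minimal illustration, not the paper's implementation: the analytic `perception_model` stands in for the trained BP network, and the feature names, bounds, and GA parameters are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stimulus-feature bounds G = (force amplitude, vibration
# amplitude, vibration frequency). These placeholder ranges play the role of
# the paper's "physically achievable constraints" on the actuators.
BOUNDS = np.array([[0.0, 5.0],      # force amplitude [N]
                   [0.0, 1.0],      # vibration amplitude (normalized)
                   [20.0, 400.0]])  # vibration frequency [Hz]

def perception_model(G):
    """Stand-in for the trained BP network f: features G -> intensities
    I = (hardness, roughness). The real model is learned from psychophysical
    data; this analytic form is purely illustrative."""
    force, amp, freq = G
    hardness = 100.0 * np.tanh(0.5 * force)                # driven mainly by force
    roughness = 100.0 * np.tanh(2.0 * amp * freq / 400.0)  # driven by vibration
    return np.array([hardness, roughness])

def fitness(G, target):
    """Negative distance between predicted and target perception intensities.
    Candidates are clipped to BOUNDS, so constraints hold by construction."""
    return -np.linalg.norm(perception_model(G) - target)

def solve_stimuli(target, pop_size=60, generations=200, p_mut=0.2):
    """Simple real-coded GA: tournament selection, blend crossover,
    Gaussian mutation, with clipping to the achievable stimulus range."""
    lo, hi = BOUNDS[:, 0], BOUNDS[:, 1]
    pop = rng.uniform(lo, hi, size=(pop_size, 3))
    for _ in range(generations):
        fit = np.array([fitness(ind, target) for ind in pop])
        # Binary tournament selection of parents.
        idx = rng.integers(pop_size, size=(pop_size, 2))
        winners = np.where(fit[idx[:, 0]] > fit[idx[:, 1]], idx[:, 0], idx[:, 1])
        parents = pop[winners]
        # Blend crossover between randomly paired parents.
        partners = parents[rng.permutation(pop_size)]
        alpha = rng.uniform(size=(pop_size, 1))
        children = alpha * parents + (1.0 - alpha) * partners
        # Gaussian mutation, then clip back into the achievable range.
        mask = rng.uniform(size=children.shape) < p_mut
        children += mask * rng.normal(0.0, 0.05 * (hi - lo), size=children.shape)
        pop = np.clip(children, lo, hi)
    fit = np.array([fitness(ind, target) for ind in pop])
    return pop[np.argmax(fit)]

if __name__ == "__main__":
    target = np.array([70.0, 40.0])  # desired (hardness, roughness) intensities
    G_best = solve_stimuli(target)
    print("stimuli:", G_best, "-> perceived:", perception_model(G_best))
```

Once the GA converges, `G_best` would be sent to the force and vibration actuators as their control commands; in the paper's terms, this search is what turns the perception model into a force/vibration control model.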
