Abstract

In "human teleoperation" (HT), mixed reality (MR) and haptics are used to tightly couple an expert leader to a human follower [1]. To determine the feasibility of HT for teleultrasound, we quantified the ability of humans to track a position and/or force trajectory via MR cues. The human response time, precision, frequency response, and step response were characterized, and several rendering methods were compared. Volunteers (n=11) performed a series of tasks as the follower in our HT system. The tasks involved tracking pre-recorded series of motions and forces while pose and force were recorded. The volunteers then performed frequency response tests and filled out a questionnaire. Following force and pose simultaneously was more difficult than following one at a time, but did not lead to significant performance degradation. On average, subjects tracked positions, orientations, and forces with RMS tracking errors of [Formula: see text] mm, [Formula: see text], and [Formula: see text] N, steady-state errors of [Formula: see text] mm and [Formula: see text] N, and lags of [Formula: see text] ms, respectively. Performance decreased with input frequency, depending on the input amplitude. Teleoperating a person through MR is a novel concept with many possible applications. However, it is unknown what performance is achievable and which approaches work best. This paper thus characterizes human tracking ability in MR HT for teleultrasound, which is important for designing future tightly coupled guidance and training systems using MR.
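The tracking metrics reported above (RMS error and lag) can be computed from synchronized leader and follower traces. A minimal sketch is given below; the function names, sampling parameters, and cross-correlation-based lag estimate are our illustrative assumptions, not the paper's actual analysis pipeline:

```python
import numpy as np

def rms_error(leader, follower):
    """Root-mean-square tracking error between two equal-length traces
    (e.g. position in mm or force in N)."""
    leader = np.asarray(leader, dtype=float)
    follower = np.asarray(follower, dtype=float)
    return float(np.sqrt(np.mean((follower - leader) ** 2)))

def lag_ms(leader, follower, dt_ms):
    """Estimate follower lag in ms as the offset of the peak of the
    cross-correlation between the (mean-removed) traces."""
    leader = np.asarray(leader, dtype=float)
    follower = np.asarray(follower, dtype=float)
    leader = leader - leader.mean()
    follower = follower - follower.mean()
    corr = np.correlate(follower, leader, mode="full")
    # Index (len(leader) - 1) corresponds to zero lag; positive
    # offsets mean the follower trails the leader.
    offset = int(np.argmax(corr)) - (len(leader) - 1)
    return float(offset * dt_ms)

# Example with a synthetic pulse: follower repeats the leader's motion
# 10 samples late at a (hypothetical) 200 Hz sample rate (dt = 5 ms).
t = np.arange(200)
leader_pos = np.exp(-0.5 * ((t - 50) / 5.0) ** 2)
follower_pos = np.exp(-0.5 * ((t - 60) / 5.0) ** 2)
print(rms_error(leader_pos, follower_pos))  # RMS error in trace units
print(lag_ms(leader_pos, follower_pos, dt_ms=5.0))
```

The same two functions apply unchanged to the orientation and force channels; only the units of the input traces differ.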
