Abstract

Background: Prosthetic hand users have to rely extensively on visual feedback to manipulate their prosthetic devices, which seems to impose a high conscious burden. Indirect methods (electro-cutaneous, vibrotactile, and auditory cues) have been used to convey information from the artificial limb to the amputee, but the usability and advantages of these feedback methods have been explored mainly through performance results, without taking into account measurements of the user's mental effort, attention, and emotions. The main objective of this study was to explore the feasibility of using psycho-physiological measurements to assess cognitive effort when manipulating a robot hand with and without a sensory substitution system based on auditory feedback, and to examine how these psycho-physiological recordings relate to temporal and grasping performance in a static setting.

Methods: Ten male subjects (26 +/- years old) participated in this study and were asked to come on two consecutive days. On the first day, the experiment's objective, tasks, and setting were explained, after which the subjects completed a 30-minute guided training session. On the second day, each subject was tested in three different modalities: Auditory Feedback only control (AF), Visual Feedback only control (VF), and Audiovisual Feedback control (AVF). For each modality they performed 10 trials. At the end of each test, the subject answered the NASA TLX questionnaire. During the test, the subject's EEG, ECG, electro-dermal activity (EDA), and respiration rate were measured.

Results: The results show that a higher mental effort is needed when the subjects rely only on their vision, and that this effort seems to be reduced when auditory feedback is added to the human-machine interaction (multimodal feedback). Furthermore, better temporal and grasping performance was obtained in the audiovisual modality.

Conclusions: The performance improvements when using auditory cues along with vision (multimodal feedback) can be attributed to a reduced attentional demand during the task, which in turn may be attributed to a visual "pop-out" or enhancement effect. The NASA TLX, the EEG alpha and beta bands, and the heart rate could be used to further evaluate sensory feedback systems in prosthetic applications.
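The EEG-based indices named in the conclusions (alpha and beta band activity) are typically derived from band power estimates of the recorded signal. As a minimal illustration of this kind of computation, the sketch below estimates alpha (8-13 Hz) and beta (13-30 Hz) band power from a single EEG channel using Welch's method; the sampling rate, the placeholder signal, and the exact band limits are conventional assumptions for illustration, not values taken from the paper.

```python
# Hypothetical sketch: alpha/beta band power from one EEG channel.
# Assumes a 1-D NumPy array sampled at `fs` Hz; the band limits
# (8-13 Hz alpha, 13-30 Hz beta) are conventional, not from the paper.
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

def band_power(signal, fs, band):
    """Integrate the Welch power spectral density over a frequency band."""
    freqs, psd = welch(signal, fs=fs, nperseg=2 * fs)  # ~0.5 Hz resolution
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return trapezoid(psd[mask], freqs[mask])

fs = 256                           # assumed sampling rate (Hz)
eeg = np.random.randn(60 * fs)     # placeholder for one minute of EEG data

alpha = band_power(eeg, fs, (8.0, 13.0))
beta = band_power(eeg, fs, (13.0, 30.0))
print(f"alpha: {alpha:.3f}, beta: {beta:.3f}, beta/alpha: {beta / alpha:.3f}")
```

In practice these band powers would be computed per modality and per subject, then compared across the AF, VF, and AVF conditions as workload indices.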

Highlights

  • Prosthetic hand users have to rely extensively on visual feedback to manipulate their prosthetic devices, which seems to impose a high conscious burden

  • A significant effect was found between modalities, F(1.4, 105) = 4.947, p

  • Post hoc tests (see the analysis sketch following this list) showed no significant difference between the AF and AVF modalities (p=0.478), but a significant difference between the AF and VF modalities (p
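The fractional degrees of freedom in the F statistic above suggest a sphericity correction (e.g., Greenhouse-Geisser) applied to a repeated-measures ANOVA. As a hedged illustration of how such a comparison across the AF, VF, and AVF modalities might be run, the sketch below uses the pingouin library (pg.rm_anova, pg.pairwise_tests) on a long-format table; the column names and all data values are invented for illustration, not the study's actual dataset.

```python
# Hypothetical sketch: repeated-measures ANOVA across feedback modalities
# with sphericity correction and Bonferroni-adjusted post hoc tests.
# The data frame layout, column names, and scores are illustrative only.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
subjects = range(1, 11)                  # 10 subjects, as in the study
modalities = ["AF", "VF", "AVF"]

# Long format: one row per subject x modality x trial (fabricated values).
rows = [
    {"subject": s, "modality": m, "score": rng.normal(50, 10)}
    for s in subjects for m in modalities for _ in range(10)
]
df = pd.DataFrame(rows)

# Collapse trials to one mean score per subject and modality.
cell = df.groupby(["subject", "modality"], as_index=False)["score"].mean()

# correction=True reports Greenhouse-Geisser corrected degrees of freedom,
# which is one way fractional df such as F(1.4, ...) can arise.
aov = pg.rm_anova(data=cell, dv="score", within="modality",
                  subject="subject", correction=True)
print(aov)

# Pairwise post hoc comparisons (AF vs VF, AF vs AVF, VF vs AVF).
post = pg.pairwise_tests(data=cell, dv="score", within="modality",
                         subject="subject", padjust="bonf")
print(post[["A", "B", "p-corr"]])
```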



Introduction

It is well known that upper limb amputees have to rely extensively on visual feedback in order to successfully monitor and manipulate their prosthetic device. This situation seems to lead to a high conscious burden for the users, which generates fatigue and frustration [1,2]. This lack of sensory feedback is a major drawback that many research groups have tried to address. In one such study, the motor-sensory areas of the lost limb in a subject's brain were activated when the subject grabbed an object with a prosthetic hand while looking at the action and feeling the electrical stimulation. This result gives good insight into the brain's plasticity, but the authors did not address the mental load due to the multimodal information display. It is therefore reasonable to ask how the presentation of multimodal information affects the robot hand user: is the mental workload increasing or decreasing?
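One way this study quantifies that workload is the NASA TLX questionnaire mentioned in the abstract. As a minimal sketch of how an overall workload score can be derived from that instrument, the code below implements the standard weighted NASA TLX procedure (six subscales rated 0-100, weighted by 15 pairwise comparisons); the ratings and weights shown are invented for illustration, not the study's data.

```python
# Minimal sketch of the standard weighted NASA TLX score.
# Six subscales are rated 0-100; the weights come from 15 pairwise
# comparisons and must sum to 15. All values below are invented.

SUBSCALES = ("mental", "physical", "temporal", "performance",
             "effort", "frustration")

def nasa_tlx(ratings: dict, weights: dict) -> float:
    """Overall workload: weighted mean of subscale ratings (0-100)."""
    if set(ratings) != set(SUBSCALES) or set(weights) != set(SUBSCALES):
        raise ValueError("ratings and weights must cover all six subscales")
    if sum(weights.values()) != 15:
        raise ValueError("weights from 15 pairwise comparisons must sum to 15")
    return sum(ratings[s] * weights[s] for s in SUBSCALES) / 15.0

# Example: hypothetical responses after one test modality.
ratings = {"mental": 70, "physical": 20, "temporal": 55,
           "performance": 40, "effort": 65, "frustration": 35}
weights = {"mental": 5, "physical": 1, "temporal": 3,
           "performance": 2, "effort": 3, "frustration": 1}

print(f"Overall NASA TLX workload: {nasa_tlx(ratings, weights):.1f} / 100")
```

A lower overall score after adding auditory feedback would be consistent with the reduced mental effort reported in the Results.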

