Abstract

Background

The next generation of prosthetic limbs will restore sensory feedback to the nervous system by mimicking how skin mechanoreceptors, innervated by afferents, produce trains of action potentials in response to compressive stimuli. Prior work has addressed building sensors within skin substitutes for robotics, modeling skin mechanics and the neural dynamics of mechanotransduction, and predicting the response timing of action potentials for vibration. The effort here is unique because it accounts for skin elasticity by measuring force within simulated skin, utilizes few free model parameters for parsimony, and separates parameter fitting from model validation. Additionally, the ramp-and-hold, sustained stimuli used in this work capture the essential features of the everyday task of contacting and holding an object.

Methods

This systems integration effort computationally replicates the neural firing behavior of a slowly adapting type I (SAI) afferent in its temporally varying response to both the intensity and rate of indentation force by combining a physical force sensor, housed in a skin-like substrate, with a mathematical model of neuronal spiking, the leaky integrate-and-fire. Comparison experiments were then conducted using ramp-and-hold stimuli on both the spiking-sensor model and mouse SAI afferents. The model parameters were iteratively fit against recorded SAI interspike intervals (ISI) before validating the model to assess its performance.

Results

Model-predicted spike firing compares favorably with that observed for single SAI afferents. As indentation magnitude increases (1.2, 1.3, to 1.4 mm), mean ISI decreases from 98.81 ± 24.73 ms to 54.52 ± 6.94 ms to 41.11 ± 6.11 ms. Moreover, as the rate of ramp-up increases, ISI during ramp-up decreases from 21.85 ± 5.33 ms to 19.98 ± 3.10 ms to 15.42 ± 2.41 ms. Considering first spikes, the predicted latencies exhibited a decreasing trend as stimulus rate increased, as is observed in afferent recordings. Finally, the SAI afferent’s characteristic response of producing irregular ISIs is shown to be controllable by manipulating the output filtering from the sensor or adding stochastic noise.

Conclusions

This integrated engineering approach extends prior work focused upon neural dynamics and vibration. Future efforts will refine measures of performance, such as first spike latency and irregular ISIs, and link the generation of characteristic features within trains of action potentials with current pulse waveforms that stimulate single action potentials at the peripheral afferent.
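The sensor-to-spiking pipeline described above can be sketched as a minimal leaky integrate-and-fire simulation driven by a ramp-and-hold force signal. The parameter values, units, and force-to-current mapping below are illustrative assumptions for the sketch, not the paper's fitted model:

```python
import numpy as np

def ramp_and_hold(peak, t_ramp, t_hold, dt=1e-3):
    """Ramp-and-hold force profile: linear rise to `peak`, then a sustained hold."""
    ramp = np.linspace(0.0, peak, int(t_ramp / dt))
    hold = np.full(int(t_hold / dt), peak)
    return np.concatenate([ramp, hold])

def lif_spike_times(force, dt=1e-3, tau=0.02, C=1.0, v_th=1.0,
                    k1=100.0, k2=0.5):
    """Leaky integrate-and-fire driven by force and force rate (illustrative units)."""
    dforce = np.gradient(force, dt)                # rate-of-indentation term
    v, spikes = 0.0, []
    for i in range(len(force)):
        current = k1 * force[i] + k2 * dforce[i]   # assumed force-to-current mapping
        v += dt * (-v / tau + current / C)         # leaky integration step
        if v >= v_th:
            spikes.append(i * dt)                  # record spike time
            v = 0.0                                # reset membrane potential
    return spikes
```

With these toy parameters, a deeper indentation (larger peak force) drives the membrane to threshold faster, so the mean interspike interval shrinks, mirroring the trend reported in the Results.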

Highlights

  • Our sense of touch helps us perform activities of daily living, such as grasping a glass, discerning the structure of a coin, and buttoning a shirt

  • We demonstrated that response surface methodology (RSM) is sensitive to starting conditions, as shown by the different paths taken by the two fitting sessions

  • The first session ended with a parameter set that produced a worse fit (FSS = 0.829, with parameter values 4.33E-08 mA, 5.74E-07 mA/N, 1.01E-03 mA·s/N, 71.592 ms, 1.01E-06 mF, 50.723 mV) than that produced by the second session’s set of parameters (FSS = 0.936, with parameter values 2.72E-08 mA, 6.20E-07 mA/N, 2.71E04 mA·s/N, 71.409 ms, 9.70E-07 mF, 47.300 mV)



Introduction

Our sense of touch helps us perform activities of daily living, such as grasping a glass, discerning the structure of a coin, and buttoning a shirt. Completing these tasks proves difficult for the 541,000 U.S. citizens living with upper limb loss [1]. The next generation of prosthetic limbs will restore sensory feedback to the nervous system by mimicking how skin mechanoreceptors, innervated by afferents, produce trains of action potentials in response to compressive stimuli. Our work focuses, in particular, on those essential features captured by the slowly adapting type I (SAI) afferent in its response to the everyday task of contacting and holding an object. The ramp-and-hold, sustained stimuli used in this work capture the essential features of that task.

