Abstract
Some blind humans have demonstrated the ability to detect and classify objects by echolocation using palatal clicks. An audible sonar robot mimics human click emissions, binaural hearing, and movement to extract target echo envelopes, illustrating the capabilities and limitations of target mapping. Targets of various complexity are examined through transverse displacements and rotations of the sonar and target to model movements performed by the blind. Controlled sonar movements executed by a robot platform model the proprioceptive information available to blind humans when examining targets from various aspects. The audible sonar uses this sonar location and orientation information to form target maps similar to diagnostic ultrasound brightness-mode (B-scan) images that map internal organs. Monaural images produced by each ear are processed to form binaural product and difference images. Simple targets, such as cylindrical and square posts, produce single echoes that form distinguishable and recognizable images. A two-post target configured to produce multiple echoes forms an image that displays interference effects and must be interpreted. Superimposed B-scan images from the four sides of a target form a complete map that contains sufficient information to differentiate the targets.
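The binaural product and difference images described above can be sketched as elementwise operations on the two monaural B-scan intensity maps. This is a minimal illustration, not the paper's implementation: the image dimensions, normalization, and use of random data are assumptions made here for demonstration.

```python
import numpy as np

# Hypothetical monaural B-scan envelope images: rows are scan positions,
# columns are range bins, values normalized to [0, 1]. Random data stands
# in for real echo envelopes.
rng = np.random.default_rng(0)
left = rng.random((64, 256))   # left-ear monaural image (assumed shape)
right = rng.random((64, 256))  # right-ear monaural image (assumed shape)

# Product image: emphasizes echoes present in BOTH ear channels,
# suppressing features seen by only one ear.
product = left * right

# Difference image: emphasizes interaural asymmetries, i.e. features
# that differ between the two ear channels.
difference = np.abs(left - right)

print(product.shape, difference.shape)
```

A pixelwise product rewards agreement between the two channels (both values must be large), while the absolute difference highlights disagreement; together they separate binaurally consistent echoes from ear-specific ones.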