Abstract

When disaster strikes in urban areas, the result is often collapsed structures that contain voids, with people trapped within. To a large extent, the speed with which these victims can be found and extricated determines their likelihood of survival. Specially trained and equipped emergency first responders are tasked with locating and extricating trapped victims from these dangerous environments. Telepresence systems can help first responders search for casualties from a safe location. Most automated search systems intended for use in urban disasters come in the form of remotely operated robots. This work takes a different approach to telepresence and robotics. It extends previous work that exploits the intelligence and characteristics of trained search dogs, combined with compatible technology, as components in new kinds of telepresence systems for urban search and rescue (USAR) operations. The Canine Remote Deployment System (CRDS) is a tool that emergency responders can use, with the help of dogs, to deliver critical supplies to victims trapped in rubble. The first contribution of this work is a bark detection system that automatically triggers deployment of packages from the CRDS near trapped victims, guaranteeing accurate package deployment even when remote communication with the dog is impossible. A well-known problem for ground robots is the difficulty of designing a mobility mechanism that can traverse rubble. Another contribution of this thesis is the Canine Assisted Robot Deployment (CARD) framework and the design of a robot capable of being carried by a search dog. This work extends the responder's telepresence in rescue operations by bringing robots much deeper into the disaster site than current methods allow. Visual odometry is used for location tracking in GPS-denied environments and can be applied in rescue operations.
This research explores the limitations of RGB-D cameras for visual odometry in this application. An algorithm called the pseudo-Random Interest Points Extractor was developed to track images over visually feature-sparse areas, with the potential use of visually reconstructing canine search paths to victims. This work concentrates on visual odometry using data collected from a search dog-mounted RGB-D camera. Model stabilization is difficult due to the dog's constant and unpredictable movements, as the data contains many motion-blurred images. An algorithm called the Intelligent Frame Selector is shown to improve visual odometry for systems carried by search dogs by intelligently filtering the data and selecting only usable frames. The algorithm can be applied beneficially to any general visual odometry pipeline, as the technique reduces cumulative error by using less data.
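The abstract does not specify how the Intelligent Frame Selector decides which frames are usable. A common sharpness heuristic for rejecting motion-blurred frames is the variance of the image's Laplacian response; the following is a minimal sketch of that idea, assuming grayscale frames as NumPy arrays and a hypothetical sharpness threshold (the thesis's actual criteria may differ):

```python
import numpy as np

def laplacian_variance(gray):
    """Variance of a 4-neighbour discrete Laplacian; low values
    suggest a blurred frame with little high-frequency detail."""
    lap = (-4.0 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

def select_frames(frames, threshold=50.0):
    """Keep only frames sharp enough for reliable feature tracking.
    The threshold is illustrative and would need tuning per camera."""
    return [f for f in frames if laplacian_variance(f) >= threshold]
```

Discarding blurred frames before the odometry stage means fewer (but higher-quality) pose updates, which is consistent with the abstract's observation that using less data reduces cumulative error.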

Highlights

  • "Telepresence" is a set of technologies that allow a person to feel they are present at a remote location and provide them with the ability to affect that environment [1]

  • An important impetus for conducting these tests was to confirm that the equipment does not alter the search characteristics of Urban Search and Rescue (USAR) dogs

  • The tests confirmed that the Canine Remote Deployment System (CRDS) did not hinder the dogs and provided them with a new capability: automatically delivering supplies to victims


Summary

Introduction

“Telepresence” is a set of technologies that allow a person to feel they are present at a remote location and provide them with the ability to affect that environment [1]. This technology is used in a plethora of applications. Complex applications include remote robotic surgery, where a surgeon can operate on a patient without being in the same location by using sophisticated robotics. In these applications, the technology is used in a well-known and structured environment where communications are generally assured and the environment is controlled. We present a method for extending the reach of robots through dogs.

