Abstract

Mobile human-computer interaction through wearable computing aims to support humans in their daily tasks [1, 2]. Wearable computers are easy to wear or can be incorporated into clothing. Compared to smartphones, they are intended to minimize cognitive effort and manual interaction. Another important feature of this technology is interaction with the environment through distributed sensors, known as context awareness, which provides the user with relevant environmental information such as location, activity, identity, time and temperature.

This research project, in its initial phase, has the objective of providing indoor localization for pedestrians equipped with wearable computers in known and unknown environments. This information is useful in many applications. At present there are various methods for indoor and outdoor localization, such as GPS, pre-installed indoor communication infrastructures, field-strength measurements (WLAN, GSM, Bluetooth, etc.), laser, radar, sonar, cameras and motion sensors. For precise indoor localization the best strategy is to use laser, cameras or motion sensors [3]. I intend to apply Simultaneous Localization and Mapping (SLAM) with a short-range laser scanner, as known from mobile robots [4]. Fusing the laser scanner data with data from accelerometers, a gyroscope and a magnetometer will increase the precision.

SLAM is a well-known problem in robotic map building and localization; it is largely solved, but will probably need some algorithmic improvements for this application. The most popular algorithms are based on the Extended Kalman Filter and Rao-Blackwellized particle filters [4]. Basically, a map and a position are estimated jointly, using the robot's odometry together with distances and directions to landmarks obtained from a laser scanner. Landmarks are features in the environment that can be used as references for measurements taken from different positions; in an indoor environment, landmarks could be lines, walls, corners, edges or more specific obstacles.

One objective is the implementation of SLAM for pedestrians based on [5], where the pedestrian was equipped with head-worn sensors. Pedestrians have a much more complex odometry than mobile robots; they differ in the types of movement and the degrees of freedom. On a mobile robot the laser scanner sits in a stable position relative to the ground; this cannot be guaranteed for humans. Furthermore, build and motion are specific to each person, so the challenge is to extract the odometry for each individual. To solve the odometry extraction, I intend to use inertial measurement units (IMUs) as motion sensors to identify walking and changes of direction while filtering out the noise caused by the pedestrian's irregular movements.

In this dissertation project the pedestrian will be equipped with a short-range laser scanner and an inertial measurement unit. The placement of the sensors is crucial to reduce noise and incorrect measurements. On a mobile robot the laser scanner is mounted on top of the platform and scans a horizontal plane. On the human body, the most stable positions for the sensors are the shoulders and the hips. To obtain horizontal laser scans, the raw data must be processed with the IMU data and projected onto the horizontal plane, as sketched below. Additionally, to reduce false laser scan readings, the scanner will be stabilized with a servo motor so that its measurements are always taken horizontally.
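As an illustration of the projection step, the following is a minimal sketch. It assumes a 2D scanner delivering ranges and bearings and an IMU delivering roll and pitch estimates in radians; the function name, frame conventions and the out-of-plane rejection threshold are illustrative assumptions, not details taken from the cited works.

```python
import numpy as np

def project_scan_to_horizontal(ranges, bearings, roll, pitch, max_z=0.3):
    """Level a 2D laser scan using the IMU roll/pitch estimate.

    ranges, bearings : scan distances [m] and beam angles [rad]
    roll, pitch      : IMU attitude estimates [rad] (yaw is left to SLAM)
    max_z            : discard points whose out-of-plane offset exceeds this [m]
    """
    # Scan points in the (tilted) scanner frame; the scanner measures in a plane, so z = 0.
    pts = np.stack([ranges * np.cos(bearings),
                    ranges * np.sin(bearings),
                    np.zeros_like(ranges)])

    # Rotation matrices for roll (about x) and pitch (about y).
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])

    # Undo the body tilt, then keep the horizontal components.
    leveled = Ry @ Rx @ pts
    keep = np.abs(leveled[2]) < max_z          # reject strongly out-of-plane returns
    return leveled[0, keep], leveled[1, keep]  # horizontal x, y coordinates
```

Yaw is deliberately omitted here, since the pedestrian's heading is part of what the SLAM filter itself estimates.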
Mapping will be achieved with occupancy grid mapping, as sketched below. All of the data should be processed on a wearable computer; the pedestrian should not have to interact with it or enter any kind of pre-existing knowledge. I intend to achieve precise pedestrian SLAM in real time, in an environment with people and moving objects, for a specific application.
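For the mapping step, the following is a minimal sketch of a standard log-odds occupancy grid update, assuming leveled scan endpoints already expressed in world coordinates; the grid size, resolution and log-odds increments are illustrative values, not parameters of the project.

```python
import numpy as np

class OccupancyGrid:
    """Minimal log-odds occupancy grid (parameters are illustrative)."""

    def __init__(self, size_m=50.0, resolution=0.1, l_occ=0.85, l_free=-0.4):
        self.res = resolution
        self.n = int(size_m / resolution)
        self.log_odds = np.zeros((self.n, self.n))   # prior p(occupied) = 0.5
        self.l_occ, self.l_free = l_occ, l_free

    def _cell(self, x, y):
        # World coordinates [m] -> grid indices, origin at the grid centre.
        return int(round(x / self.res)) + self.n // 2, int(round(y / self.res)) + self.n // 2

    def _in_bounds(self, cx, cy):
        return 0 <= cx < self.n and 0 <= cy < self.n

    def update(self, pose_xy, endpoints_xy):
        """Integrate one leveled scan taken from world position pose_xy."""
        px, py = pose_xy
        for ex, ey in endpoints_xy:
            dist = np.hypot(ex - px, ey - py)
            steps = max(int(dist / self.res), 1)
            # Cells traversed by the beam are observed free ...
            for t in np.linspace(0.0, 1.0, steps, endpoint=False):
                cx, cy = self._cell(px + t * (ex - px), py + t * (ey - py))
                if self._in_bounds(cx, cy):
                    self.log_odds[cx, cy] += self.l_free
            # ... and the cell containing the endpoint is observed occupied.
            hx, hy = self._cell(ex, ey)
            if self._in_bounds(hx, hy):
                self.log_odds[hx, hy] += self.l_occ

    def probability(self):
        # Convert accumulated log-odds back to occupancy probabilities.
        return 1.0 - 1.0 / (1.0 + np.exp(self.log_odds))
```

The log-odds form keeps each cell update to a simple addition, which matters when the map must be maintained in real time on a wearable computer.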
