Purpose
Estimating the pose – position and orientation – of a moving object such as a robot is a necessary task for many applications, e.g., robot navigation control, environment mapping, and medical applications such as robotic surgery. The purpose of this paper is to introduce a novel method that fuses the information from several available sensors in order to improve on the pose estimated from any individual sensor and compute a more accurate pose for the moving platform.

Design/methodology/approach
Pose estimation is usually done by collecting the data obtained from several sensors mounted on the object/platform and fusing the acquired information. Assuming that the robot is moving in a three-dimensional (3D) world, its location is completely defined by six degrees of freedom (6DOF): three angles and three position coordinates. Some 3D sensors, such as inertial measurement units (IMUs) and cameras, have been widely used for 3D localization. Other sensors, such as 2D Light Detection and Ranging (LiDAR), can give a very precise estimate within a 2D plane, but they are typically not employed for 3D estimation because they cannot observe the full 6DOF. However, in some applications the robot moves almost on a plane for a considerable portion of the interval between two sensor readings, e.g., a ground vehicle moving on a flat surface or a drone flying at a nearly constant altitude to collect visual data. In this paper, a novel method based on a fuzzy inference system is proposed that employs a 2D LiDAR in a 3D localization algorithm to improve pose estimation accuracy.

Findings
The method evaluates the trajectory of the robot and the reliability of the 2D sensor between two readings and, based on this information, defines the weight of the 2D sensor in the final fused pose by adjusting the parameters of an extended Kalman filter (EKF). Simulation and real-world experiments show that the pose estimation error can be significantly decreased using the proposed method.

Originality/value
To the best of the authors' knowledge, this is the first time that a 2D LiDAR has been employed to improve 3D pose estimation in an unknown environment without any prior knowledge.
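To illustrate the weighting idea described in the Findings, the minimal sketch below shows one hypothetical way a planarity-based weight could scale the 2D LiDAR measurement covariance inside a standard EKF update. The planarity score, the breakpoints, the function names and the 6DOF state layout are assumptions made for illustration; they are not taken from the paper, whose fuzzy inference system would replace the hand-coded score with fuzzy membership functions and rules.

```python
# Illustrative sketch only (not the authors' implementation): a hypothetical
# planarity-based weight that scales the 2D LiDAR noise covariance in an EKF update.
import numpy as np

def planarity_weight(roll_rate, pitch_rate, dz):
    """Map out-of-plane motion between two readings to a weight in [0, 1].

    Small roll/pitch rates and a small altitude change mean the motion is
    nearly planar, so the 2D LiDAR estimate is trusted more (weight near 1).
    The breakpoints (0.2 rad/s, 0.05 m) are hypothetical.
    """
    planar_score = max(0.0, 1.0 - (abs(roll_rate) + abs(pitch_rate)) / 0.2)
    altitude_score = max(0.0, 1.0 - abs(dz) / 0.05)
    return planar_score * altitude_score

def ekf_update_2d_lidar(x, P, z, H, R_base, weight, eps=1e-3):
    """Standard EKF measurement update; the LiDAR covariance is inflated
    when the motion is far from planar (low weight), reducing its influence."""
    R = R_base / max(weight, eps)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# Minimal usage example with an assumed 6DOF state [x, y, z, roll, pitch, yaw]
x = np.zeros(6)
P = np.eye(6) * 0.1
H = np.zeros((3, 6)); H[0, 0] = H[1, 1] = H[2, 5] = 1.0   # LiDAR observes x, y, yaw
z = np.array([0.10, -0.02, 0.01])                          # planar pose from scan matching
w = planarity_weight(roll_rate=0.01, pitch_rate=0.02, dz=0.005)
x, P = ekf_update_2d_lidar(x, P, z, H, np.eye(3) * 0.01, w)
print(x)
```

Inflating the measurement covariance as the motion departs from the plane is one simple way to realize a continuous sensor weight; the actual rule base and membership functions used by the proposed fuzzy inference system are described in the paper itself.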