In this study, an autonomous vehicle capable of avoiding obstacles was developed by combining a stereo imaging system with artificial intelligence techniques. An integrated stereo camera module and an NVIDIA Jetson Nano developer kit served as the computer vision system. Checkerboard calibration was performed to correct camera distortions. The camera images were rectified, and matching costs between the left and right image pairs along the same epipolar lines were computed. These costs were refined with a weighted least squares (WLS) filter, producing a depth map referenced to the left camera image. The rectified left camera view was also processed by artificial-intelligence-based semantic segmentation, carried out with a previously trained network (SegNet). The segmentation outputs were passed through an HSV color mask to obtain a mask image, from which drivable ground, obstacle, and background information was extracted. The depth map and the semantic segmentation output of the same frame were then analyzed jointly. The resulting information was transmitted to a two-wheeled, ROS-based vehicle that performs the motion, and decisions were made within the scope of the avoidance algorithm. This study's novel contribution is the integration of a passive depth sensing system and artificial-intelligence-based semantic segmentation, in tandem with a real-time obstacle avoidance algorithm that exploits both. Consequently, the autonomous vehicle is capable of making semantic inferences about its environment while effectively avoiding obstacles.