Abstract

Tactile paving plays a crucial role in the mobility of visually impaired people, helping them find the way forward. Identifying the regions and trends of tactile paving is therefore highly meaningful for supporting the independent walking of the visually impaired. Visual segmentation technology shows potential for segmenting tactile paving regions, and the shapes of these regions can be used to further infer the road trend. To effectively improve the accuracy and robustness of tactile paving segmentation, this work proposes a novel tactile paving segmentation method that combines a UNet network with multi-scale feature extraction. A group receptive field block (GRFB) is embedded into the basic UNet network to obtain multi-scale receptive fields for tactile paving. To enhance computational efficiency, a group convolution strategy is adopted within the GRFB module. Meanwhile, a small-scale convolution follows each group convolution to achieve cross-channel information interaction and integration, extracting richer high-level features. We construct a dataset of tactile paving in various scenarios and label it for experimental evaluation. Furthermore, a detailed comparative analysis against typical networks and structural modules is presented. The experimental results show that the proposed network achieves the best overall performance among the compared networks on tactile paving segmentation, providing a valuable reference for this task.
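To make the described design concrete, the following is a minimal PyTorch sketch of a multi-scale block in the spirit of the GRFB summarized above: grouped convolutions capture features at several receptive-field scales, and a 1x1 convolution after each grouped convolution mixes information across channel groups. The branch count, kernel sizes, dilation rates, group numbers, and the residual fusion are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn


class GRFBBranch(nn.Module):
    """One branch of a sketched group receptive field block.

    A grouped 3x3 convolution with a given dilation enlarges the
    receptive field at low cost; the following 1x1 convolution mixes
    information across channel groups (cross-channel interaction).
    All hyperparameters here are placeholder assumptions.
    """

    def __init__(self, channels, dilation, groups=4):
        super().__init__()
        padding = dilation  # keeps spatial size for a 3x3 kernel
        self.block = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=padding,
                      dilation=dilation, groups=groups, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            # small-scale (1x1) convolution for cross-group integration
            nn.Conv2d(channels, channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.block(x)


class GRFBSketch(nn.Module):
    """Concatenates features from branches with different receptive
    fields and fuses them with a 1x1 convolution plus a residual path."""

    def __init__(self, channels, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            GRFBBranch(channels, dilation=d) for d in dilations
        )
        self.fuse = nn.Conv2d(channels * len(dilations), channels, kernel_size=1)

    def forward(self, x):
        multi_scale = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.fuse(multi_scale) + x  # residual connection (assumption)


if __name__ == "__main__":
    # Example: a feature map from a UNet encoder stage passes through the block.
    feats = torch.randn(1, 64, 128, 128)
    out = GRFBSketch(channels=64)(feats)
    print(out.shape)  # torch.Size([1, 64, 128, 128])
```

Such a block would typically replace or follow the standard double-convolution unit at one or more UNet encoder stages, which is one plausible reading of "embedding GRFB into the basic UNet network."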
