Recent decades have seen great innovation in computer vision, and the field has lately become fundamental to the development of autonomous navigation systems. Modern assistive technologies, such as smart wheelchairs, could employ autonomous navigation to assist users during operation. A prerequisite for such systems is recognising navigable space in real time. This research features an off-the-shelf powered wheelchair customised into an intelligent robot that perceives its environment using Point Cloud Semantic Segmentation (PCSS). The implemented algorithm distinguishes, in real time, between two labelled classes: traversable and non-traversable space. Detection accuracy was 99.64% for traversable space and 91.79% for non-traversable space. The performance of the suggested method was invariant to changes in wheelchair velocity, indicating that the algorithm's latency is within tolerable limits for real-time operation.
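To make the two-class labelling concrete, the sketch below assigns each 3D point to a traversable or non-traversable class. This is a minimal illustrative heuristic (a height band around an assumed flat ground plane), not the paper's PCSS model; the sensor height, threshold, and function names are assumptions introduced here.

```python
import numpy as np

# Class indices for the two labelled classes described in the text.
TRAVERSABLE, NON_TRAVERSABLE = 0, 1

def segment_traversable(points, sensor_height=0.5, ground_band=0.05):
    """Label each point as traversable ground or non-traversable space.

    points: (N, 3) array of (x, y, z) coordinates in the sensor frame,
    with z pointing up. A point counts as traversable if it lies within
    `ground_band` metres of the assumed ground plane z = -sensor_height;
    everything else is non-traversable. All thresholds are illustrative
    stand-ins for a learned PCSS classifier.
    """
    ground_z = -sensor_height
    near_ground = np.abs(points[:, 2] - ground_z) <= ground_band
    return np.where(near_ground, TRAVERSABLE, NON_TRAVERSABLE)

# Example: two points on the floor and one raised obstacle point.
pts = np.array([[1.0, 0.0, -0.50],
                [2.0, 0.5, -0.48],
                [1.5, 0.0,  0.30]])
print(segment_traversable(pts).tolist())  # [0, 0, 1]
```

In a deployed system the per-point labels would feed the navigation stack, which restricts planned paths to regions labelled traversable.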