Abstract

Detection of the drivable road area and other critical objects, such as obstacles and landmarks, in traffic scenes is fundamental to advanced driver assistance systems (ADAS) and self-driving cars. Although scene parsing can segment the road area from other objects and the background, it generally does not recognize the markings painted on the road. In fact, detecting and classifying the road area is only a small step towards true autonomous driving, because the road area embodies many categories of informative markings, such as lane markings, arrows, guiding lines, pedestrian crosswalks, and no-vehicle signs. If a system identifies these markings, it can provide richer information to both ADAS and self-driving systems. For this purpose, we release a benchmark dataset named TRoM (Tsinghua Road Marking), which serves the detection of 19 road-marking categories in urban scenarios. TRoM was built from data collected over more than one month, covering a full spectrum of time, weather, and traffic load. We also present an annotation toolkit to facilitate enriching the dataset. By directly applying our state-of-the-art method, RPP (ResNet with Pyramid Pooling), we make public a reasonably accurate baseline on the TRoM benchmark for further performance comparison and evaluation.
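The abstract names RPP (ResNet with Pyramid Pooling) as the baseline method. As a rough illustration of what such an architecture can look like, the following is a minimal PSPNet-style pyramid pooling head on top of a ResNet-50 backbone in PyTorch; the choice of ResNet-50, the bin sizes, channel widths, and class count (19 marking categories plus background) are assumptions for illustration, not details taken from the paper.

```python
# Hedged sketch of a "ResNet with pyramid pooling" segmentation model.
# Hyper-parameters (bins, channels, 19 + background classes) are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet50


class PyramidPoolingHead(nn.Module):
    def __init__(self, in_channels=2048, bins=(1, 2, 3, 6), num_classes=20):
        super().__init__()
        branch_channels = in_channels // len(bins)
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.AdaptiveAvgPool2d(b),                     # pool to a b x b grid
                nn.Conv2d(in_channels, branch_channels, 1),  # reduce channels
                nn.ReLU(inplace=True),
            )
            for b in bins
        ])
        self.classifier = nn.Conv2d(
            in_channels + branch_channels * len(bins), num_classes, 1
        )

    def forward(self, feats):
        h, w = feats.shape[-2:]
        # Upsample each pooled branch back to the feature resolution and concatenate.
        pooled = [
            F.interpolate(branch(feats), size=(h, w),
                          mode="bilinear", align_corners=False)
            for branch in self.branches
        ]
        return self.classifier(torch.cat([feats] + pooled, dim=1))


# ResNet-50 backbone with the final average pooling and fc layers removed,
# feeding 2048-channel features into the pyramid pooling head.
backbone = nn.Sequential(*list(resnet50(weights=None).children())[:-2])
head = PyramidPoolingHead()
logits = head(backbone(torch.randn(1, 3, 512, 512)))  # -> (1, 20, 16, 16)
```

In this sketch the per-pixel class logits are produced at 1/32 of the input resolution and would be upsampled to full resolution for training and evaluation; the actual RPP design in the paper may differ.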
