Abstract

SmarterRoutes aims to improve navigational services and make them more dynamic and personalised through data-driven, environmentally aware road scene complexity estimation. SmarterRoutes divides complexity into two subtypes: perceived and descriptive complexity. In the SmarterRoutes architecture, overall road scene complexity is indicated by combining and merging parameters from both types of complexity. Descriptive complexity is derived from geospatial data sources, traffic data and sensor analysis. The architecture currently uses OpenStreetMap (OSM) tag analysis, traffic information derived from Meten-In-Vlaanderen (MIV) and the Alaro weather model of the Royal Meteorological Institute of Belgium (RMI) as descriptive complexity indicators. For perceived complexity, an image-based complexity estimation mechanism is presented. This image-based DenseNet Convolutional Neural Network (CNN) uses Street View images as input and was pretrained on buildings with Bag-of-Words and Structure-from-Motion features. The model calculates an image descriptor that allows images to be compared by computing the Euclidean distances between their descriptors. SmarterRoutes extends this model with additional hand-labelled rankings of road scene images to predict visual road complexity. Reusing an existing pretrained model with an additional ranking mechanism produces results that correspond with the subjective assessments of end-users. Finally, the global complexity mechanism combines the aforementioned sub-mechanisms into a service that should facilitate user-centred, context-aware navigation through intelligent data selection and/or omission based on SmarterRoutes' complexity input.
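The descriptor comparison described in the abstract can be sketched as follows. This is a minimal illustration of comparing images via Euclidean distances between descriptor vectors; the function names and the toy descriptors are assumptions, not the paper's implementation.

```python
import numpy as np

def descriptor_distance(desc_a, desc_b):
    """Euclidean distance between two image descriptors.

    Images whose descriptors lie close together are treated as
    visually similar road scenes (hypothetical helper, for illustration).
    """
    desc_a = np.asarray(desc_a, dtype=float)
    desc_b = np.asarray(desc_b, dtype=float)
    return float(np.linalg.norm(desc_a - desc_b))

def rank_by_similarity(query, gallery):
    """Order gallery descriptors from most to least similar to the query."""
    distances = [descriptor_distance(query, g) for g in gallery]
    return sorted(range(len(gallery)), key=lambda i: distances[i])
```

In this framing, estimating the perceived complexity of a new Street View image amounts to comparing its descriptor against descriptors of scenes with known complexity rankings.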

Highlights

  • As roads get busier and our living areas more densely populated, driving has become challenging, especially in cognitively demanding circumstances such as complex junctions or traffic congestion

  • Another contributing factor towards user-centred, context-adaptive navigation is the inclusion of user preferences and customisation. Michon and Denis (2001) performed a user study to get an idea of how users would formulate navigation instructions after they were shown the way from a starting point to a destination in Paris

  • Background knowledge about the type of environment provides us with an idea of a road scene’s complexity. To mimic this natural behaviour, the proposed complexity measurement mechanism uses transfer learning on Convolutional Neural Networks (CNNs), with human judgements of the visually perceived risk of a road scene as training labels
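The ranking mechanism built on human judgements can be illustrated with a simplified sketch: a linear scorer trained on pairwise "scene i looks more complex than scene j" labels. The linear model and hinge-style update stand in for the CNN fine-tuning described in the paper; all names and hyperparameters here are assumptions.

```python
import numpy as np

def train_pairwise_ranker(features, pairs, lr=0.1, epochs=200):
    """Fit a linear complexity scorer s(x) = w . x from pairwise labels.

    `pairs` lists (i, j), meaning scene i was ranked more complex than
    scene j by human annotators.  A hinge-style update pushes the
    scores of violated pairs apart (illustrative stand-in for the
    paper's CNN-based ranking).
    """
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=features.shape[1])
    for _ in range(epochs):
        for i, j in pairs:
            margin = features[i] @ w - features[j] @ w
            if margin < 1.0:  # pair ordered wrongly or too close: adjust
                w += lr * (features[i] - features[j])
    return w
```

After training, `features @ w` yields complexity scores whose ordering agrees with the hand-labelled rankings, which is the behaviour the highlight describes for the image model.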


Summary

Complexity driven route suggestion

As roads get busier and our living areas more densely populated, driving has become challenging, especially in cognitively demanding circumstances such as complex junctions or traffic congestion. In those highly critical situations, parking aid and emergency calls are just a handful of the numerous efforts that have been made to improve the driving experience. Along with the aforementioned assisting technologies, an optimal route with appropriate, well-timed navigation instructions can contribute to a user-centred, context-aware driving experience. Sladewski et al. (2017) implemented a route planning layout based on weights originating from ranking the road turns and their accompanying complexity.
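Complexity-driven route suggestion of this kind can be sketched as a shortest-path search whose edge costs penalise complex segments. The cost formula `length * (1 + alpha * complexity)` and the graph encoding below are illustrative assumptions, not the weighting used by Sladewski et al. or SmarterRoutes.

```python
import heapq

def complexity_aware_route(graph, start, goal, alpha=1.0):
    """Dijkstra search with edge cost length * (1 + alpha * complexity).

    `graph` maps node -> list of (neighbour, length, complexity in [0, 1]).
    `alpha` tunes how strongly complex road segments are penalised:
    alpha = 0 reduces to plain shortest-path routing.
    """
    frontier = [(0.0, start, [start])]
    seen = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, length, cplx in graph.get(node, []):
            if nxt not in seen:
                step = length * (1 + alpha * cplx)
                heapq.heappush(frontier, (cost + step, nxt, path + [nxt]))
    return float('inf'), []
```

With a high `alpha`, a slightly longer but calmer route wins over a short route through a complex junction, which is the trade-off this section motivates.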

User driving preferences
What is complexity?
Level-of-detail driven data management
Scoring an environment’s complexity
The influence of user characteristics on complexity
Conclusion

