Abstract

The growing use of crowdsourced geographic information (CGI) has prompted the development of several methods for assessing information quality, aimed at addressing concerns about the quality of information provided by non-experts. In this work, we propose a taxonomy of methods for assessing the quality of CGI when no reference data are available, which is likely to be the most common situation in practice. Our taxonomy includes 11 quality assessment methods that were identified by means of a systematic literature review. These methods are described in detail, including their main characteristics and limitations. This taxonomy not only provides a systematic and comprehensive account of the existing set of methods for CGI quality assessment, but also enables researchers working on the quality of CGI in various sources (e.g., social media, crowd sensing, collaborative mapping) to learn from each other, thus opening up avenues for future work that combines and extends existing methods into new application areas and domains.

Highlights

  • The use of crowdsourced geographic information (CGI) has grown in the past few years, owing to a number of key features

  • We propose a taxonomy of methods for assessing the quality of CGI when no reference data are available, which is likely to be the most common situation in practice

  • This term has been used as a broader replacement for volunteered geographic information (VGI) (Goodchild, 2007), since the term “volunteered” does not seem appropriate to refer to information that is collected without the will or conscious knowledge of the provider (Harvey, 2013)


Summary

INTRODUCTION

The use of crowdsourced geographic information (CGI) has grown in the past few years, owing to a number of key features (e.g., it is free, up to date, and provided by many volunteers). Several researchers have investigated approaches to assessing the quality of CGI, so that there is currently a large number of methods for this task (e.g., Foody et al., 2013; Girres & Touya, 2010; Senaratne, Bröring, & Schreck, 2013). These methods differ with regard to the type of information evaluated and the types of reference data used, among other factors. In conflation tasks, volunteers analyze and interpret geographic information from multiple sources, conflating them to find matching features/objects and produce new geographic information (e.g., detecting changes in geographic objects) (Anhorn, Herfort, & Albuquerque, 2016). These types of activity are referred to in various ways in the literature, and it is not our intention here to be exhaustive (for a discussion on this, see See et al., 2016). This involves first checking the quality of CGI and, later, filtering it.

RELATED WORKS
METHODOLOGY
DISCUSSION
CONCLUSIONS
