The inception of Linked Data [2] around 2006 led to a realignment of the Semantic Web vision and the realization that data is not merely a way to evaluate our theoretical considerations, but a key research enabler in its own right that inspires novel theoretical and foundational research questions. Since then, Linked Data has been growing rapidly and is altering research, government, and industry. Simply put, Linked Data takes the World Wide Web’s ideas of global identifiers and links and applies them to (raw) data, not just documents. Moreover, and as regularly highlighted by Tim Berners-Lee, Anybody can say Anything about Any topic (AAA) [1], which leads to a multi-thematic, multi-perspective, and multi-medial global data graph. More recently, Big Data has made its appearance in the shared mindset of researchers, practitioners, and funding agencies, driven by the awareness that concerted efforts are needed to address 21st-century data collection, analysis, management, ownership, and privacy issues. While there is no generally agreed understanding of what exactly Big Data is (or, more importantly, what it is not), an increasing number of V’s has been used to characterize its different dimensions and challenges: volume, velocity, variety, value, and veracity. Interestingly, different (scientific) disciplines highlight certain dimensions and neglect others. For instance, supercomputing seems to be mostly interested in the volume dimension, while researchers working on sensor webs and the Internet of Things seem to push on the velocity front. The social sciences and humanities, in contrast, are more interested in value and veracity. As argued before [13,17], the variety dimension seems to be the most intriguing one for the Semantic Web and the one where we can contribute.