Abstract

We live in a digital world that, in 2010, crossed the one-zettabyte mark of data. Processing this enormous volume of data on computers, extremely fast and with optimized techniques, makes it possible to find insights in new and emerging types of data and content and to answer questions previously considered beyond reach. This is the idea of Big Data. Google now offers Google Correlate, a free public analysis tool that, given a search term or a series of temporal or regional data, returns a list of Google queries whose frequencies follow the patterns that best correlate with the data, according to the coefficient of determination R², the square of the Pearson correlation coefficient. Of course, "correlation does not imply causation." We believe, however, that these Big Data tools have the potential to uncover unexpected correlations that may serve as clues to interesting phenomena, from a pedagogical and even a scientific point of view. As far as we know, this is the first proposal for the use of Big Data in Science Teaching. It has a constructionist character, taking as mediators the computer and free public tools such as Google Correlate. It also has an epistemological bias: rather than mere training in computational infrastructure or predictive analytics, it aims to give students a better understanding of physical concepts such as phenomena, observation, measurement, physical laws, theory, and causality. With it, they could become good Big Data specialists, the much-needed "data scientists" able to meet the challenges of Big Data.
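As a minimal illustration of the ranking criterion described above (not Google Correlate's actual implementation), the Python sketch below computes the Pearson correlation coefficient r between a user-supplied time series and a few candidate query-frequency series, and ranks the candidates by R² = r². All series names and numbers are hypothetical, invented purely for the example.

# Minimal sketch: rank candidate search-query time series by how well they
# correlate with a target series. Data below are illustrative, not real
# Google query frequencies.
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(np.corrcoef(x, y)[0, 1])

# Hypothetical target: e.g. weekly counts of some observed phenomenon.
target = [12, 15, 20, 33, 41, 38, 25, 18, 14, 11]

# Hypothetical normalized query-frequency series (same length as target).
queries = {
    "query_a": [10, 14, 19, 30, 43, 40, 27, 17, 13, 10],
    "query_b": [40, 35, 30, 25, 22, 20, 24, 29, 33, 38],
    "query_c": [5, 6, 5, 7, 6, 5, 6, 7, 5, 6],
}

# Rank queries by R^2 = r^2, the coefficient of determination.
ranking = sorted(
    ((name, pearson_r(target, series) ** 2) for name, series in queries.items()),
    key=lambda item: item[1],
    reverse=True,
)

for name, r2 in ranking:
    print(f"{name}: R^2 = {r2:.3f}")

The real service searches over an enormous database of query series rather than three hand-picked candidates; the sketch only shows the correlation-and-rank principle that the abstract refers to.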
