The Playfair enigma

Abstract

The invention of statistical graphics is generally, if inaccurately, attributed to William Playfair. His initial innovation, along with his subsequent invention of most of the major repertoire of statistical graphics, is in many ways an enigma of the history of science: (1) Given their apparent obviousness, why had these graphic forms not been previously used for plotting statistics? (2) Why was the Cartesian coordinate system, during a century and a half from its invention, not regularly applied to the kinds of data which Playfair plotted? (3) Why were the symbolic schematics used by Playfair apparently understood by contemporaries without need for prior learning of his 'conventions'? (4) Why did serious scholarly attention to Playfair's innovations occur earlier on the continent than in England? (5) Why subsequently have there been waves of popularity and of neglect of Playfair's forms? (6) Why were statistical graphics invented by a political pamphleteer and business adventurer rather than a scholar or scientist? (7) Why did statistical graphics develop first for social data applications rather than for natural or physical science purposes? Addressing these questions may shed light on developments in the schematic representation of statistics from the beginnings of cultural numeracy to the present day. The primary explanations of the enigma are: (1) the similarities and differences between the purely empirical data graph and diagrammatic representations of pure or applied mathematical functions; and (2) the association of the utility of pure data graphs with a statistical orientation toward phenomena. Playfair's innovations were facilitated by his association with science during a time when science was particularly hospitable to highly pragmatic endeavors. His innovations were also facilitated by his marginality with regard to the science of his contemporaries.

Similar Papers
  • Conference Article
  • Cited by: 4
  • 10.1109/iccme.2011.5876712
The impact of different forms of statistical information on reading efficiency, effect, and mental workload: An eye-tracking study
  • May 1, 2011
  • Ning Zhong + 3 more

Statistical information can generally be presented visually in three basic forms: statistical information as text ('text' for short), a statistical graph, and a statistical graph with text (i.e., a combination of graph and text). The impact of these different forms on people's understanding of statistical information is still unclear. To address this issue, this eye-tracking study investigated reading efficiency (reading time), reading effect (accuracy), and mental workload (pupil diameter) while 36 subjects read statistical information presented in each of the three forms. The results showed that: (1) reading time for the statistical graph was significantly shorter than for the text and the statistical graph with text, which suggests that the reading efficiency of the statistical graph is higher than that of the other forms; (2) accuracy showed no significant differences among the three forms, which suggests no significant difference in reading effect; (3) pupil diameter for both the statistical graph and the statistical graph with text was significantly smaller than for the text, which reveals that the mental workload of these two forms is significantly lower than that of the text alone. The findings also provide evidence for why people prefer statistical information presented in the form of a statistical graph with text.

  • Book Chapter
  • Cited by: 2
  • 10.1007/978-3-319-68935-7_48
A Pay as You Use Resource Security Provision Approach Based on Data Graph, Information Graph and Knowledge Graph
  • Jan 1, 2017
  • Lixu Shao + 4 more

With the development of data mining technology, the lack of protection for private resources has become a serious challenge. We propose to clarify the expression of the Knowledge Graph in three layers, comprising the Data Graph, Information Graph, and Knowledge Graph, and illustrate the representation of each. We elaborate a pay-as-you-use resource security provision approach based on the Data Graph, Information Graph, and Knowledge Graph in order to ensure that resources will not be used, tampered with, lost, or destroyed in unauthorized situations.

  • Book Chapter
  • Cited by: 1
  • 10.1007/978-3-319-58628-1_13
The Analysis and Prediction of Eye Gaze When Viewing Statistical Graphs
  • Jan 1, 2017
  • Andre Harrison + 8 more

Statistical graphs are images that display quantitative information in a visual format that allows for the easy and consistent interpretation of the information. Often, statistical graphs are in the form of line graphs or bar graphs. In fields, such as cybersecurity, sets of statistical graphs are used to present complex information; however, the interpretation of these more complex graphs is often not obvious. Unless the viewer has been trained to understand each graph used, the interpretation of the data may be limited or incomplete [1]. In order to study the perception of statistical graphs, we tracked users’ eyes while studying simple statistical graphs. Participants studied a graph, and later viewed a graph purporting to be a subset of the data. They were asked to look for a substantive change in the meaning of the second graph compared to the first.

  • Book Chapter
  • Cited by: 2
  • 10.1007/978-3-540-33037-0_29
Web-Based Statistical Graphics using XML Technologies
  • Jan 1, 2008
  • Yoshiro Yamamoto + 2 more

Most statistical graphics on the Web are static, non-interactive, and non-dynamic, even though other statistical analysis systems usually provide various interactive statistical graphics. Interactive and dynamic graphics, see Symanzik (2004), can be implemented using Internet technologies such as Java or Flash (Adobe, 2007). Scalable Vector Graphics (SVG) and Extensible 3D (X3D) offer alternative means of realizing an XML-based graphics format. One advantage of using XML is that data from a wide range of research topics are easy to deal with, because they are all presented in the XML format. Another advantage is that XML is a text-based graphics format, i.e., it is scriptable, meaning that it can be generated dynamically by a statistical analysis system or web application. Before introducing XML-based graphics, we introduce the relationship between the Web, XML, and statistical graphics.
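The chapter's point that an XML graphics format is text-based and scriptable can be sketched in a few lines. The following Python snippet is an illustration assumed by this summary, not code from the chapter: it assembles a minimal SVG bar chart as a string, the way a statistical web application might generate one dynamically. The data values and dimensions are hypothetical.

```python
# Minimal sketch: generate an SVG bar chart as XML text.
# All values and chart dimensions are made-up examples.
import xml.etree.ElementTree as ET

def bar_chart_svg(values, width=300, height=150):
    """Return an SVG bar chart (as a string) for a list of numeric values."""
    svg = ET.Element("svg", xmlns="http://www.w3.org/2000/svg",
                     width=str(width), height=str(height))
    bar_w = width / len(values)
    peak = max(values)
    for i, v in enumerate(values):
        bar_h = v / peak * height  # scale bar to the tallest value
        ET.SubElement(svg, "rect",
                      x=str(round(i * bar_w, 2)),
                      y=str(round(height - bar_h, 2)),
                      width=str(round(bar_w * 0.8, 2)),
                      height=str(round(bar_h, 2)),
                      fill="steelblue")
    return ET.tostring(svg, encoding="unicode")

print(bar_chart_svg([3, 7, 5]))
```

Because the output is plain text, it can be written into any web page or served directly by a statistics backend, which is the scriptability advantage the chapter highlights.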

  • Research Article
  • 10.23939/sisn2024.15.341
Distributed Data Analysis in Cloud Services for Insurance Companies
  • Jul 15, 2024
  • Vìsnik Nacìonalʹnogo unìversitetu "Lʹvìvsʹka polìtehnìka". Serìâ Ìnformacìjnì sistemi ta merežì
  • Oleksandr Lutsenko + 1 more

This article embarks on an insightful journey through the realm of advanced data analysis techniques that can be used in the insurance area, with a keen focus on the applications and capabilities of Graph Neural Networks (GNNs) in this sector. The article is structured into several chapters, which include an overview of existing and commonly used approaches to data representation, possible ways of analyzing data in such representations, a deep dive into the concept of GNNs for graph data analysis, and the applicability of each approach in the insurance industry. The initial chapter introduces the two main concepts of data representation: the commonly used relational database and the more modern approach of dimensional data design. The focus then moves to the graph data representation, which can also be used for data analysis in the cloud environment. To achieve the best applicability in the insurance industry, particularly in underwriting and claims management, the article analyzes the advantages of each approach to data representation as well as its drawbacks. To conclude the chapter, a comparison table of the three approaches is presented. Based on this comparison, the decision to use the graph representation is made, as it enables the industry to unravel complex relationships and dependencies among various data points, such as policyholder history, incident particulars, and third-party information, resulting in more accurate risk assessments and efficient claim resolutions. The article then presents the concept of Graph Neural Networks, a rather new approach that can be used to analyze data represented in graph form using machine learning algorithms. The potential of this approach for data analysis in the insurance area and some possible use cases are described.
The advantages of this approach include the ability to effectively capture and leverage the complex relationships inherent in graph-structured data and a powerful framework for analyzing and processing such data. However, potential drawbacks of the approach, such as design complexity and difficulties in scaling, are also considered. Further along, the article probes the strategic integration of Graph Neural Networks with real-time and dynamic data environments, examining their adaptability to evolving network patterns and temporal dependencies. We discuss how this adaptability is paramount in contexts like real-time decision-making and predictive analysis, which are crucial for staying agile in a rapidly changing market landscape. Specific use cases of GNN applicability in the insurance area, including claim assignment and the underwriting process, are then described in detail. Furthermore, a simplified mathematical formulation of the underwriting process is provided, which elaborates the role GNNs play in propelling actuarial science with their capability to incorporate node attributes, edge information, and graph structure into a composite risk assessment algorithm. The article concludes by noting that, with these new technologies, the graph representation may become the new standard for data analysis in the cloud environment, especially in the insurance area, stressing the pivotal role of GNNs in navigating the complexities of interconnected, dynamic data and advocating for their continued research and development to unlock even greater potential across various sectors.

  • Book Chapter
  • 10.4018/978-1-61350-053-8.ch007
Labelling-Scheme-Based Subgraph Query Processing on Graph Data
  • Jan 1, 2011
  • Hongzhi Wang + 2 more

When data are modeled as graphs, many research issues arise. In particular, there are many new challenges in query processing on graph data. This chapter studies the problem of structural queries on graph data. A hash-based structural join algorithm, HGJoin, is first proposed to handle reachability queries on graph data. Then, it is extended to the algorithms to process structural queries in form of bipartite graphs. Finally, based on these algorithms, a strategy to process subgraph queries in form of general DAGs is proposed. It is notable that all the algorithms above can be slightly modified to process structural queries in form of general graphs.
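As a generic illustration of the reachability queries the chapter addresses (not the authors' hash-based HGJoin algorithm itself, whose details are not given in this summary), a minimal sketch in Python:

```python
# Plain depth-first reachability check on a directed graph.
# The example graph is hypothetical, not taken from the chapter.
def reachable(graph, src, dst):
    """Return True if dst can be reached from src in a directed graph
    given as an adjacency dict {node: [successors]}."""
    stack, seen = [src], set()
    while stack:
        node = stack.pop()
        if node == dst:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(graph.get(node, []))
    return False

dag = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}
print(reachable(dag, "a", "d"))  # True
print(reachable(dag, "d", "a"))  # False
```

A structural join such as HGJoin answers many such reachability pairs at once; the per-pair check above is only the semantics being computed, not the optimized algorithm.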

  • Book Chapter
  • 10.1007/978-3-030-00916-8_30
Learning Planning and Recommendation Based on an Adaptive Architecture on Data Graph, Information Graph and Knowledge Graph
  • Jan 1, 2018
  • Lixu Shao + 4 more

With the massive learning resources containing data, information, and knowledge on the Internet, users easily get lost and confused in the process of learning. Automatic processing, automatic synthesis, and automatic analysis of natural language, the original representation of these data, information, and knowledge resources, have become a huge challenge. We propose a three-layer architecture comprising a Data Graph, Information Graph, and Knowledge Graph which can automatically abstract and adjust resources. This architecture recursively supports the integration of empirical knowledge and efficient automatic semantic analysis of resource elements through frequency-focused profiling on the Data Graph and optimal search through abstraction on the Information Graph and Knowledge Graph. Our proposed architecture is supported by the 5W (Who/When/Where, What and How) to interface with users' learning needs, learning processes, and learning objectives, and can provide users with personalized learning service recommendations.

  • Conference Article
  • Cited by: 10
  • 10.1109/sera.2017.7965749
Bidirectional value driven design between economical planning and technical implementation based on data graph, information graph and knowledge graph
  • Jun 1, 2017
  • Lixu Shao + 5 more

Value-Driven Design enables rational decisions to be made about the optimum business and technical solution at every level of engineering design by employing economics in decision making. In order to maximize business profitability, we propose to bridge bidirectional value-driven design between economic planning and technology implementation on the basis of the data graph, information graph, and knowledge graph. We use the data graph, information graph, and knowledge graph to analyze problems that have a negative impact on software development activities, including requirement analysis, summary design, and detailed design. We propose to improve system reliability and robustness by managing data and information reuse, redundancy, and structure.

  • Research Article
  • 10.51358/id.v16i2.728
Revival and Transition: Evolving Roles and Various Forms of Informative Graphics
  • Aug 19, 2019
  • InfoDesign - Revista Brasileira de Design da Informação
  • Tingyi S Lin

Currently, informative graphics have attracted considerable attention for their efficient and effective methods of communication. The ways in which people receive information strongly influence their ability not only to comprehend the information but also to capitalize on its purported benefits. Information-design approaches can facilitate the creation of many good-quality designs. By understanding both the history of information design and pertinent case studies, we can familiarize ourselves with information-design methods and applications. In this article, I discuss the evolution and various forms of informative graphics on the basis of historical, content-oriented, and phenomenological analyses. I investigate the roles of visual representations and identify the strengths and deficiencies of visual communication for information design. Historical evidence, such as Playfair's graphic charts and his method, Wunderlich's clinical graphics, Nightingale's statistical graphs, and Snow's dot maps, shows that appropriate visual formats can depict data and that these depictions can demonstrate effects over time. The graphical method can also aid in the measurement of small time intervals between biological effects. Multiple variables enhance the exploratory power of graphics in relation to analysis and discovery. Narrative graphics associate the spatial dimension with time-series displays to depict space and time. Although many disciplines acknowledge "the importance of communication," the general public in Taiwan has only recently begun to acknowledge the importance of informative graphics. We should reflect now and then on the evolution and various forms of informative graphics.

  • Research Article
  • Cited by: 3
  • 10.1007/bf01900314
Video-graphic query facility for database retrieval
  • May 1, 1986
  • The Visual Computer
  • Nancy H Mcdonald

The goal of this project was to develop a prototype to demonstrate the use of video and graphic techniques applied to the human-machine interface for data retrieval from a typical computerized database. Data is presented to a user via video and graphic means; queries are formulated in one of several graphic formats; control operations are handled through joystick, touch panel, or single-keystroke maneuvers. To accomplish this, we made use of videodisc, interactive computer graphics, and relational database technologies. Still pictures, video segments, and pictures of text are used as visual cues to a user who indicates interest in a data item in a pointing gesture by touching the panel through which the item may be seen. The user may find the actual data item s/he desires, then pose a query for additional information in one of four graphic query formats. A specially designed database was developed to handle the video and graphic data needed for this user facility.

  • Research Article
  • Cited by: 33
  • 10.1002/wics.145
Parallel coordinate and parallel coordinate density plots
  • Feb 18, 2011
  • WIREs Computational Statistics
  • Rida E Moustafa

The parallel coordinate plot (PCP), which represents a p-dimensional data point in Cartesian coordinates by a polyline (or curve) intercepting p parallel axes, is a viable tool for hyperdimensional data visualization. It enables the human visual system to spot informative patterns in complex data and gain a better understanding of the underlying geometry of hyperdimensional objects. Correlated records, conceptual clusters, and outliers are easy to discern with the PCP. The parallel coordinate density plot integrates the PCP with density estimation techniques to visualize concentrated information instead of the profiles themselves, thus mitigating the visual cluttering burden the plot incurs for datasets of a few thousand records. In this article, we give an overview of the PCP, its generalizations, the use of orthogonal bases to smooth the system, and density estimation techniques to overcome the visual cluttering limitations inherent in the plot. We discuss the duality theorem and its usability in identifying patterns visually or by automatic means. We discuss the effect of scaling the data and the profiles. We provide some visualization examples on different datasets. WIREs Comp Stat 2011, 3, 134–148. DOI: 10.1002/wics.145
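The PCP mapping described above, one polyline per p-dimensional point crossing p parallel axes, can be sketched without any plotting library. The axis spacing, min-max scaling, and sample data below are arbitrary illustrative choices, not taken from the article:

```python
# Compute the polyline vertices of one p-dimensional point on p
# parallel vertical axes, with each coordinate min-max scaled to [0, 1].
def pcp_polyline(point, mins, maxs, axis_gap=1.0):
    """Return (x, y) polyline vertices for one p-dimensional data point."""
    verts = []
    for i, (v, lo, hi) in enumerate(zip(point, mins, maxs)):
        y = (v - lo) / (hi - lo) if hi > lo else 0.5  # degenerate axis -> mid
        verts.append((i * axis_gap, y))
    return verts

# Three axes; column ranges computed over a tiny made-up dataset.
data = [(2.0, 10.0, 0.0), (4.0, 30.0, 5.0), (3.0, 20.0, 10.0)]
mins = [min(c) for c in zip(*data)]
maxs = [max(c) for c in zip(*data)]
print(pcp_polyline(data[0], mins, maxs))  # [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
```

Drawing each vertex list as a connected line (with any renderer) yields the PCP; a density variant would accumulate these polylines into a 2D histogram instead of drawing every profile.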

  • Research Article
  • 10.23977/jfmsr.2021.010120
What affects the decomposition rate of fungi
  • Apr 19, 2021
  • Shuhan Cheng

The carbon cycle plays an important role in the biogeochemical cycle, and fungi play an extremely important part within it. Analyzing the changes in fungi during the decomposition of lignin and cellulose in soil can help us understand material cycles and energy flows, and can also provide a reference for the biological decomposition of waste straw and waste wood. First, we studied the rate at which fungi decompose cellulose and lignin. Based on power and exponential functions, combined with linear programming, we established a general model of the fungal decomposition rate. The general model considers the effects of fungal growth rate, the lignin content of the soil under initial conditions, temperature, humidity, and other factors. By extracting data from the relevant statistical graphs, we obtained the corresponding function expressions. Secondly, according to the statistical graphs given in the competition, we extracted the data in the graphs and obtained a model of fungal decomposition rate and growth rate. We also used the method of controlling variables to obtain analytical expressions for fungal growth rate and decomposition rate under different temperature conditions, and analyzed the total model established under different weather conditions. Thirdly, according to the data given in the competition, we established an exponential function model of fungal moisture tolerance and decomposition rate using a fitting method, and analyzed the error in establishing the model. Then, according to current academic research results, we selected three specific strains and used linear fitting to obtain the functional relationship of decomposition rate over time. By comparing the corresponding linear function coefficients, we analyzed the interactions between different strains.
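As a generic sketch of the kind of exponential fitting the paper describes (the data and parameters below are synthetic, not the paper's), a model y = a·exp(b·t) can be fit by ordinary least squares on log y, since log y = log a + b·t is linear in t:

```python
# Fit y = a * exp(b * t) by linear least squares on (t, log y).
# Synthetic noise-free data; nothing here comes from the paper.
import math

def fit_exponential(ts, ys):
    """Return (a, b) for the model y = a * exp(b * t)."""
    logs = [math.log(y) for y in ys]
    n = len(ts)
    t_mean = sum(ts) / n
    l_mean = sum(logs) / n
    b = sum((t - t_mean) * (l - l_mean) for t, l in zip(ts, logs)) / \
        sum((t - t_mean) ** 2 for t in ts)          # regression slope
    a = math.exp(l_mean - b * t_mean)               # back-transform intercept
    return a, b

ts = [0, 1, 2, 3, 4]
ys = [5 * math.exp(-0.3 * t) for t in ts]  # generated from a=5, b=-0.3
a, b = fit_exponential(ts, ys)
print(round(a, 3), round(b, 3))  # 5.0 -0.3
```

On noise-free data the regression recovers the generating parameters exactly; with graph-extracted data, as in the paper, the residuals of this fit would quantify the extraction and model error.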

  • Research Article
  • Cited by: 19
  • 10.1080/02693799608902075
Design of a view-based DSS for location planning
  • Mar 1, 1996
  • International journal of geographical information systems
  • Theo A Arentze + 2 more

The traditional approach to DSS falls short of providing a highly interactive problem-solving environment for planning. Often, cumbersome procedures are required to implement optional plans and obtain feedback information. In dynamic graphic systems, the user is able to view different linked graphic representations (e.g., spatial or statistical graphs) of statistical data and interact with these graphics (e.g., by selecting items). In this paper we describe the design of a DSS for planning facility locations, which uses principles of dynamic graphics to achieve a highly interactive problem-solving environment. As in dynamic graphic systems, the user interacts with the DSS through active and linked views. However, where views in dynamic graphics are different representations of a given dataset, the views in the DSS are active data structures describing the facility system to be planned from different perspectives. The declarative and procedural forms of knowledge involved are identified by a logical analysis of planning problems. A frame-based formalism is proposed to represent the knowledge contained in the views. The main advantage of this view-based approach is that it offers the user a highly flexible and interactive environment for performing 'what-if' analyses.

  • Research Article
  • 10.54216/ijns.240416
Neutrosophic Delphi method to analyze the impact of Internships on the comprehensive development of university students
  • Jan 1, 2024
  • International Journal of Neutrosophic Science
  • Nery Nery + 4 more

Internships play a crucial role in the comprehensive education of university students, as they provide practical experience and promote the development of technical and soft skills. These practices not only promote personal development but also ease the transition into the world of work. The study aims to use a Neutrosophic Delphi method to analyze the extent to which work practices influenced the comprehensive education of university students in Ecuador in 2023. A descriptive study was conducted with a sample of 410 students from academies and universities in Ecuador. This method uses structured surveys to collect qualitative and quantitative data about the experiences, advantages, and skills acquired during internships. The results are presented in the form of data tables and statistical graphics that illustrate the close connection between professional experience and the overall educational level of students. Emphasis was placed on acquiring skills such as teamwork, leadership, and problem-solving. In summary, internships are a valuable learning tool for university students, as they provide the opportunity to apply knowledge, develop skills, and improve employability.

  • Book Chapter
  • 10.4324/9781138609877-ree148-1
Data Representations and Visualizations in Educational Research
  • May 30, 2022
  • Ting Dai + 2 more

The graphical representation of quantitative information is not a modern development, but rather it can be traced back to the earliest map-making and, later, thematic cartography and statistical graphics (Friendly, 2008). The early 19th century witnessed the invention of all major forms of statistical graphics, including the ever so popular pie and bar charts, histograms, line graphs, and scatterplots. At this time, data from a wide variety of domains (e.g., economic, social, medical, physical) began to be depicted, and a wide range of novel techniques were used to facilitate data representation. At the same time, graphical analyses of natural and physical phenomena made regular appearances in scientific publications. In the second half of the 19th century, there was a rapid growth in the visualization of data: the importance of numerical information for public policy, industry, and health was acknowledged, and the various applications of statistical theory and methods made it easier to make sense of large bodies of data. This period has been referred to as "the Golden Age" of data visualization (Friendly, 2008, pp. 12–13). Another historically critical period of the development of data visualization is between 1950 and 1975 (Friendly, 2008). In this period, data analysis began being recognized as a distinct branch of statistics by the international research community and significant advances were made in the area of computer processing of statistical data, interactive statistical applications, and digital graphic technologies. Since the mid-1970s, data visualization has blossomed into a vibrant multi-disciplinary research area. It features characteristics such as highly interactive statistical computing systems, advanced visualizations of high-dimensional data, and substantially increased attention to the cognitive and perceptual aspects of data display.
Data representations and visualizations have also become commonplace in the applied work of a variety of professions. Scientists, for example, use data visualizations to make sense of trends within their research that employs mathematical and statistical models of phenomena and make such results understandable by others. Engineers use data representations to monitor environmental, commercial, and industrial processes. Historians and journalists also utilize data representations and visualizations to communicate information from a myriad of sources, including textual data. Finally, more recently, individuals, even students, have begun to use data representations and visualizations to understand aspects of their lives, such as their wellness and finances (Lee, Choe, Isenberg, Marriott, & Stasko, 2020). Not left behind in the data revolution are educational researchers, who use data representations and visualizations, which we use synonymously in this article, for many of the same reasons as other professionals and non-professionals: to understand and communicate results effectively. While many engage with data representations and visualizations, a focus on the effectiveness of their design has often been ignored (Wilkinson, 2005). However, after a period where data representations and visualizations were seen by statisticians as "a minor subfield and are not well-integrated with larger themes of modeling and inference" (Gelman & Unwin, 2013, p. 1), many professionals are beginning to take representation and visualization seriously. This is evidenced by the recent theoretical and practical work that is being done by the likes of Healy (2018), Wickham (2016), and Wilkinson (2005). Moreover, there is research and recent work in the broader fields of computer science, statistics, and sociology, to name a few, that can inform how we, as educational researchers, go about creating data representations and visualizations effectively.
Finally, as we begin considering effective ways to represent and visualize data, it is important to consider findings from educational, psychological, and developmental research on how people interpret data representations and visualizations as we make related decisions. Thus, there is, presently, a greater focus on the effectiveness of data representations and visualizations, a focus which we aim to highlight and demonstrate through this article.
