Abstract

Design based on credible evidence depends on the data that underlie the research. Researchers gather data based on explicit study methods. Without data, there is no evidence, no meaningful result. Are there enough data? Were the data properly gathered? Do the data actually answer the question that was asked? Was it the right question in the first place? Do the data support the hypothesis? Are the data polluted by confounding variables? These are the kinds of questions researchers must ask themselves, especially when engaged in positivist quantitative experimentation. When the topic is an aspect of environmental research, we know that there are an astonishing number of potentially relevant parameters and variables, most of which cannot be accounted for in an experimental study.

In a study of stress using cortisol as a physiological marker, participant responses are compared after viewing projected images of distressing scenes, such as auto accidents, and calming scenes of nature (Ulrich et al., 2006). Is the participant sample of sufficient size to draw conclusions from the study? Do the data collected by the cortisol measure provide an appropriate proxy for stress in the participants? Is the study asking the right question in order to determine hospital patients' responses to a view of nature from the window in a patient room? By itself, it could be difficult to draw conclusions from such a study. Because the circumstances of the study are quite different from the circumstances of a real patient's room, we accept the study as a way of partially answering a complex question, or as providing support for a prior study, such as Roger Ulrich's (1984) pioneering study comparing medical records of surgery patients who had views of a brick wall with those who saw treetops.

These issues of data and conclusions are extremely important for researchers. Doctoral students are taught that quantitative studies must achieve rigor through validity and reliability. They use quantitative measures in carefully planned experimental studies to test clear hypotheses, and they must work with a sample that is sized to offer statistical power. The numerical outcome data of quantitative studies can be examined with statistical methods and illustrated with charts and graphs. Reliability is the degree to which the results accurately portray the total population over time, and the degree to which the results can be reproduced using the same methods. Validity, on the other hand, pertains to whether the results are an accurate measure of what was intended to be measured. More specific variations of validity include construct validity, which examines the overall framework and hypothesis to see if the measures are appropriate for the question; internal validity, which confirms that the conclusion is warranted and free of bias; and external validity, which assesses the ability to generalize from the study to other situations.

In determining rigor in qualitative research, which constitutes a large portion of social science research and environmental research, different methods are used and different kinds of data are produced, yielding results that are not mathematical or statistical. Yvonna Lincoln and Egon Guba (1985) have suggested that, lacking data from numerical and statistical sources, rigor is determined for qualitative studies by:

1. Credibility of the study methods and prolonged engagement with the study matter;
2. Dependability, or the ability to duplicate similar results based on careful documentation of the study methods;
3. Confirmability, based on a clear audit trail of the process; and
4. Transferability, or the ability to use the results in other situations.

Qualitative studies produce ample relevant data, just not the types of mathematical and statistical data produced by quantitative methods.

There are some new and provocative ideas about the changing world of data. I recently finished reading Big Data (2013), the current bestseller by Viktor Mayer-Schönberger, a professor of Internet governance and regulation at the Oxford Internet Institute at Oxford University, and Kenneth Cukier, data editor for The Economist. …
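To make the idea of a sample "sized to offer statistical power" concrete, a minimal a-priori power calculation is sketched below. It assumes a simple two-group comparison, a medium standardized effect size (Cohen's d = 0.5), and the conventional thresholds of alpha = 0.05 and power = 0.80; these values are illustrative assumptions, not figures drawn from the studies discussed above.

```python
# Illustrative a-priori power analysis for a two-group comparison
# (for example, responses under two viewing conditions).
# Effect size, alpha, and power below are conventional assumed values,
# not figures reported in the studies cited in this article.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,          # assumed standardized mean difference (Cohen's d)
    alpha=0.05,               # accepted false-positive rate
    power=0.80,               # desired probability of detecting a true effect
    alternative="two-sided",
)
print(f"Approximate participants needed per group: {n_per_group:.0f}")  # about 64
```

Under these assumptions, a study enrolling far fewer participants per group would leave the sample-size question raised above open.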
