Abstract

This article shares the problem-solving process and resultant rapid sensemaking methodology created by an interdisciplinary research team faced with qualitative “big data.” Confronted with a data set of over half a million free-text comments, within an existing data set of 320,500 surveys, our team developed a process to structure the naturally occurring variability within the data, to identify and isolate meaningful analytic units, and to group subsets of our data amenable to automated coding using a template-based process. This allowed a significant portion of the data to be rapidly assessed while still preserving the ability to explore the more complex free-text comments with a grounded theory-informed emergent process. In this discussion, we focus on strategies useful to other teams interested in fielding open-ended questions as part of large survey efforts and incorporating those findings as part of an integrated analysis.
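To make the idea of template-based automated coding concrete, the sketch below shows one possible way such a step could work: short, pattern-like templates are matched against free-text comments, comments that match at least one template are routed to automated coding, and the rest are set aside for emergent, human-led analysis. This is an illustrative assumption, not the authors' actual procedure; the code labels, regular-expression templates, and sample comments are all hypothetical.

```python
import re
from typing import Dict, List, Tuple

# Hypothetical templates: each code label maps to regex patterns that,
# when matched, assign that code to a free-text comment.
TEMPLATES: Dict[str, List[str]] = {
    "wait_time": [r"\bwait(ed|ing)?\b", r"\btook too long\b"],
    "staff_praise": [r"\b(friendly|helpful|courteous) staff\b"],
    "access": [r"\bcould not (get|schedule) an appointment\b"],
}


def code_comment(comment: str) -> List[str]:
    """Return the code labels whose templates match the comment."""
    text = comment.lower()
    return [
        label
        for label, patterns in TEMPLATES.items()
        if any(re.search(p, text) for p in patterns)
    ]


def split_by_codability(comments: List[str]) -> Tuple[List[str], List[str]]:
    """Separate comments amenable to automated, template-based coding
    from those needing emergent, analyst-led review."""
    auto, manual = [], []
    for c in comments:
        (auto if code_comment(c) else manual).append(c)
    return auto, manual


if __name__ == "__main__":
    sample = [
        "Waited over an hour past my appointment time.",
        "Very friendly staff at the front desk.",
        "My experience was complicated and hard to summarize.",
    ]
    auto, manual = split_by_codability(sample)
    print("Automated coding:", [(c, code_comment(c)) for c in auto])
    print("Needs emergent analysis:", manual)
```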
