Abstract

Research in information and communication technology in education places an increasing emphasis on the use of qualitative analysis (QA). A considerable number of approaches to QA can be adopted, but it is not always clear that researchers recognize either the differences between these approaches or the principles that underlie them. Phenomenography is often identified by researchers as the approach they have used, but little evidence is presented to allow anyone else to assess the objectivity of the results produced. This paper attempts to redress the balance. A small-scale evaluation was designed and conducted according to ‘pure’ phenomenographic principles and guidelines. This study was then critiqued within the wider context of QA in general. The conclusion is that pure phenomenography has some procedural weaknesses, as well as some methodological limitations regarding the scope of the outcomes. The procedural weaknesses can be resolved by taking account of good practice in QA. The methodological issues are more serious and reduce the value of this approach for research in collaborative learning environments.

DOI: 10.1080/09687760600837058

Highlights

  • This paper presents a critical review of phenomenography as a qualitative research process for use within information and communication technology in education (ICTE)

  • In most published studies that have used this approach to qualitative analysis (QA) the emphasis is on the conclusions reached, and little consideration is given to the research process or to factors that might restrict the validity or generalizability of the conclusions

  • The first two sections of this paper provide an introduction to phenomenography and the evaluation study

Summary

Introduction

This paper presents a critical review of phenomenography as a qualitative research process for use within information and communication technology in education (ICTE). On a larger scale, where a larger number of cases have been collected, some tests have been conducted to demonstrate multi-coder reliability. These tests apply to predetermined coding frameworks at the end of this phase, and they do not test whether different phenomenographers would produce the same coding structures when presented with the same set of accounts. Phase 3 should be more difficult to justify: by the end of this phase the outcome space should be independent of the original accounts, so the meaning of any terms must become ‘fixed’ by other parts of the process, yet must only be established from patterns within the data. When this phase of the evaluation study was reviewed during the teachback process, some anomalies arose over the concepts of complexity and inclusiveness within the final model. Neither should the justification be post hoc: that only informs you about the researcher’s values, or why certain ‘authorized’ versions are so readily found in educational studies.

