Abstract

Numerous audits have shown that we are inundated with faked studies, poorly done studies, improperly massaged data, sales pitches, etc. Few of the major studies can be replicated, and many journals still refuse to publish replications, especially if they do not support the original study's results. Thus, the way we need to critique studies has shifted from a relatively straightforward evaluation of the study itself to a detective process that includes evaluating the author(s) and the journal in which the study appeared.
This set of criteria applies only to research studies using human or non-human subjects. Studies appropriate for applying the following criteria can come from any area within psychophysiology, including clinical, sports, education, military, etc. It is not intended for theoretical articles, thinly veiled sales pitches, etc. The critique process is active and generally involves more than reading an article and then accepting its conclusions at face value: the person critiquing a research article needs to gain some perspective on the area the article discusses, the authors' qualifications and experience (e.g., are they salespeople promoting a product?), and the literature the authors included in their review as opposed to what has actually been published. It is also likely that the critiquer will check the statistics and other crucial portions of the article using statistical software.
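For example, when an article reports only group means, standard deviations, and sample sizes, the reported t statistic and p-value can be recomputed and compared with what the authors claim. The following is a minimal sketch in Python using SciPy; the numbers are hypothetical placeholders, not values from any particular study.

```python
# Minimal sketch: recomputing a two-sample t-test from the summary
# statistics an article reports (hypothetical numbers, not real data).
from scipy import stats

# Reported summary statistics for the treatment and control groups
# (placeholders -- substitute the values given in the article).
mean_tx, sd_tx, n_tx = 12.4, 3.1, 25
mean_ctl, sd_ctl, n_ctl = 10.2, 3.4, 25

# ttest_ind_from_stats works directly from means, SDs, and ns,
# so no raw data are needed to check the authors' arithmetic.
t_stat, p_value = stats.ttest_ind_from_stats(
    mean_tx, sd_tx, n_tx,
    mean_ctl, sd_ctl, n_ctl,
    equal_var=True,  # match whatever assumption the article states
)

print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
# A large discrepancy between these values and the reported t and p
# is a red flag worth investigating further.
```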

Highlights

  • Introduction & literature review: Do the introduction and literature review differentiate between actual studies and sales pitches? Is the basic idea of the study plausible, or is it so far from anything that makes sense that you would have a difficult time believing the results? If the latter, do the authors present a reasonable case, with proof, for an extraordinary idea? In other words, they need to convince you that they actually did the study and had sufficient safeguards against data manipulation and cheating to have obtained the results they claim.

  • Are the conclusions valid and justified given the actual results of the analysis and the study’s limitations?


Summary

The Journal

Remember that there are so many thousands of journals that anybody can get any “study” published. Many journals are predatory (they charge authors to publish) and have fake peer reviews; these journals will publish anything submitted to them. Is the journal in which the article was published appropriate for the audience, or a very odd choice? Does the journal seem to be peer-reviewed with a reasonable impact factor, or is it a predatory journal that will publish anything? If the journal does not have an impact factor, virtually nobody is citing articles from it. Very legitimate specialty journals such as Applied Psychophysiology and Biofeedback are read by far fewer people and cited by fewer authors than the top general scientific journals. While the New England Journal of Medicine has an impact factor of about 75, Applied Psychophysiology and Biofeedback has a relatively low one.

Introduction & literature review

Methods

Are the measures reliable and valid for the population being studied?
Can you tell what method was used for randomizing subjects?
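If the article does not describe its randomization clearly, it helps to know what an adequate, reproducible description looks like. The sketch below is a hypothetical illustration in Python of one common approach (simple random assignment with a recorded seed); it is not taken from any particular study.

```python
# Minimal sketch of simple random assignment to two groups
# (hypothetical illustration, not from the article).
import random

subject_ids = list(range(1, 41))   # e.g., 40 enrolled subjects

rng = random.Random(2024)          # reporting the seed makes the assignment reproducible
shuffled = subject_ids[:]
rng.shuffle(shuffled)

treatment = sorted(shuffled[:20])  # first half -> treatment
control = sorted(shuffled[20:])    # second half -> control

print("Treatment:", treatment)
print("Control:  ", control)
```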