Rethinking Scientific Publishing
- Research Article
3
- 10.15252/embr.201540721
- Jun 10, 2015
- EMBO reports
Web 2.0 and academic debate: Social media challenges traditions in scientific publishing.
- Research Article
1
- 10.15200/winn.142557.78481
- Jan 1, 2015
- The Winnower
Now I am become DOI, destroyer of gatekeeping worlds
- Research Article
5
- 10.1097/01.numa.0000437778.30595.be
- Jan 1, 2014
- Nursing Management
The concept of peer review can be interpreted very differently among nurses and healthcare professionals. Some think of a subject expert reviewing a manuscript for a journal's editorial staff. Others think of regulatory bodies requiring hospitals to have an internal process to ensure that healthcare team members are competent and able to perform within their scope of practice.1 Still others think of peer review as a quality assurance process in which healthcare team members audit each other's documentation to validate care standardization. We provide insight into a nursing peer review process designed to evaluate performance, and the journey to its implementation. A well-defined peer review process and tool, utilized in conjunction with a nurse's annual performance evaluation, is one way to infuse meaningful peer input into a performance appraisal. This system allows nurses to provide insight into one another's strengths and opportunities for growth. A detailed approach was used to create, develop, and sustain a nursing peer review program that's flexible enough to be used by all staff members within a pediatric hospital system. In addition to promoting professional growth among nursing staff, this process also meets the current peer review standards set by the Magnet Recognition Program® and The Joint Commission. The ultimate goal of sharing this information is to help other organizations that are just beginning the peer review process, and those that have struggled in the past with development and implementation, to bring about a sustainable change. This, in turn, will promote nursing cohesiveness and professionalism as we work together to bring healthcare into a new era.
Organizational standardization
Our organization embarked on an initiative to create a standardized peer review process that would be utilized by all nursing departments. The process needed to be integrated organizationally and applicable to clinical nurses within both inpatient and outpatient environments. This journey began with a review of the current literature and an examination of the current peer review practice at other Magnet® facilities. Through this process, it was discovered that these facilities utilized and defined peer review in a variety of ways. We also identified variation in the management of the peer review process within our own organization. Acknowledging the vast array of discrepancies within and outside the organization, a task force of clinical nurses was assembled to redefine the way our organization administered the clinical nurse peer review process. This task force also included two leadership liaisons who served as resources to the clinical nurses during this development and provided insight into the management side of the peer review process. There were four steps in our process: (1) defining a peer, (2) developing a peer review form, (3) transforming the process, and (4) implementing the process.
Who's a peer?
The first step in our work involved evaluating, discussing, and reaching a consensus on the definition of peer. In order to standardize the peer review process, the task force recognized that a true definition was needed to measure success with the new process. Looking at previous peer review practice in the organization, many nurses and department directors had different definitions and ideas of what it meant to be a peer. Some departments included other disciplines in a nurse's specialty for evaluations, whereas others utilized only fellow nursing staff members.
The task force agreed that the purpose of peer review is to foster professional growth and development among staff members by utilizing a process through which measurable outcomes are assessed. After much dialog, the task force adopted the definition of peer utilized by the American Nurses Association, which defines a peer as an individual of the same rank or standing according to the established standards of practice.2
Form development
The next step in redefining the peer review process was to develop a new peer review tool that could be transferable and applicable among the various nursing specialties and departments. Key areas identified by the task force for development of this tool included creating a short, concise form that's easy to understand with limited directions and applicable to all clinical nursing departments. To achieve this goal, the task force worked collaboratively to establish eight domains that would provide nurses with a peer evaluation framework. These domains were established through open dialog, including question-and-answer sessions, review of current job descriptions, and evaluation of other tools utilized by various Magnet facilities. Through this work, a common set of core expectations was identified and utilized in the development of each domain. These domains encompass the essential clinical nurse job functions, behavioral competencies, and basic roles and responsibilities throughout the organization. The domains can be found in Table 1.
Table 1: The eight domains of peer evaluation
To further guide and support high-quality peer feedback, three to five specific, measurable objectives were created and listed under each domain for nurses to measure performance. These objectives were developed to guide peers in evaluating each nurse by providing focused, pertinent feedback. The task force wanted to eliminate vague, nonspecific feedback that didn't facilitate the identification of future growth opportunities. The objectives created by the task force assist peers in identifying evidence from the nurse's daily work, communication, time management, and interdisciplinary interactions. Each domain also included a comment field, allowing nurses the autonomy to provide open-ended feedback and elaborate on outstanding work or opportunities for the employee's growth and improvement. The comment section was extremely important in the solicitation of meaningful feedback because it allowed nurses to provide examples to reinforce the ratings selected for the objectives. The directions on the new tool clearly state that comments are mandatory for certain ratings to allow for elaboration and examples. The task force believed a Likert rating scale was essential to standardize the peer review process. The previous process utilized a vague, numerical score that hadn't historically provided nurses with adequate descriptions of their work ethic and performance. Lower scores were considered negative responses, whereas higher scores equated to positive responses. After review of the current literature, the task force employed a 4-point Likert scale comprising "not met," "approaching," "meets expectations," and "exceeds expectations." In addition to the scaled questions, an open-ended question, "Do you feel comfortable working with this nurse?", was added to the form.
Utilizing a mixed-methods format that combines a Likert scale with open-ended questions allowed the evaluating nurse a better opportunity to provide real-life, contextual examples related to the evaluated nurse's care.3
Process transformation
After the peer evaluation tool was created, the task force focused its attention on creating a framework to aid departments in implementing the new peer review process. Throughout our hospital, there are units of vastly different sizes. Some nursing departments have four to six clinical nurses, whereas other departments have as many as 150 to 200 clinical nurses on staff. The variability in staff sizes meant that the task force had to be creative in determining how many peer reviewers should evaluate each nurse annually and how these reviewers should be selected. With the new process, each nurse receives feedback from two to four RNs. Limiting the number of peer evaluations eliminated the previous dissatisfaction and barrier of evaluators being asked to fill out 20 to 30 peer evaluations a month in exceedingly large nursing departments. The task force also allowed clinical nurses to select one to two peers of their choice to provide feedback; the management team selected the remaining peers.
Implementation
After the task force completed the new peer review tool and process recommendations, the nursing department directors and the CNO gave approval for implementation. The standardized peer review form, along with the revised process, was implemented using various educational modalities. The first presentation was provided to the inpatient and outpatient nursing directors to inform them about the new form and the revised process. The directors were provided with handouts outlining the changes and educational fact sheets for staff members to use as a reference. The task force increased its availability to ensure educational consistency for all clinical nurses by attending and presenting at unit-based councils, charge nurse meetings, and department-wide staff development programs. The feedback received from the different educational presentations was extremely positive. It highlighted the tool's ease of use, applicability to all nursing departments, appropriate form length, and measurable objectives that allowed nurses to comment on focused job roles and responsibilities. Other feedback noted the improved functionality of the new rating scale and the process change that limited the number of requests for peer evaluations. Some directors expressed resistance to changing their current peer review practice. Certain directors thought feedback from four nurses wasn't enough if the department had a high number of nursing staff members. After an open discussion between the task force chairperson and the directors of large departments, it was agreed that meaningful feedback from four people would be adequate. Other directors mentioned that they liked the idea of adding specific clinical skills for peer evaluation. However, we couldn't add specific skills because they wouldn't universally apply to the various nursing department specialties. The task force collected all comments and feedback, and created a frequently asked questions document to address these concerns and explain the thought process behind the decisions made about the tool and recommendations. Directors were then provided with answers to and rationales for their specific questions and concerns, creating a consistent message and clarity for all departments.
All education and implementation occurred over the course of 4 months. During this time, the task force worked with the web development team to create an electronic version of the peer evaluation document for online submission. The electronic document was widely popular because it eliminated the use of paper and facilitated tool access for all clinical nurses. Staff members were able to complete the tool online and submit the evaluation through the organization's intranet directly to the person who requested the feedback. The task force also concluded that in order for this new tool and process to remain functional and meaningful, continued evaluation of its use would be essential.
Reaching the top
As we diligently work to move the nursing profession forward, it's important to remember that peer review can positively impact not only an individual's nursing practice, but also an entire hospital system. Nurses are at the forefront of healthcare transformation and are integral to sustainable practice improvement. By empowering clinical nurses to lead this initiative, we've been able to successfully introduce, develop, and support a valuable peer review process across our organization. Utilizing a comprehensive peer review tool can help organizations improve patient outcomes and patient satisfaction.
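The article describes the tool's structure (eight domains, three to five measurable objectives per domain, a 4-point Likert scale, mandatory comments for certain ratings, and a per-domain comment field) without publishing the form itself. As a minimal sketch of how such a form could be modeled in software, the following Python code encodes those rules; all identifiers, the choice of which ratings require comments, and the validation logic are illustrative assumptions, not the hospital's actual tool.

```python
from dataclasses import dataclass, field
from enum import Enum

# Hypothetical model of the peer review form described above: eight domains,
# each with 3-5 measurable objectives rated on a 4-point Likert scale, plus
# per-domain open comments. Names and rules are assumptions for illustration.

class Rating(Enum):
    NOT_MET = 1
    APPROACHING = 2
    MEETS_EXPECTATIONS = 3
    EXCEEDS_EXPECTATIONS = 4

# The article says comments are mandatory for certain ratings; which ones is
# not specified, so the extremes of the scale are assumed here.
RATINGS_REQUIRING_COMMENT = {Rating.NOT_MET, Rating.EXCEEDS_EXPECTATIONS}

@dataclass
class ObjectiveScore:
    objective: str          # one of the 3-5 measurable objectives per domain
    rating: Rating
    comment: str = ""

    def validate(self) -> None:
        # Enforce the mandatory-comment rule stated in the tool's directions.
        if self.rating in RATINGS_REQUIRING_COMMENT and not self.comment.strip():
            raise ValueError(
                f"A comment is mandatory for rating {self.rating.name} "
                f"on objective {self.objective!r}"
            )

@dataclass
class DomainEvaluation:
    domain: str                                   # e.g. "Communication" (invented)
    scores: list[ObjectiveScore] = field(default_factory=list)
    open_comment: str = ""                        # open-ended per-domain feedback

@dataclass
class PeerReviewForm:
    evaluated_nurse: str
    evaluator: str
    domains: list[DomainEvaluation] = field(default_factory=list)
    comfortable_working_with: str = ""            # the added open-ended question

    def validate(self) -> None:
        for domain in self.domains:
            for score in domain.scores:
                score.validate()
```

A completed form would be checked with form.validate() before submission through the intranet, mirroring the mandatory-comment rule the article attributes to the tool's directions.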
- Dataset
- 10.15200/winn.145789.92186
- Mar 13, 2016
- The Winnower
Ten ‘Personal’ Reasons why I am skeptical about Open Access (OA): Thoughts of an Individual Researcher
Although many hurdles have been overcome by the numerous ‘models’ of Open Access (OA), I still have a lot of concerns about it when it comes to individual research in this collaborative world. Everyone would acknowledge that there are good and bad OA publishers: those who have done a fairly great job and those who have largely failed. And this is true for subscription journals as well. I enumerate my personal opinions on OA, one by one:
1. Pay-to-publish model: This would be my biggest concern as a PI in the future, especially if my institution had no OA journal subsidy or support. As a PI in a resource-limited laboratory, in any country, I would not be able to pay hefty OA fees of $500 to $2,000-3,000 or more. Why, in order to make research accessible to ‘all’, would I pay a commercial business such a huge amount? If I wish to contribute a single-authored review of importance, why (and more importantly, how?) would I cough up that much money? Having been surveyed a few times by new OA models started by research social-networking websites, I would shy away for the sheer reasons of expense, limited outreach, and the limited uptake of these websites by research-area leaders. Authors pay ever more to sell their own work, give away their rights, and, not to mention, face a far more complicated submission-to-publication system than ever before.
2. OA vs predation: Without even going into Jeffrey Beall’s perpetually growing list of predatory journals, it is evident that the borders between subscription, predatory and OA journals are often missing, as I have experienced as an author. OA journals advertise as fastidiously as any predatory counterpart. A great study can sit next to a bogus one in an OA journal’s index. As long as it is OA, quality goes for a toss, and we approve of it?
3. OA is not blind: Not all is fair here either: as with any subscription or orthodox journal, single-blind and open reviews, and pre-publication or post-publication reviews, can leave a bad taste for an author. OA has all the signatures of peer review failure found in any other publishing model. So why the hype at all?
4. What OA charges and what it delivers: Once an article is submitted and reviewed and the OA fee is paid, why do authors still have to format references (the most pointless of academic efforts in terms of time), check the galley proof, and sign and return this document and that? For that ‘fat money’, should these not be taken care of? What are the OA publishers doing with that money: paying their staff (are there enough of them? If not, why not make the submission-to-publication process faster)?
5. Reviewers work for free, though: Whether I review for subscription or OA journals, I ‘have to provide free service’ within a stipulated time. There are no incentives for reviewing! This needs a sea change if scientific ‘policing’, serious and honest peer review, and the weeding out of junky research papers are to happen. Why, as a reviewer, should I care whether it is an OA journal or not? And the review deadlines set by the journal office are strangulating, annoying and unaccommodating most of the time. In addition, with no printing charges borne by the journal, no distribution expenses, authors paying and reviewers working for free, where does all the money go?
6. OA models with “NO” rejection: From personal experience, I must say that a bunch of journals lack a “reject” button altogether in their peer review process for submitted articles! This is alarming, and it is the talk of the day for most academicians I meet. Journals named “P” to “F” (on conditions of anonymity, and for people with learned guessing skills!) all suffer from this syndrome. Some articles are beyond repair from submission, yet they end up “right there”, after “a horrendously iterative peer review process” in which the “Editor” is a mere onlooker of the fiasco/farcical review. Is OA all about obtaining a digital object identifier (DOI) for $X, for any quality of research?
7. Has OA been diluted, and has it contributed to predation?: From Web of Science to DORA to SCI to Google Scholar indexing, everything seems to be OA, and it is difficult to perceive why everything OA is indexed here and there. One is lost among so many true and fake OA journals. Where an article is archived, and with whom, also dilutes the intended target of OA. In the name of OA, how many emails plague our mailboxes every day? Is predation surviving by trading on OA’s name?
8. OA must make publishing review comments and the review process compulsorily “open”: Unless this is done, the taxpayer does not get to see the evolving process of science, publication and peer review, but only ponders the final product. This would be another way to make the peer review process serious and responsible. Were this in place, retraction-tracking websites and the “X”-facts website would not be necessary to enlighten the scientific community on the lack of reproducibility.
9. OA does not ensure robustness or reproducibility in science: As in any other model, flaws and pitfalls in the investigation/study/meta-analysis/experiments, and outright fraud, can exist. But instead of being concealed in printed hard copies of journals in some obscure library, or hiding behind an expensive paywall, they would be glaring in public sight. That alone does not stop the leakage of poor science and bad peer review.
10. OA journals lack the oomph of traditional, elite status: They are the new kids on the block; they can let you publish a lot, and quickly, but when it comes to the grant-, funding-, tenure- or career-changing publications in so-called “elite” journals, they are no match. Simply put. Academic hiring and HR systems still run on “impact” and “indices”, those who judge and form committees come from the-then systems, and nothing will change overnight, even if OA was intended to address this and not just profiteering. That something is “tweeted faster” (often by the journal houses themselves!) does not mean it is “catchy, important, crowd-pulling, or sustainable”.
I must wrap up by saying that we do NOT live in a perfect world and everything has pros and cons; but then, are we giving up a lot in exchange for just ‘free access’ to articles?
- Research Article
- 10.15200/winn.140984.44268
- Jan 1, 2014
- The Winnower
AAAS misses opportunity to advance open access
- Front Matter
1
- 10.1016/j.xjidi.2021.100056
- Sep 1, 2021
- JID Innovations
JID Innovations and Peer Review
- Research Article
- 10.15200/winn.140865.54468
- Jan 1, 2014
- The Winnower
Open letter to the Society for Neuroscience
- Research Article
14
- 10.1016/s0140-6736(98)90307-5
- Mar 1, 1998
- The Lancet
Peer review on the Internet: A better class of conversation
- Research Article
25
- 10.1016/j.neuron.2014.03.032
- Apr 1, 2014
- Neuron
The vacuum shouts back: postpublication peer review on social media.
- Research Article
9
- 10.1002/1873-3468.12792
- Sep 1, 2017
- FEBS Letters
[Image: Evening trails by Brian Smithson, shared from https://www.flickr.com/photos/smithser/10709730483/ under a CC BY 2.0 license]
Peer Review involves several divergent groups. Authors, who wish to see the results of scrupulous work published, in anticipation of career promotion or secure funding. Editors, who are under pressure to identify sound and novel research. Reviewers, who try to fit a thoughtful and time-consuming process into busy schedules, often receiving no credit for it. And Publishers, who compete in a transforming landscape of fast and abundant science publishing. But has Peer Review always been as challenging as it is nowadays? Some might argue so, though phrases such as 'Publish or Perish' or the 'Impact Factor rat race' are relatively new additions to a Researcher's vocabulary. Moreover, the frequently observed focus of science policies on short-term goals, and the publication speed enabled or even imposed by our information age, are all new variables. Worryingly, advances in online handling systems have actually enabled author and reviewer misconduct, as in the recent case of 101 papers that went through fake peer review according to the Ministry of Science and Technology of China [1]. This landscape is by no means comparable to the scientific landscape of the 1960s, when regular Peer Review was first introduced by journals [2, 3]. Although several changes have been introduced to the Peer Review system to address current challenges, the Peer Review process has until recently been preserved in its primary version: metadata on Peer Review efficiency have been scarcely available; implementation of emerging technologies to detect issues such as inappropriate statistical analysis, figure manipulation and data fabrication has been limited; Peer Reviewers lack systematic training on conducting constructive and ethical peer review; and incentives for performing high-quality peer review are often scarce. However, this picture is now changing, and the first steps towards Peer Review modernization have already been made! The realization that systematic research on Peer Review needs to be conducted is growing. In July 2017, Carole J. Lee and David Moher, specialists in publication science at the University of Washington and the Ottawa Hospital Research Institute, respectively, published in Science a call for promoting scientific integrity via research on Peer Review [4]. Lee and Moher call on Publishers to enable the opening of what they name 'the black box of peer review'. They highlight the need for large, systematic experimental studies to assess the various peer review practices and improve quality and transparency. Going a step further, they present incentives for Publishers to adopt new software systems for assisting Peer Review, evaluating Peer Review efficiency and producing metadata on, for example, reviewer scores and comments linked to manuscript content, or reviewer expertise and affiliations. The suggested incentives are hard to ignore: considering the persisting plea to de-emphasize the impact factor, as well as the need to exclude predatory journals, the introduction of indicators for Peer Review transparency will soon gain momentum. The wide application of such indicators would provide strong incentives for a Publishers' initiative to support research that improves the quality of the Peer Review process.
Such a collective initiative could even be supported by updating the Transparency and Openness Promotion (TOP) Guidelines to facilitate journal meta-research. One might argue that this is yet another idealistic call. However, reality might prove otherwise. The upcoming International Congress on Peer Review and Scientific Publication, launched by JAMA and BMJ back in 1986 to encourage systematic research on Peer Review [5, 6], features a rich programme on metadata analysis and will host several journal editors and publisher representatives. This is one indication that the community is responsive to the general need for metadata analysis of Peer Review. In addition, the implementation of Open Peer Review by several publishers increases metadata availability. Through Open Peer Review, the identities of both Authors and Reviewers are made public upon manuscript acceptance, often along with the Peer Review reports. EMBO Press journals and Nature Communications publish reviewer reports along with manuscripts on an optional basis [7, 8]. Similarly, BMJ [9], F1000Research [10] and about 70 journals from BioMed Central [11] all use variable models of Open Peer Review but share a common feature: they publish reviewer reports and names alongside the published article. In addition, Publishers are engaging in lively discussions on the future of Peer Review. For example, in 2016, BioMed Central and Digital Science organized a one-day conference, the SpotOn London event, at which participants actively discussed 'what peer review might look like in 2030'. This event led to the production of an informative report [12], and will hopefully trigger more discussions when repeated later this year. In parallel, Chris Graf, Research Integrity and Publishing Ethics Director at Wiley, has recently indicated openness to change: 'our plan at Wiley […] includes continuous work […] to collect and record data on trends [and] Editorial initiatives and experiments designed to improve transparency and openness, and thus reproducibility. We are pleased to be an organizational signatory to the TOP guidelines. We publish several journals – the European Journal of Neuroscience in particular – that are leading proponents for 'two-stage' peer review adopted in the Registered Reports model. We will continue to encourage and support journals that experiment with this model, and others like it' [13]. So, it seems that publishers are ready for a change. Most importantly, they are not alone in this effort. Policy makers also seem to recognize the need for systematic research on Peer Review. For example, the EU has initiated the COST Action 'PEERE: New frontiers of peer review'. PEERE is a multidisciplinary international project running from 2014 to 2018 that involves quantitative and qualitative research on how peer review is conducted, seeking to make evidence-based recommendations for improving the current system. Thorough Peer Review seems to obstruct fast publication, and vice versa. However, when it comes to the time limitations faced by both Reviewers and Journal Editors, several emerging technologies can lend a helping hand. Some Publishers have already integrated technologies to safeguard the Peer Review process: AAAS has recently launched a new web-based service, Peer Review Evaluation (PRE), that ensures adherence to journal Peer Review policy, excluding fake or biased reports [14].
Probably most people engaged in publishing have heard of Crossref's Similarity Check, a web-based service widely used to detect plagiarism, among other purposes. Similar technologies already exist to screen manuscripts for statistical credibility, image integrity and even data fabrication. Penelope was first conceived by its creator James Harwood as a web-based tool that would help authors adhere to guidelines when preparing a manuscript [15]. Through a collaboration with the EQUATOR Network, the startup company Penelope Research has developed a product that might transform the future of Peer Review. Penelope is presently available to Publishers for checking manuscripts during submission. The checks include, among other things, appropriate formatting, figure and table legends, references, and ethical statements. Importantly, Penelope checks for proper statistical analysis and can identify cases in which the raw data of a manuscript have previously been published elsewhere. StatReviewer has recently emerged as another promising tool for reviewing manuscripts for statistical soundness. It uses machine learning to check adherence to widely accepted statistical and methodological guidelines, such as CONSORT 2010. StatReviewer is currently being evaluated in an initial pilot study involving four clinical BioMed Central journals [16]. Do these developments mean that we are heading towards a fully automated Peer Review system? As Chadwick C. DeVoss, Founder and President of StatReviewer, argues in the 2016 SpotOn London report, 'this is where a slippery slope gets extra slippery' [12]. A fully automated publishing line would only undermine exchange and communication between researchers. On the contrary, the aim here is solely to use emerging technologies as helping tools that leave authors and reviewers valuable time for rewarding interactions. Researchers are thoroughly trained in conceiving a project and experimenting in a wet or dry lab. Most postgraduate students also receive regular training in writing manuscripts and preparing scientific presentations. But what about conducting Peer Review? Considering the major impact of reviewing quality on scholarly publishing, minimal concerted effort has so far been invested in training good Reviewers. With advice mainly restricted to journal guidelines and some editorials, budding Peer Reviewers seem to need more support when expected to produce a good referee report. More than that, there is still an open debate on whether young postdocs should be given the opportunity to get hands-on experience with Peer Review. The lack of opportunities for young scientists to train in supervised Peer Review is rather unfortunate, given that most are open to both expanding their expertise and investing time in a scientific exchange as rewarding as Peer Review. Yet both individual journal initiatives and a new platform dedicated to Peer Review have set the goal of changing this. For example, the Genetics Society of America has just initiated a pilot programme for training reviewers in the 'principles, purposes, and best practices of Peer Review'. Other Publishers opt to provide online packages with Peer Review training material, including webinars, as is the case with the Wiley Peer Review training.
Going one step further, Publons, a platform focusing on Peer Review and aimed at Authors, Reviewers, Editors and Publishers alike, has developed the Publons Academy, a practical Peer Review training course for early-career researchers. The Publons Academy not only provides extensive material on evaluating novelty, methods, results, conclusions and ethical integrity in a manuscript, but also enables supervised post-publication Peer Review. Experienced Reviewers are encouraged to sign up as supervisors and are acknowledged for this role. Acknowledgement is in fact the groundbreaking component that has already earned Publons a constantly growing community of Reviewers, Editors and Journals. Reviewers get credit for every report they prepare (even when the manuscript is not accepted for publication). This fact alone is crucial for transforming Peer Review: as indicated in a survey conducted by Wiley, four out of five researchers agree that reviewing is insufficiently acknowledged by journals and research institutions [17]. With the ambition to transform this landscape, Publons provides credit, in the form of a verified peer review record, that can serve as a credential during internal institute evaluations or in job and grant applications. The provision of such incentives may help to build a universal reviewer database. Such a database would help journals expand their pool of Reviewers, which currently seems biased towards senior, male, American and European researchers and is by no means representative of current researcher and author demographics. Several developments occurring in parallel seem to steer towards a brand-new future for the Peer Review process. Whether this is an optimistic or a realistic view, we are all to find out soon. But what are the actual views of researchers on burning questions about Peer Review transparency, quality and recognition? In times of change, it is essential for the main players to become vocal and put forward criticism and suggestions for solutions. How do Reviewers view models of open Peer Review meant to ensure transparency? Do Authors and Reviewers welcome the use of emerging technologies as part of the peer review process for the sake of faster publication? What do they consider the non-replaceable values of non-automated Peer Review? Do Authors profit from Peer Review more than is generally admitted and acknowledged? How do early-career investigators feel about their inclusion in this system? Are senior researchers, who have experienced the evolution of the Peer Review system over the past 20 years, prepared to supervise younger colleagues? The debate is still on, and the more views are available, the more the system can improve.
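As a purely illustrative sketch of what the automated screening described above involves, the Python snippet below implements two toy checks: presence of required manuscript sections and a sanity scan of reported p-values. The section list, regular expression and function names are assumptions made here for illustration; Penelope and StatReviewer rely on far richer, guideline-driven models (such as CONSORT 2010) whose internals are not described in this editorial.

```python
import re

# Toy manuscript screening in the spirit of Penelope/StatReviewer.
# The rules below are illustrative assumptions, not the vendors' algorithms.

REQUIRED_SECTIONS = ["abstract", "methods", "references", "ethical statement"]

def check_required_sections(manuscript: dict[str, str]) -> list[str]:
    """Flag any required section that is missing or empty."""
    return [s for s in REQUIRED_SECTIONS if not manuscript.get(s, "").strip()]

def check_p_values(text: str) -> list[str]:
    """Flag reported p-values outside the valid range [0, 1]."""
    issues = []
    for match in re.finditer(r"p\s*[=<>]\s*(\d+(?:\.\d+)?)", text, re.IGNORECASE):
        if not 0.0 <= float(match.group(1)) <= 1.0:
            issues.append(f"Implausible p-value: {match.group(0)}")
    return issues

if __name__ == "__main__":
    manuscript = {
        "abstract": "We study X.",
        "methods": "Groups differed (p = 3.05).",  # likely a typo for p = 0.05
        "references": "[1] ...",
        "ethical statement": "",
    }
    print("Missing sections:", check_required_sections(manuscript))
    print("Statistics flags:", check_p_values(manuscript["methods"]))
```

The point of such tooling, as the editorial argues, is triage: routine flags like these free reviewers to spend their limited time on substantive scientific assessment.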
- Research Article
- 10.7828/ajob.v5i1.534
- Jan 30, 2014
- Asian Journal of Biodiversity
Editorial Policy
- Front Matter
4
- 10.1016/j.bja.2021.02.024
- Mar 29, 2021
- British Journal of Anaesthesia
Preprints in perioperative medicine: immediacy for the greater good
- Research Article
1
- 10.5204/mcj.38
- Jun 24, 2008
- M/C Journal
This article investigates the discourses of academic legitimacy that surround the production, consumption, and accreditation of online scholarship. Using the web-based media and cultural studies journal M/C Journal (http://journal.media-culture.org.au) as a case study, it examines how online scholarly journals often position themselves as occupying a space between the academic and the popular, and as having a functional advantage over print-based media in promoting a spirit of public intellectualism. The current research agenda of both government and academe prioritises academic research that is efficient, self-promoting, and relevant to the public. Yet, although the cost-effectiveness and public-intellectual focus of online scholarship speak to these research priorities, online journals such as M/C Journal have occupied, and continue to occupy, an unstable position in relation to the perceived academic legitimacy of their content. Although some online scholarly journals have achieved a limited form of recognition within a system of accreditation that still privileges print-based scholarship, I argue that this nevertheless points to the fact that traditional textual notions of legitimate academic work continue to pervade the research agenda of an academe that increasingly promotes flexible delivery of teaching and online research initiatives.
- Research Article
5
- 10.1097/mlr.0000000000001116
- Jun 1, 2019
- Medical Care
Anatomy of Constructive Peer Review.
- Research Article
- 10.7557/5.4281
- Nov 20, 2017
- Septentrio Conference Series
Most journals in Croatia have adopted the open access (OA) model, and their content is freely accessible and available for reuse without restrictions other than that attribution be given to the author(s) and journal. There are 444 Croatian scholarly, professional, popular and trade OA journals available in Hrcak, the national repository of OA journals, and 217 of them use a peer review process as the primary quality assurance system. The goal of our study was to investigate the peer review process used by Croatian OA journals and the editors' attitudes towards open peer review. An online survey with 39 questions was sent to the Hrcak journal editors, grouped into: general journal information; numbers of submitted/rejected/accepted manuscripts and timeliness of publishing; peer review process characteristics; instructions for peer reviewers; and open peer review. Responses were obtained from 152 editors (141 complete and 11 partial). All journals except one employ a peer review process. The data were collected from February to July 2017. The majority of journals come from the humanities (n=50, 33%) and social sciences (n=37, 24%). Less represented are journals from biomedicine (n=22, 14%), technical sciences (n=16, 11%), natural sciences (n=12, 8%) and biotechnical sciences (n=10, 7%), plus interdisciplinary journals (n=3, 2%). The average journal receives 54 manuscript submissions per year, but there are big differences among journals: the maximum is 550 manuscripts and the minimum just five. On average, a journal publishes 23 papers after acceptance by reviewers and editors. On average, it takes 16 days to send a manuscript to the reviewers, 49 days for all the reviewers to return detailed reports on the manuscript, 14 days to reach the editors' decision, and another 60 days for the paper to be published. An external peer review process, in which reviewers are neither members of the editorial board nor employees of the journal's parent institution, is used by 86 journals (60%). Other journals use an external peer review process in which reviewers are not members of the editorial board but may be employees of the journal's parent institution (n=40, 28%), or editorial peer review. The remaining 10% of journals combine the three types of peer review. Only 20% of journals use reviewers exclusively from abroad, 44% combine international and national reviewers, and 36% use only reviewers from Croatia. The majority of journals obtain two reviews for each manuscript, and the process is double-blind. Detailed instructions for peer reviewers are provided by fewer than half of the journals (n=57, 40%), and ethical issues such as plagiarism, conflict of interest and confidentiality are neglected. Usually, a reviewer is not informed of the final decision on the manuscript, and reviews are not shared among reviewers. Somewhat surprising was the opinion of the majority of editors that reviewers must get credit for their efforts (n=121, 85%). On the other hand, editors are not familiar with the concept of open peer review, which could easily be used for that purpose. Some editors believe that open peer review involves identity disclosure: of both authors and reviewers (n=35, 25%), of reviewers (n=27, 19%), or of authors (n=14, 10%). For many editors, open peer review implies publicly available reviews (n=65, 36%) and authors' responses (n=46, 33%).
Open peer review is an unknown concept for some editors (n=32, 23%). In spite of all criticism, traditional peer review remains predominant in Croatian OA journals: our findings show that it is still the preferred review mechanism for the majority of journals in the study.
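A quick back-of-the-envelope computation makes explicit the overall turnaround implied by the survey's stage durations; the figures below are copied from the averages reported above, and the stage labels are paraphrases.

```python
# Sum the average editorial stage durations reported in the Croatian survey
# to get the implied average submission-to-publication time.
stages_days = {
    "send manuscript to reviewers": 16,
    "receive all reviewer reports": 49,
    "reach the editors' decision": 14,
    "publish the accepted paper": 60,
}
total_days = sum(stages_days.values())
print(f"Implied average turnaround: {total_days} days (~{total_days / 30:.1f} months)")
# Prints: Implied average turnaround: 139 days (~4.6 months)
```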