Simulation Operations Needs Assessment Tool: Development and Validation for Simulation Operations Personnel.

Abstract

Simulation operations specialists (SOSs) play a critical yet ill-defined role in healthcare simulation. Despite their critical involvement in operations, orientation, and support of simulation-based education, no validated needs assessment tool existed to guide professional development. This study aimed to design and validate the Simulation Operations Needs Assessment Tool (SONAT) to identify knowledge and skills gaps, guide onboarding, and inform SOS professional development. The SONAT was developed using best practice, community, and peer-reviewed documents, including the Certified Healthcare Simulation Operations Specialist Blueprint, SimGHOSTS Capability Framework, and the Healthcare Simulation Standards of Best Practice. Messick's Unified Validity Theory guided the validation process using Lawshe's method to assess survey items. Following expert feedback and item revision, the tool was disseminated for psychometric analysis. Internal consistency was assessed using Cronbach's alpha for each section of the tool. Content validity indices (CVIs) for the final SONAT exceeded the .70 acceptability threshold (CVI = .738). A sample of 256 SOSs completed the SONAT. Internal consistency was strong across 4 domains (α = .730-.912). Respondents reported high proficiency in manikin-based and task trainer simulation but less in emerging modalities (eg, Augmented Reality/Virtual Reality). Awareness of professional standards was strong, although gaps were noted in knowledge of ethics and leadership opportunities, particularly among noncertified individuals. SONAT collects valid and reliable data when assessing SOS developmental needs across diverse roles and backgrounds. It has the potential to standardize orientation, support tailored professional development, and strengthen skillsets of the evolving SOS profession.
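The content validity indices reported above (item-level expert ratings aggregated into a scale CVI of .738) follow a standard computation. A minimal sketch of the common I-CVI / S-CVI/Ave convention, assuming experts rate relevance on a 4-point scale where 3 or 4 counts as "relevant"; the names and data are hypothetical and this is not the SONAT authors' actual code:

```python
# Illustrative I-CVI / S-CVI/Ave computation (hypothetical data; a common
# convention, not necessarily the exact SONAT procedure).

def item_cvi(ratings):
    """I-CVI: proportion of experts rating the item 3 or 4 on a 4-point
    relevance scale."""
    return sum(1 for r in ratings if r >= 3) / len(ratings)

def scale_cvi_ave(item_ratings):
    """S-CVI/Ave: mean of the item-level CVIs."""
    cvis = [item_cvi(r) for r in item_ratings]
    return sum(cvis) / len(cvis)

# Five hypothetical experts rating three hypothetical items
panel = [
    [4, 4, 3, 4, 2],  # I-CVI = 4/5 = 0.80
    [3, 3, 4, 4, 4],  # I-CVI = 5/5 = 1.00
    [2, 3, 4, 2, 3],  # I-CVI = 3/5 = 0.60
]
print(round(scale_cvi_ave(panel), 2))  # 0.8
```

An item-level CVI below the chosen threshold (here, the paper's .70) would flag that item for revision before the scale-level index is recomputed.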

Similar Papers
  • Research Article
  • Cited by 180
  • 10.1016/j.ecns.2021.08.006
Onward and Upward: Introducing the Healthcare Simulation Standards of Best Practice™
  • Sep 1, 2021
  • Clinical Simulation in Nursing
  • Penni I Watts + 10 more

  • Research Article
  • Cited by 1
  • 10.18502/avr.v28i4.1459
The Persian version of infant-toddler meaningful auditory integration scale
  • Sep 21, 2019
  • Auditory and Vestibular Research
  • Saeideh Mehrkian + 3 more

Background and Aim: The current study aimed to investigate the validity and reliability of the Persian version of the Infant-Toddler Meaningful Auditory Integration Scale (IT-MAIS) questionnaire. Because cochlear implantation is done today at younger ages, a suitable questionnaire is necessary to evaluate auditory skills and follow up rehabilitation progress. Methods: IT-MAIS was translated according to the International Quality of Life Assessment (IQOLA) translation protocol. The content validity was assessed using the Lawshe method with the participation of 10 expert professionals. The questionnaire was completed by 34 parents of cochlear-implanted children before initial programming of the device, two weeks after the rehabilitation program, and finally three months later. The intraclass correlation coefficient was calculated for test-retest reliability for each IT-MAIS subscale. The internal consistency reliability was analyzed using the Cronbach α coefficient. Results: The content validity ratio for all items was above 0.79, and the content validity index was obtained to be higher than 0.96. The Cronbach α for the entire questionnaire was 0.74, and for its different sections ranged from 0.63 to 0.67. A significant difference was observed between the total score of the questionnaire before and after the rehabilitation program and its sub-items (p < 0.001). Conclusion: The Persian version of the IT-MAIS questionnaire is a valid instrument in terms of translation quality as well as reliability and validity for assessing cochlear implant user children who are younger than three years.
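The internal-consistency statistic used here and in several of the studies on this page, Cronbach's α, is straightforward to compute from raw item scores. A minimal sketch, assuming rows are respondents and columns are questionnaire items; the data are hypothetical, not the IT-MAIS dataset:

```python
# Cronbach's alpha from raw scores (hypothetical data; rows = respondents,
# columns = items). Population variance is used consistently in numerator
# and denominator.

def cronbach_alpha(scores):
    n, k = len(scores), len(scores[0])

    def pvar(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [pvar([row[j] for row in scores]) for j in range(k)]
    total_var = pvar([sum(row) for row in scores])
    # alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

scores = [
    [2, 4, 3],
    [4, 5, 5],
    [3, 4, 4],
    [5, 5, 5],
]
print(round(cronbach_alpha(scores), 3))  # 0.923
```

Values in the 0.63–0.74 range reported above indicate moderate internal consistency; the formula makes clear why alpha rises when items covary strongly (the total-score variance grows relative to the sum of item variances).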

  • Research Article
  • 10.1186/s41077-024-00305-3
Testing reliability and validity of the Korean version of Debriefing Assessment for Simulation in Healthcare (K-DASH)
  • Aug 8, 2024
  • Advances in Simulation
  • Seon-Yoon Chung + 5 more

Background: Use of the Debriefing Assessment for Simulation in Healthcare (DASH©) would be beneficial for novice debriefers with little or no formal training in debriefing. However, a DASH translated into Korean and tested for psychometrics is not yet available. Thus, this study aimed to develop a Korean version of the DASH student version (SV) and test its reliability and validity among baccalaureate nursing students in Korea. Methods: The participants were 99 baccalaureate nursing students. Content validity using the content validity index (CVI), construct validity using exploratory factor analysis (EFA) and confirmatory factor analysis (CFA), and internal consistency using Cronbach's alpha coefficient were assessed. Results: Both item-CVIs and the scale-CVI were acceptable. EFA supported the unidimensional latent structure of the Korean DASH-SV, and CFA indicated that 6 items converged within the extracted factor, contributing significantly to the factor (p ≤ .05). Items were internally consistent (Cronbach's α = 0.82). Conclusion: The Korean version of the DASH-SV is arguably a valid and reliable measure of instructor behaviors that could improve faculty debriefing and student learning in the long term.

  • Research Article
  • 10.58742/bmj.v3i3.192
Defining the Scientist: A Consensus-Based Approach
  • Jun 25, 2025
  • Barw Medical Journal
  • João Gama + 37 more

Introduction: The term "scientist" lacks a universally accepted definition, reflecting the evolving, interdisciplinary nature of scientific work and posing challenges for recognition, communication, and policy. This study aims to develop consensus-based definitions of the term "scientist" by engaging experienced scholars across diverse fields. Methods: This study involved 156 scholars, each with at least 1,000 citations, recruited via convenience sampling. Fourteen scientist definitions, derived from literature and expert input, were assessed using a nine-point Likert scale via a structured Google Forms survey. The sample size was calculated using G*Power (effect size = 0.5, power = 0.95), requiring at least 80 participants. The Content Validity Index (CVI) was used for analysis. Definitions scoring ≥0.78 were accepted and included in the final analysis, those scoring 0.70–0.78 were revised and re-evaluated, and those below 0.70 were excluded. Participation was voluntary and anonymous, ensuring ethical compliance and confidentiality. Results: Of the 14 proposed definitions, six (42.9%) were excluded (CVI < 0.70), seven (50.0%) were accepted (CVI > 0.78), and one (7.1%) underwent revision (CVI 0.70–0.78). The highest-rated definitions were refined into two consensus-based versions: a short definition ("A scientist is a person who conducts research") and a detailed one emphasizing hypothesis formulation and knowledge dissemination. Final validation yielded CVIs of 0.82 and 0.84, respectively, confirming strong expert agreement on both definitions. Conclusion: This study developed two validated definitions of "scientist" emphasizing systematic research and knowledge dissemination. These definitions clarify the concept of scientific identity, providing a flexible yet rigorous framework applicable across academic, interdisciplinary, and policy-making contexts.
Introduction

The term "scientist" has undergone significant transformation since its inception, reflecting the dynamic nature of scientific inquiry and the evolving landscape of knowledge. This lack of clarity stems from the diverse roles and contributions of individuals in scientific fields, the evolving nature of research, and the interdisciplinary scope of modern science. Historically, figures such as Galileo and Newton were regarded as natural philosophers, a reflection of an earlier framework for knowledge production that has evolved alongside modern scientific advancements. Before the twentieth century, a scientist was commonly referred to as a "man of science," a "natural philosopher," or by various other designations [1,2]. In contemporary contexts, scientists operate across a broad spectrum of fields, including medicine, biology, chemistry, physics, and the social sciences, each employing methodologies tailored to their specific inquiries. For instance, biologists may design experiments to test hypotheses about living organisms, while social scientists might use qualitative methods to explore human behavior [3]. The Science Council defines a scientist as an individual who methodically collects and applies research and evidence to develop hypotheses, performs experiments, and shares results to advance knowledge in their field [4], while the National Cancer Institute defines a scientist as an individual with a background in science, particularly someone actively engaged in a specific area of research [5]. This diversity in practices underscores the challenge of defining "scientist" in a way that captures the breadth of their contributions. The plurality of definitions extends to global organizations and frameworks. For example, the United Nations Educational, Scientific, and Cultural Organization highlights the critical role of scientists in addressing global challenges and promoting sustainable development.
This definition broadens the scope to include individuals working in multidisciplinary teams or applying scientific knowledge to public policy and societal issues. Similarly, some academic discussions focus on the characteristics of a scientist, such as curiosity, skepticism, and a commitment to evidence-based conclusions, rather than formal qualifications or job titles [6]. Unlike well-defined professions such as medicine or engineering, where specific educational pathways and professional titles (e.g., "doctor" or "engineer") confer clear identities, the term "scientist" lacks a universally recognized credentialing system. This absence can lead to underrepresentation or misrepresentation of scientific expertise, especially in interdisciplinary and collaborative contexts [7]. For example, the growing integration of data science in biology or physics illustrates the importance of understanding who qualifies as a scientist to ensure effective communication and collaboration among stakeholders. The absence of a standardized definition poses practical challenges for scientific communication, policymaking, and inclusivity. This study aims to address this gap by engaging scholars across disciplines to develop a consensus-based definition of "scientist." By recognizing the diverse and interdisciplinary contributions of scientists, such a definition could enhance collaboration, improve public understanding, and inform policies that support the scientific community.

Methods

Study design and participants

A total of 156 scholars (out of 300 invited) participated in this study. Eligibility was determined based on the scholars' substantial academic expertise, evidenced by the achievement of at least 1,000 citations within their respective fields. This criterion ensured that participants had significant research experience and were highly qualified to contribute to the formulation of a consensus-based definition of "scientist."
Participants were recruited through a convenience sampling method, and data were collected via a structured survey administered through Google Forms. While convenience sampling was used due to the accessibility of high-citation scholars, efforts were made to ensure disciplinary diversity to mitigate potential bias. Personalized invitations were sent via email to each scholar to facilitate their inclusion in the study.

Sample size determination

The sample size was determined using G*Power statistical software (version 3.1.9.7), employing a two-tailed goodness-of-fit test with an effect size of 0.5, an alpha error probability of 0.05, and a statistical power of 0.95. According to the calculations, a minimum of 80 participants was required to achieve statistically valid results. Consequently, 156 scholars were recruited to participate in the study, ensuring robust representation and adequate statistical power.

Data collection

Fourteen proposed definitions of "scientist," curated from existing literature and expert contributions, were presented to the enrolled scholars for evaluation (Table 1). Each definition was rated on a nine-point Likert scale ranging from "Strongly Disagree" to "Strongly Agree." Responses were systematically recorded and compiled in an Excel sheet for subsequent analysis. This process facilitated the systematic capture of scholarly consensus on each definition.

Table 1. Respondent Agreement on Various 'Scientist' Definitions. Each definition was rated on the same nine-point scale (Strongly Disagree, Moderately Disagree, Disagree, Slightly Disagree, Undecided, Slightly Agree, Moderately Agree, Agree, Strongly Agree).
  • A person studying or has expert knowledge of one or more natural or physical sciences. (Oxford Dictionary)
  • An expert who studies or works in one of the sciences. (Cambridge Dictionary)
  • A person learned in science and especially natural science. (Merriam-Webster Dictionary)
  • A scientist is someone who systematically gathers and uses research and evidence, to make hypotheses and test them, to gain and share understanding and knowledge. (Science Council)
  • A scientist is someone who has studied science and whose job is to teach or do research in science. (Collins Dictionary)
  • An expert in science, especially one of the physical or natural sciences. (Dictionary)
  • A scientist is a person with some kind of knowledge or expertise in any of the sciences. (Vocabulary dictionary)
  • A person who is trained in a science and whose job involves doing scientific research or solving scientific problems. (Britannica Dictionary)
  • A person who has studied science, especially one who is active in a particular field of investigation. (National Cancer Institute)
  • Someone who works or is trained in science. (Longman Dictionary)
  • A person whose profession is investigating in one of the natural sciences. (Your Dictionary)
  • A person who is engaged in and has expert knowledge of a science. (Free Dictionary)
  • Someone whose job or education is about science. (LanGeek Dictionary)
  • A scientist is a person who researches to advance knowledge in an area of the natural sciences. (Wikipedia)

Data analysis

The Content Validity Index (CVI) was employed to assess the relevance of and agreement with the definitions. Definitions with a CVI below 0.70 were excluded, as they failed to meet the minimum threshold for consensus. Definitions with a CVI between 0.70 and 0.78 underwent a second round of evaluation, with refined wording sent back to the same scholars for further review. Definitions achieving a CVI above 0.78 were deemed sufficiently valid for inclusion in the final analysis [8]. These definitions formed the foundation for the development of a unified, consensus-based definition of "scientist."

Ethical considerations

Participation in the study was entirely voluntary, and all responses were anonymized to preserve participant confidentiality.

Results

Initially, of the 14 proposed definitions of the term "scientist," six (42.9%) received a CVI score below the threshold of 0.70 and were consequently excluded from further consideration.
In contrast, seven definitions (50.0%) demonstrated strong content validity with CVI scores equal to or exceeding 0.78 and were therefore retained for subsequent synthesis and analysis. Only one definition (7.1%) fell within the intermediate range, with a CVI between 0.70 and 0.78 (Table 2).

Table 2. Comparison of Agreement and CVI Across Proposed Definitions of 'Scientist'. (CVI thresholds: Accepted ≥ 0.78, Revised 0.70–0.78, Excluded < 0.70.)
  • Science Council: Agree 146, Disagree 8, Undecided 2; CVI 0.94; Accepted
  • Britannica Dictionary: Agree 136, Disagree 16, Undecided 4; CVI 0.87; Accepted
  • Cambridge Dictionary: Agree 130, Disagree 20, Undecided 6; CVI 0.83; Accepted
  • Wikipedia: Agree 125, Disagree 27, Undecided 4; CVI 0.80; Accepted
  • Free Dictionary: Agree 124, Disagree 25, Undecided 7; CVI 0.79; Accepted
  • National Cancer Institute: Agree 124, Disagree 25, Undecided 7; CVI 0.79; Accepted
  • Collins Dictionary: Agree 122, Disagree 31, Undecided 3; CVI 0.78; Accepted
  • Oxford Dictionary: Agree 120, Disagree 31, Undecided 5; CVI 0.77; Revised
  • Longman Dictionary: Agree 104, Disagree 42, Undecided 10; CVI 0.67; Excluded
  • Your Dictionary: Agree 103, Disagree 48, Undecided 5; CVI 0.66; Excluded
  • Dictionary (generic): Agree 91, Disagree 52, Undecided 13; CVI 0.58; Excluded
  • Vocabulary dictionary: Agree 88, Disagree 58, Undecided 10; CVI 0.56; Excluded
  • Merriam-Webster Dictionary: Agree 81, Disagree 64, Undecided 11; CVI 0.52; Excluded
  • LanGeek Dictionary: Agree 79, Disagree 67, Undecided 10; CVI 0.51; Excluded

Through a rigorous, iterative evaluation process involving expert feedback, the definitions with the highest CVI scores (those above 0.78) were integrated and refined into two distinct, consensus-based definitions of the term "scientist."
The first was a concise definition: “A scientist is a person who conducts research.” The second was a more comprehensive and elaborated definition: “A scientist is someone who systematically conducts or gathers and uses research to formulate hypotheses and test them, in order to gain and disseminate understanding and knowledge.” These two final definitions were subsequently circulated among the panel of scholars for a second round of evaluation, during which they were asked to rate the definitions for content validity. The short definition received a CVI of 0.82, while the more detailed definition attained a slightly higher CVI of 0.84, reflecting strong agreement among the experts. Although no additional formal qualitative feedback was solicited at this stage, minor wording adjustments were made based on informal suggestions received during this validation round.

Discussion

The role of a scientist extends far beyond the stereotypical image of an individual in a white coat working exclusively in a laboratory setting. Careers grounded in scientific expertise are remarkably diverse, encompassing domains such as research, education, industry, and regulatory affairs. The Science Council categorizes scientists into 10 different types, highlighting the diversity of scientific roles beyond the stereotypical lab-based researcher. These include experimental scientists, theoretical scientists, data scientists, and more, reflecting the broad spectrum of scientific work today [9]. Definitions of the term “scientist” vary, yet they generally converge on the principles of systematic inquiry, evidence-based investigation, and the pursuit of knowledge across various disciplines. For instance, the Oxford Advanced Learner’s Dictionary and the Britannica Dictionary emphasize formal training and research functions, typically within the natural sciences such as biology, chemistry, or physics [10,11].
In contrast, contemporary perspectives, such as those discussed by the American Association for the Advancement of Science in 2024, recognize a broader spectrum of scientific engagement, encompassing both professional researchers and individuals committed to understanding the world through observation, experimentation, and analysis [12]. In light of this diversity, the present study aimed to clarify and formalize the definition of a "scientist" through expert consensus. Two definitions were developed: a concise definition “A scientist is a person who conducts research”, and a comprehensive definition “A scientist is someone who systematically conducts or gathers and uses research to formulate hypotheses and test them, in order to gain and disseminate understanding and knowledge.” These definitions encapsulate the core activities and guiding principles of scientific inquiry, emphasizing both methodological rigor and the essential role of knowledge dissemination across disciplines. A key finding of this study lies in its recognition of the evolving tension between disciplinary specialization and the increasing importance of interdisciplinary collaboration. As highlighted in contemporary analyses of interdisciplinary research and development, scientists now frequently operate at the intersection of multiple fields, such as nanomedicine, where the diversity and dissimilarity of collaborators’ knowledge can significantly enhance research productivity [13]. The concise definition, "A scientist is a person who conducts research" captures this shift by avoiding constraints tied to specific disciplinary boundaries. In contrast, the more detailed definition explicitly incorporates the systematic formulation and testing of hypotheses, along with the dissemination of knowledge, thereby reinforcing the structured and communicative nature of scientific inquiry. 
These elements align closely with UNESCO’s 2019 call for stronger science-society engagement and underscore the ethical responsibilities inherent in modern scientific practice [14]. The study’s findings also contribute to ongoing debates surrounding professional identity within the scientific community. In contrast to regulated professions such as medicine, the absence of a universal credentialing system for scientists complicates formal recognition, particularly in non-academic and interdisciplinary contexts. This ambiguity is reflected in the National Cancer Institute’s pragmatic definition of a scientist, which emphasizes active participation in research rather than reliance on formal titles or qualifications [15]. By anchoring the term “scientist” in core research activities rather than occupational labels, the consensus-based definitions proposed in this study offer a more inclusive framework. This approach accommodates emerging roles in fields such as data science and applied research, thereby addressing the risk of under-recognition in collaborative and cross-sector environments. The dual definitions, concise and comprehensive, offer flexibility for different contexts, a strategy aligned with the Science Council’s emphasis on methodological diversity [4]. The detailed definition’s focus on systematic inquiry and dissemination aligns with studies of interdisciplinary science, where “impassioned commitment” to shared goals drives innovation [13]. Simultaneously, the availability of a concise definition enhances clarity in public discourse and science communication, while the more detailed version provides the specificity necessary for institutional contexts such as policy development, research funding, and professional accreditation. Notably, the study’s findings also challenge enduring stereotypes of the “lone genius” scientist by highlighting the inherently collaborative and iterative nature of scientific practice.
Contemporary frameworks, such as those emerging from computational biology, suggest that scientific identity is increasingly dynamic, pluralistic, and shaped by collective knowledge production [16]. The process undertaken in this study, involving successive refinement and expert validation of definitions, closely mirrors the recursive logic of the scientific method itself. This methodological alignment is particularly salient in fields like nutritional epidemiology, where the replication of findings remains a persistent challenge and iterative inquiry is essential for refining evidence [17]. Despite the methodological rigor and expert involvement, several limitations should be acknowledged. First, the study employed convenience sampling, which may introduce selection bias and limit the generalizability of the findings. Although participants were selected based on a minimum citation threshold to ensure scholarly expertise, this criterion may have inadvertently excluded emerging researchers or experts with significant practical contributions who have not yet achieved high citation metrics. Second, the use of an online survey format may have constrained participant engagement, as scholars with limited availability or a preference for alternative formats may have been underrepresented. Additionally, response bias cannot be ruled out, as those with a particular interest in the topic or in defining scientific identity may have been more inclined to participate, potentially skewing the results. Future refinements of the definition should also consider voices from non-academic scientific contexts, including those in industry, policy, and community-based science, who are increasingly central to addressing complex global challenges.

Conclusion

By engaging experienced scholars across disciplines, this study establishes two validated definitions of “scientist” that emphasize systematic research activity and knowledge dissemination.
These definitions offer a structured yet adaptable framework for understanding scientific identity, balancing clarity with flexibility. They help address the ambiguity surrounding the term “scientist,” providing a foundation for improved communication, interdisciplinary collaboration, and evidence-informed policy development. Importantly, they remain open to future refinement as scientific practice continues to evolve.

Declarations

Conflicts of interest: The authors have no conflicts of interest to disclose.
Ethical approval: Not applicable.
Patient consent (participation and publication): Not applicable.
Funding: The present study received no financial support.
Acknowledgements: None to be declared.
Authors' contributions: JG, MM, SB, BS, VS, ASN, SHM, HAH, AGH, ADS, RAK, WRR, AB, GB, SS, SN, CJ, PL, MSS, ZK, MC, AM, SK, FCT, FB, FRK, MAM, AA, VK, DH, PM, VRM, MSA, EA, and RV were significant contributors to the conception of the study, voting for the items. FHK, BAA, and AMM were involved in the literature review, manuscript writing, and data analysis and interpretation. FHK and AMM confirmed the authenticity of all the raw data. All authors have read and approved the final version of the manuscript.
Use of AI: ChatGPT-3.5 was used to assist in language editing and improving the clarity of the manuscript. All content was reviewed and verified by the authors. Authors are fully responsible for the entire content of their manuscript.
Data availability statement: Not applicable.
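The study's three-way CVI decision rule (accept at ≥ 0.78, revise between 0.70 and 0.78, exclude below 0.70) can be sketched as a small triage function and checked against the CVI values reported in Table 2; the function and variable names are illustrative, not the authors' code:

```python
from collections import Counter

# CVI triage rule from the study: accepted >= 0.78, revised 0.70-0.78,
# excluded < 0.70, applied to the 14 CVI values listed in Table 2.

def triage(cvi):
    if cvi >= 0.78:
        return "accepted"
    if cvi >= 0.70:
        return "revised"
    return "excluded"

table2_cvis = [0.94, 0.87, 0.83, 0.80, 0.79, 0.79, 0.78,  # accepted
               0.77,                                       # revised
               0.67, 0.66, 0.58, 0.56, 0.52, 0.51]         # excluded
counts = Counter(triage(c) for c in table2_cvis)
print(counts["accepted"], counts["revised"], counts["excluded"])  # 7 1 6
```

Applied to the published values, the rule reproduces the reported split of seven accepted, one revised, and six excluded definitions.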

  • Research Article
  • Cited by 4
  • 10.3233/wor-205297
Development and validation of assessment tool of knowledge, attitude, and practice of outdoor workers regarding heat stress.
  • Mar 25, 2022
  • Work (Reading, Mass.)
  • Mahboobeh Khorsandi + 6 more

Improving the level of knowledge, attitude and practices of workers exposed to heat stress using a suitable tool can be a cheap and effective method. This requires the consideration of personal, environmental and social factors, for which the PRECEDE model is highly applicable. Thus, the aim of the present study was to develop an assessment tool for measuring the knowledge, attitude and practices of workers in outdoor occupations regarding heat stress exposure, using the PRECEDE model. In the present study, a tool was designed and constructed using the PRECEDE model by analyzing the relevant literature and expert opinion. The face validity of the tool was determined based on the opinion of ten experts with experience in the field of occupational weather conditions. The content validity of the tool was determined using the Content Validity Ratio (CVR) and the Content Validity Index (CVI). Cronbach's alpha reliability coefficient was used to determine the reliability of the tool's internal consistency. SPSS version 23 was used for statistical analysis. A PRECEDE-based questionnaire was designed with a total of 55 questions consisting of predisposing factors (28 questions for knowledge and 14 questions for attitude), enabling factors (5 questions), reinforcing factors (3 questions) and preventive behaviors (5 questions). The Content Validity Index (CVI) of all questions was above 0.79. The Content Validity Ratio (CVR) of all questions was above 0.62 (Lawshe method). The Cronbach's alpha reliability coefficients of all PRECEDE domains were above the 0.7 acceptable value. Based on the results obtained, all 55 questions were approved and thus the content validity and reliability of this tool were deemed acceptable.
Considering the reliability and validity of this tool, its application is recommended in all health and safety inspections within various industries for measuring the heat stress knowledge, attitude and practices of workers engaged in outdoor occupations and also for presenting suitable solutions or preventive measures.
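The Lawshe content validity ratio used in this and other studies on this page has a simple closed form: CVR = (n_e − N/2) / (N/2), where n_e is the number of panelists rating an item "essential" out of N panelists. A sketch with hypothetical counts; the 0.62 cutoff is Lawshe's published critical value for a 10-member panel, which matches the threshold reported above:

```python
# Lawshe's content validity ratio: CVR = (n_e - N/2) / (N/2).
# For a 10-member panel, Lawshe's critical value is 0.62; items at or above
# it are retained. The counts below are hypothetical.

def cvr(n_essential, n_panelists):
    half = n_panelists / 2
    return (n_essential - half) / half

print(cvr(9, 10))  # 0.8 -> above 0.62, item retained
print(cvr(8, 10))  # 0.6 -> below 0.62, item dropped
```

The ratio runs from −1 (no one rates the item essential) through 0 (exactly half do) to +1 (unanimous), which is why the critical value shrinks as the panel grows.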

  • Research Article
  • 10.32598/rj.23.1.3283.1
Design and Evaluation of Psychometric Properties of the “Assessment of Social, Emotional, and Behavioral Disorders in Preschool Children With Stuttering Questionnaire” (for Parents)
  • Apr 1, 2022
  • Journal of Rehabilitation
  • Masoomeh Amirkhani + 2 more

Objective: Stuttering is not just a speech disorder. It can cause negative feelings and emotions in a person especially in children. Therefore, we need a standard tool to identify, study and measure these negative effects and then try to eliminate or reduce them. Considering the increased clinical and research needs for a valid and reliable tool to assess the negative effect of stuttering in Iran, this study aims to design and validate a questionnaire (parents form) for assessment of social/emotional /behavioral disorders in preschool children with stuttering Materials & Methods: This is a methodological study that was conducted in three steps in spring and summer of 2018 in Isfahan, Iran. Participants included 60 parents of preschool children with stuttering aged 3 to 5 years and 11 months (36 boys and 24 girls), who had referred to speech therapy clinics in Isfahan. In the first stage, 10 parents received in-depth and open-ended interviews to collect information about social, emotional and behavioral disorders in children with stuttering. Afterwards, based on these information and the opinions of experts in psychology and speech therapy, the main constructs of the questionnaire were identified: "uncompromising/ hyperactive behavior", "social skills", "communication skills", "aggression", "fear", and "separation anxiety". In the second stage, in order to determine the face validity, 10 experts were interviewed face to face. In order to determine the content validity, 15 other experts were asked to examine each item according. Content validity was determined by calculating the content validity ratio (CVR), content validity index (CVI), and total content validity (Lawshe method). Confirmatory factor analysis was used to examine the construct validity. 
In the last step, internal consistency was determined by calculating Cronbach's alpha coefficient, and reliability was determined by the test-retest method and the intraclass correlation coefficient (ICC). Results: The initial version consisted of 61 items and 6 subscales. On the experts' advice, items with overlapping concepts were merged, reducing the number of items to 40. In the face validity assessment stage, items were examined for difficulty, relevance, and ambiguity. The phrases were read several times, and the experts' opinions were applied. The questionnaire was then reviewed by two experts in Persian literature and approved. The CVR and CVI values were 0.76 and 0.90, respectively. For internal consistency, the Cronbach's alpha coefficient was 0.89, and the ICC was greater than 0.7 and significant (P<0.001). For construct validity, assessed in AMOS software, the calculated χ2 was 631.25; the low χ2 value indicated a good fit of the model (P<0.000). Conclusion: The designed questionnaire has acceptable validity and reliability and can be used as a reliable and valid tool for assessing social/emotional/behavioral disorders in Iranian preschool children with stuttering.
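The Lawshe-method quantities this abstract reports (CVR per item and item-level CVI) follow standard formulas; a minimal sketch, with illustrative expert counts that are not taken from the study:

```python
def content_validity_ratio(n_essential, n_experts):
    """Lawshe's CVR: (n_e - N/2) / (N/2), where n_e experts of N
    rate the item "essential". Ranges from -1 to 1."""
    return (n_essential - n_experts / 2) / (n_experts / 2)

def item_cvi(ratings, relevant=(3, 4)):
    """Item-level CVI: the proportion of experts rating the item
    3 or 4 on a 4-point relevance scale."""
    return sum(r in relevant for r in ratings) / len(ratings)

# Hypothetical example: 12 of 15 experts rate an item "essential"
print(round(content_validity_ratio(12, 15), 2))  # -> 0.6
# Hypothetical panel of 5 relevance ratings
print(item_cvi([4, 4, 3, 2, 4]))  # -> 0.8
```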

  • Research Article
  • Cited by 25
  • 10.1177/1046878120958745
Toward Defining Healthcare Simulation Escape Rooms
  • Oct 14, 2020
  • Simulation & Gaming
  • Mindi Anderson + 4 more

Background. Escape rooms have been adapted for a range of educational purposes across disciplines, including healthcare simulation. The use of this technique has become increasingly popular among industry and faculty members. We sought to clarify the characteristics of healthcare simulation escape rooms in order to work toward a shared mental model and definition. Methods. A scoping review of the literature, with an environmental scan of websites and other public information, was performed to identify concepts that describe educational and healthcare simulation escape rooms, to differentiate between them, to determine the key features and scope of this clinical education tool, and to provide an interprofessional definition. Twenty-three references were used. Results. Healthcare simulation escape rooms share many of the characteristics of those used in education generally and may be utilized for teaching a variety of skills. They may be conducted within a simulation scenario, within the simulation/simulated environment, and/or with associated equipment. It is essential that the development and implementation of these escape rooms follow design standards of best practice for healthcare simulation for optimal learning. Only one definition of simulation escape rooms was found. Discussion. While similar to escape rooms utilized in other forms of education, there are principal differences between those escape rooms and ones used in healthcare simulation. Key features include utilization of core healthcare simulation principles, including providing a safe and realistic learning environment. Conclusion. Escape rooms may be used to engage learners in a simulation experience. It is important to differentiate between true simulation escape rooms and escape rooms that do not reflect healthcare simulation-based learning experiences.
An expanded definition is provided, as reflected by the literature review, to provide a clearer understanding of the term as applied to healthcare simulation and enhance repeatable studies to advance the science of healthcare simulation.

  • Research Article
  • Cited by 5
  • 10.15253/2175-6783.20202143694
Validation of content and appearance of an educational manual to promote children’s mental health
  • Jun 18, 2020
  • Rev Rene
  • Marina Nolli Bittencourt + 5 more

Objective: to validate the content and appearance of an educational manual to promote children’s mental health. Methods: this is a methodological study in which 16 specialists (health and education professionals) and six members of the target audience (nurses) participated. The Content Validity Index and the Agreement Index were calculated from responses to the instrument, which assessed the objectives, appearance, structure, organization, relevance, and didactics of the manual. Results: the Content Validity Index was 0.984 for the target audience, and the Agreement Index was 100.0%. The main changes were to the title, objectives, and theoretical framework, along with the exclusion of workshops and spelling and grammatical revision. Conclusion: the manual presented satisfactory Content Validity and Agreement indexes. It can assist nurses and other health professionals in promoting children’s mental health and fostering the development of emotional intelligence.

  • Research Article
  • Cited by 20
  • 10.1016/j.nedt.2021.104907
Self-debriefing in healthcare simulation: An integrative literature review
  • Apr 17, 2021
  • Nurse Education Today
  • Valorie Mackenna + 4 more


  • Research Article
  • 10.3760/cma.j.issn.1673-677x.2019.01.003
Analysis on effects and course demand of simulation operations specialists in medical simulation education
  • Jan 1, 2019
  • Zhenye Xu + 7 more

Objective: To investigate the effect and function of simulation operations specialists (SOSs) in assisting healthcare simulation courses, and to provide a reference for the development of the SOS role and of healthcare simulation education in China. Methods: Based on data from the Ruijin healthcare simulation center, 194 medical simulation instructors from 93 medical institutions or colleges in 26 cities were surveyed about their knowledge of and needs for SOSs, and 19 SOSs from 9 medical colleges were surveyed about their actual work status and self-evaluation, using an online mobile survey. Results: Instructors had 5.5 (2.0, 9.5) years of teaching experience but only 1 (1.0, 1.0) year of simulation experience. 70% of instructors were not familiar with SOSs, and 85.1% (165/194) needed help with aspects of simulated teaching, yet the use of SOSs was lower than the actual demand (χ2 = 7.645, P 0.05). A discrepancy exists between SOSs' actual work and their responsibilities in theory. Of the 19 SOSs, only 4 communicated sufficiently with instructors before and after courses, 15 felt the workload was very heavy, 14 reported lacking time or being disturbed by other work, 13 needed professional training, and only 5 were satisfied with their income, while instructors thought SOSs should be paid more (χ2 = 19.363, P<0.05). Conclusions: As an important resource of the simulation center, SOSs directly affect the center's service capacity and teaching quality. Professional qualification training should be actively promoted to improve professional technical ability for different service demands and to promote communication and understanding between simulation course faculty and SOSs. The ratio of SOSs to work demand should be evaluated and improved, with increased investment in ancillary facilities.
Simultaneously, more attention should be paid to establishing SOS staff's sense of professional value, and SOS team building should be treated as a core competence of the simulation center. Key words: Simulation operations specialists (SOS); Healthcare simulation; Simulation center; Course operation; Instructor map; Train the trainer

  • Research Article
  • 10.58439/hnp.v3i1.311
Validity, Sensitivity and Reliability of HAPUs E-book App for Preventing Incident of Pressure Ulcer on Patients
  • Jan 30, 2025
  • Holistic Nursing Plus
  • Renty Ahmalia

Background: Pressure ulcers, or bedsores, are a global healthcare challenge, impacting patient well-being and placing significant economic burdens on healthcare systems. Despite advances in prevention strategies, hospital-acquired pressure injuries (HAPIs) remain prevalent due to aging populations, chronic conditions, and gaps in healthcare practices. Aims: This study evaluates the validity, reliability, sensitivity, and specificity of the HAPUs E-book app as a digital tool for educating nurses and preventing pressure ulcers. Methods: Using the Content Validity Index (CVI), eight experts (five wound care specialists and three IT experts) assessed the app's content validity. Sensitivity and specificity were calculated from trials involving 30 nurses, while Cronbach's Alpha Coefficient was used to evaluate internal consistency across key components. Results: The app achieved a CVI score of 1.00, indicating strong content validity. Sensitivity was 95.83%, and specificity was 83.33%, demonstrating high diagnostic accuracy. Cronbach's Alpha scores ranged from 0.73 to 0.82, confirming strong internal consistency and reliability. Conclusion: The HAPUs E-book app is a reliable tool for nursing education and pressure ulcer prevention. Its high sensitivity, specificity, and validated content make it suitable for clinical use and continuous professional development. However, further studies with larger populations are needed to generalize findings and assess long-term outcomes. Keywords: Pressure ulcers; Content Validity Index; CVI; mHealth; Validity
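The diagnostic-accuracy figures this abstract reports follow the standard definitions (sensitivity = TP/(TP+FN), specificity = TN/(TN+FP)). A minimal sketch; the confusion-matrix counts below are hypothetical values chosen to reproduce the reported percentages for the 30-nurse trial, not figures from the study:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); Specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts: 24 true cases and 6 non-cases among 30 nurses
sens, spec = sensitivity_specificity(tp=23, fn=1, tn=5, fp=1)
print(f"sensitivity {sens:.2%}, specificity {spec:.2%}")
# -> sensitivity 95.83%, specificity 83.33%
```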

  • Research Article
  • Cited by 5
  • 10.3390/informatics7030026
Crossing the Power Line: Using Virtual Simulation to Prepare the First Responders of Utility Linemen
  • Jul 29, 2020
  • Informatics
  • Alaina Herrington + 1 more

Virtual reality (VR) healthcare simulation has helped learners develop skills that are transferable to real-world conditions. Innovative strategies are needed to train workers to improve community safety. The purpose of this pilot project was to evaluate the use of a VR simulation applying the International Nursing Association for Clinical Simulation and Learning (INACSL) Standards of Best Practice: SimulationSM Simulation Design with eight power line workers. Six power industry supervisors and educators assisted in facilitating three VR simulations with eight linemen participants. Kotter’s eight steps to leading change and the INACSL Standards of Best Practice: SimulationSM Simulation Design were utilized in working with energy leaders and VR developers to carry out this pilot project. Pre- and post-implementation surveys demonstrated a 28% improvement in participants’ learning outcomes. All three learning objectives were met. This project demonstrated the successful application of a translational framework and the INACSL Standards of Best Practice: SimulationSM in a VR context in the power industry. This process may be helpful to guide or inspire further adoption of VR in unconventional settings.

  • Front Matter
  • Cited by 13
  • 10.1002/nur.21713
Under-Appreciated Steps in Instrument Development, Part I: Starting With Validity.
  • Feb 9, 2016
  • Research in nursing & health
  • Margaret H Kearney


  • Research Article
  • 10.1111/jep.70041
Development of an Audit Tool to Evaluate End of Life Care in the Emergency Department: A Face and Content Validity Study.
  • Feb 1, 2025
  • Journal of evaluation in clinical practice
  • Melissa Heufel + 8 more

Emergency Departments (EDs) are increasingly caring for patients with acute, chronic, and terminal conditions requiring End of Life Care (EOLC). There is no published and validated tool available to evaluate the EOLC delivered to patients dying in the ED. This study describes the face and content validity testing process used to develop, refine, and test a new and unique audit tool to evaluate EOLC in the ED. The face and content validation process used a three-round modified-Delphi technique. We consulted 11 experts to assess the proposed 89 items. Face validity explored overall appropriateness and relevance, and content validity examined relevance ratings using the Content Validity Index (CVI) 4-point Likert scale over two rounds. Iterative assessment of ratings led to inclusion (CVI > 0.78), revision (CVI 0.65 to < 0.78), or exclusion (CVI < 0.65) of items from the tool. Of the initial 89 items, 66 were included (CVI > 0.78), 16 were revised (scores 0.65 to < 0.78), seven were removed (scores < 0.65), and two new items were suggested. Items covered the constructs patient characteristics, circumstances of death, ED performance, communication and care planning, recognition of dying, care delivery, and needs of families and carers. The scale CVI achieved 0.90. The consolidated list of 81 items achieved acceptable face validity and excellent content validity. Face and content validity testing of the ED EOLC audit tool achieved acceptable item-CVI scores and an excellent scale-CVI score. We recommend external validation of its components in real-life settings to monitor and set locally relevant clinical practice benchmarks.
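The item triage this abstract describes is a simple threshold rule on each item's CVI. A minimal sketch of those cut-offs; the sample scores are hypothetical, and the handling of a score of exactly 0.78 (treated here as "revise") is an assumption, since the abstract's bands leave that boundary ambiguous:

```python
def triage_item(cvi):
    """Apply the study's modified-Delphi cut-offs to an item-CVI score:
    include if CVI > 0.78, revise if 0.65 <= CVI <= 0.78, else exclude."""
    if cvi > 0.78:
        return "include"
    if cvi >= 0.65:
        return "revise"
    return "exclude"

# Hypothetical item-CVI scores
print([triage_item(c) for c in (0.91, 0.70, 0.55)])
# -> ['include', 'revise', 'exclude']
```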

  • Research Article
  • Cited by 443
  • 10.1186/s41077-017-0043-4
The Association of Standardized Patient Educators (ASPE) Standards of Best Practice (SOBP)
  • Jun 27, 2017
  • Advances in Simulation
  • Karen L Lewis + 8 more

In this paper, we define the Association of Standardized Patient Educators (ASPE) Standards of Best Practice (SOBP) for those working with human role players who interact with learners in a wide range of experiential learning and assessment contexts. These human role players are variously described by such terms as standardized/simulated patients or simulated participants (SP or SPs). ASPE is a global organization whose mission is to share advances in SP-based pedagogy, assessment, research, and scholarship as well as support the professional development of its members. The SOBP are intended to be used in conjunction with the International Nursing Association for Clinical Simulation and Learning (INACSL) Standards of Best Practice: SimulationSM, which address broader simulation practices. We begin by providing a rationale for the creation of the ASPE SOBP, noting that with the increasing use of simulation in healthcare training, it is incumbent on ASPE to establish SOBP that ensure the growth, integrity, and safe application of SP-based educational endeavors. We then describe the three and a half year process through which these standards were developed by a consensus of international experts in the field. Key terms used throughout the document are defined. Five underlying values inform the SOBP: safety, quality, professionalism, accountability, and collaboration. Finally, we describe five domains of best practice: safe work environment; case development; SP training for role portrayal, feedback, and completion of assessment instruments; program management; and professional development. Each domain is divided into principles with accompanying key practices that provide clear and practical guidelines for achieving desired outcomes and creating simulations that are safe for all stakeholders. Failure to follow the ASPE SOBP could compromise the safety of participants and the effectiveness of a simulation session. 
Care has been taken to make these guidelines precise yet flexible enough to address the diversity of varying contexts of SP practice. As a living document, these SOBP will be reviewed and modified periodically under the direction of the ASPE Standards of Practice Committee as SP methodology grows and adapts to evolving simulation practices.
