3,218 publications found
Analysis of applying a patient safety taxonomy to patient and clinician-reported incident reports during the COVID-19 pandemic: a mixed methods study

Background: The COVID-19 pandemic caused major disruption to healthcare delivery worldwide, forcing medical services to adapt their standard practices. Learning how these adaptations result in unintended patient harm is essential to mitigate future incidents. Incident reporting and learning system data can be used to identify areas to improve patient safety. A classification system is required to make sense of such data, to identify learning and priorities for further in-depth investigation. The Patient Safety (PISA) classification system was created for this purpose, but it is not known whether classification systems are sufficient to capture novel safety concepts arising from crises like the pandemic. We aimed to review the application of the PISA classification system during the COVID-19 pandemic to appraise whether modifications were required to maintain its meaningful use in the pandemic context.

Methods: We conducted a mixed-methods study integrating two phases in an exploratory, sequential design. This included a comparative secondary analysis of patient safety incident reports from two studies conducted during the first wave of the pandemic, in which we coded patient-reported incidents from the UK and clinician-reported incidents from France. The findings were presented to a focus group of experts in classification systems and patient safety, and a thematic analysis was conducted on the resultant transcript.

Results: We identified five key themes from the data analysis and expert group discussion: capitalising on the unique perspective of safety concerns from different groups; existing frameworks do identify priority areas for further investigation; the objectives of a study shape the data interpretation; the pandemic spotlighted long-standing patient concerns; and the time period in which data are collected offers valuable context to aid explanation. The group consensus was that no COVID-19-specific codes were warranted and that the PISA classification system was fit for purpose.

Conclusions: We have scrutinised the meaningful use of the PISA classification system during a period of systemic healthcare constraint, the COVID-19 pandemic. Despite these constraints, we found the framework can be successfully applied to incident reports to enable deductive analysis, identify areas for further enquiry, and thus support organisational learning. No new or amended codes were warranted. Organisations and investigators can use our findings when reviewing their own classification systems.

Open Access
Outbreak detection algorithms based on generalized linear model: a review with new practical examples

Public health surveillance serves a crucial function within health systems, enabling the monitoring, early detection, and warning of infectious diseases. Recently, outbreak detection algorithms have gained significant importance across various surveillance systems, particularly in light of the COVID-19 pandemic. These algorithms are approached from both theoretical and practical perspectives. The theoretical aspect entails the development and introduction of novel statistical methods that capture the interest of statisticians. In contrast, the practical aspect involves designing outbreak detection systems and employing diverse methodologies for monitoring syndromes, thus drawing the attention of epidemiologists and health managers. Over the past three decades, considerable efforts have been made in the field of surveillance, resulting in valuable publications that introduce new statistical methods and compare their performance. The generalized linear model (GLM) family has undergone various advancements in comparison to other statistical methods and models. This study aims to present and describe GLM-based methods, providing a coherent comparison between them. Initially, a historical overview of outbreak detection algorithms based on the GLM family is provided, highlighting commonly used methods. Furthermore, real data from measles and COVID-19 outbreaks are used to demonstrate examples of these methods. This study will be useful for researchers in both theoretical and practical aspects of outbreak detection methods, enabling them to familiarize themselves with the key techniques within the GLM family and facilitate comparisons, particularly for those with limited mathematical expertise.
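To make the regression-based detection idea concrete, here is a minimal sketch, with illustrative names and thresholds not taken from any of the reviewed algorithms: fit a log-linear trend to historical counts (a crude stand-in for a Poisson GLM with log link) and raise an alarm when the current count exceeds an upper prediction limit.

```python
import math

def detect_outbreak(history, current, z=2.58):
    """Fit a log-linear trend to historical counts (a crude stand-in for a
    Poisson GLM with log link) and flag `current` if it exceeds the upper
    prediction limit.  `history` is a list of past weekly counts."""
    n = len(history)
    xs = range(n)
    ys = [math.log(c + 1) for c in history]
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx
    intercept = ybar - slope * xbar
    resid = [y - (intercept + slope * x) for x, y in zip(xs, ys)]
    sd = math.sqrt(sum(r * r for r in resid) / (n - 2))
    pred = intercept + slope * n               # log-scale forecast for the new week
    upper = math.exp(pred + z * sd) - 1        # back-transformed upper limit
    return current > upper, upper

# A stable baseline of ~10 cases/week: 60 cases should trigger an alarm.
flagged, limit = detect_outbreak([9, 10, 11, 10, 9, 10, 11, 10], 60)
```

Production algorithms in this family, such as Farrington's, additionally handle seasonality, down-weight past aberrations, and use proper prediction intervals; the sketch only conveys the basic model-predict-threshold structure.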

Open Access
New approaches and technical considerations in detecting outlier measurements and trajectories in longitudinal children growth data

Background: Growth studies rely on longitudinal measurements, typically represented as trajectories. However, anthropometry is prone to errors that can generate outliers. While various methods are available for detecting outlier measurements, a gold standard has yet to be identified, and there is no established method for detecting outlying trajectories. Thus, outlier types and their effects on growth pattern detection still need to be investigated. This work aimed to assess the performance of six methods at detecting different types of outliers, to propose two novel methods for outlier trajectory detection, and to evaluate how outliers affect growth pattern detection.

Methods: We included 393 healthy infants from The Applied Research Group for Kids (TARGet Kids!) cohort and 1651 children with severe malnutrition from the co-trimoxazole prophylaxis clinical trial. We injected outliers of three types and six intensities and applied four outlier detection methods for measurements (model-based and World Health Organization cut-off-based) and two for trajectories. We also assessed growth pattern detection before and after outlier injection using time series clustering and latent class mixed models. Error type, intensity, and population affected method performance.

Results: Model-based outlier detection methods performed best for measurements, with precision between 5.72% and 99.89%, especially for low and moderate error intensities. The clustering-based outlier trajectory method had high precision of 14.93–99.12%. Combining methods improved the detection rate to 21.82% in outlier measurements. Finally, when comparing growth groups with and without outliers, the outliers were shown to alter group membership by 57.9–79.04%.

Conclusions: World Health Organization cut-off-based techniques performed well only in a few very particular cases (extreme errors of high intensity), while model-based techniques performed well, especially for moderate errors of low intensity. Clustering-based outlier trajectory detection performed exceptionally well across all types and intensities of errors, indicating a potential strategic change in how outliers in growth data are viewed. Finally, the importance of detecting outliers was shown, given its impact on child growth studies, as demonstrated by comparing the results of growth group detection.
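The two families of measurement-level methods contrasted above can be caricatured in a few lines: a fixed cut-off rule in the spirit of WHO z-score flags, and a model-based rule that flags points far from a child's fitted trajectory. The limits, threshold, and robust scaling below are illustrative assumptions, not the paper's implementations.

```python
import statistics

# Illustrative z-score limits in the spirit of WHO plausibility flags;
# the real cut-offs depend on the growth indicator.
WHO_STYLE_LIMITS = (-6.0, 6.0)

def flag_cutoff(zscores, limits=WHO_STYLE_LIMITS):
    """Cut-off rule: flag any measurement outside fixed z-score limits."""
    lo, hi = limits
    return [i for i, z in enumerate(zscores) if z < lo or z > hi]

def flag_model_based(ages, values, threshold=3.0):
    """Model-based rule: fit a per-child linear trajectory and flag points
    with a large robust standardized residual.  A crude stand-in for the
    mixed-model methods evaluated in the paper."""
    n = len(ages)
    abar, vbar = sum(ages) / n, sum(values) / n
    sxx = sum((a - abar) ** 2 for a in ages)
    slope = sum((a - abar) * (v - vbar) for a, v in zip(ages, values)) / sxx
    intercept = vbar - slope * abar
    resid = [v - (intercept + slope * a) for a, v in zip(ages, values)]
    med = statistics.median(resid)
    mad = statistics.median(abs(r - med) for r in resid)
    scale = 1.4826 * mad or 1.0           # MAD -> sd-consistent scale
    return [i for i, r in enumerate(resid) if abs(r - med) / scale > threshold]

# A weight trajectory (kg, ages in months) with one implausible entry,
# e.g. a unit error at month 12.
ages = [0, 3, 6, 9, 12, 15]
weights = [3.4, 5.6, 7.2, 8.4, 95.0, 10.1]
```

The robust (median/MAD) standardization matters here: with an ordinary standard deviation, a single extreme point inflates the scale enough to mask itself.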

Open Access
Covariate balance-related propensity score weighting in estimating overall hazard ratio with distributed survival data

Background: When data are distributed across multiple sites, sharing information at the individual level among sites may be difficult. In these multi-site studies, a propensity score model can be fitted either with data within each site or with data from all sites when using inverse probability-weighted Cox regression to estimate the overall hazard ratio. However, when there is unknown heterogeneity of covariates across sites, either approach may lead to bias or reduced efficiency. In this study, we propose a method to estimate the propensity score based on a covariate balance-related criterion and to estimate the overall hazard ratio while overcoming data sharing constraints across sites.

Methods: The proposed propensity score was generated by choosing between the global and local propensity scores based on a covariate balance-related criterion, combining the global propensity score fitted in the entire population and the local propensity score fitted within each site. We used this propensity score to estimate the overall hazard ratio of distributed survival data with multiple sites, requiring only summary-level information across sites. We conducted simulation studies to evaluate the performance of the proposed method. In addition, we applied the proposed method to real-world data to examine the effect of radiation therapy on time to death among breast cancer patients.

Results: The simulation studies showed that the proposed method improved the performance in estimating the overall hazard ratio compared with the global and local propensity score methods, regardless of the number of sites and the sample size in each site. Similar results were observed under both homogeneous and heterogeneous settings. Moreover, the proposed method yielded results identical to the pooled individual-level data analysis. The real-world data analysis indicated that the proposed method was more likely to find a significant effect of radiation therapy on mortality than the global and local propensity score methods.

Conclusions: The proposed covariate balance-related propensity score for multi-site distributed survival data outperformed both the global propensity score estimated using data from the entire population and the local propensity score estimated within each site in estimating the overall hazard ratio. The proposed approach can be performed without individual-level data transfer between sites and yields the same results as the corresponding pooled individual-level data analysis.
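The core selection idea — fit both a global and a local propensity score, then keep whichever achieves better covariate balance — can be sketched for a single covariate at one site. The inverse probability weights and standardized-mean-difference criterion below are one plausible reading of a "covariate balance-related criterion", not the authors' exact specification.

```python
import math

def weighted_smd(x, treat, weights):
    """Weighted standardized mean difference of covariate x between arms."""
    def arm(group):
        w = [wt for wt, t in zip(weights, treat) if t == group]
        v = [xi for xi, t in zip(x, treat) if t == group]
        m = sum(wi * vi for wi, vi in zip(w, v)) / sum(w)
        var = sum(wi * (vi - m) ** 2 for wi, vi in zip(w, v)) / sum(w)
        return m, var
    m1, v1 = arm(1)
    m0, v0 = arm(0)
    return abs(m1 - m0) / math.sqrt((v1 + v0) / 2 or 1.0)

def choose_propensity(x, treat, ps_global, ps_local):
    """Keep whichever propensity score (global or local) achieves better
    covariate balance under inverse probability weights."""
    best = None
    for label, ps in (("global", ps_global), ("local", ps_local)):
        w = [1 / p if t == 1 else 1 / (1 - p) for p, t in zip(ps, treat)]
        smd = weighted_smd(x, treat, w)
        if best is None or smd < best[1]:
            best = (label, smd)
    return best
```

In a site where treatment assignment follows its own local mechanism, a correctly fitted local score balances the covariate exactly, so the criterion selects it over a misspecified global score.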

Open Access
Exploring the perspectives of selectors and collectors of trial outcome data: an international qualitative study

Introduction: Selecting and collecting data to support appropriate primary and secondary outcomes is a critical step in designing trials that can change clinical practice. In this study, we aimed to investigate who contributes to the process of selecting and collecting trial outcomes, and how these people are involved. This work serves two main purposes: (1) it provides the trials community with evidence of how outcomes are currently selected and collected, and (2) it allows people involved in trial design and conduct to pick apart these processes to consider how efficiencies and improvements can be made.

Methods: One-with-one semi-structured interviews, supported by a topic guide to ensure coverage of key content. The Framework approach was used for thematic analysis of the data, and themes were linked through constant comparison of data both within and across participant groups. Interviews took place between July 2020 and January 2021. Participants were twenty-nine international trialists from various contributor groups, working primarily on designing and/or delivering phase III pragmatic effectiveness trials. Their experience spanned various funders, trial settings, clinical specialties, intervention types, and participant populations.

Results: We identified three descriptive themes encompassing the process of primary and secondary outcome selection, collection, and the publication of outcome data. Within these themes, participants raised the following issues: (1) Outcome selection: clarity of the research question; confidence in selecting trial outcomes and how confidence decreases with increased experience; interplay between different interested parties; how patients and the public are involved in outcome selection; perceived impact of poor outcome selection, including poor recruitment and/or retention; and use of core outcome sets. (2) Outcome collection: disconnect between decisions made by outcome selectors and the practical work done by outcome collectors; potential impact of outcome measures on trial participants; potential impact on trial staff workload; and use of routinely collected data. (3) Publication of outcome data: difficulty finding time to write and revise manuscripts for publication due to time and funding constraints. Participants overwhelmingly focused on the process of outcome selection, a topic they talked about unprompted. When prompted, participants did discuss outcome collection, but poor communication between selectors and collectors at the trial design stage means that outcome selection is rarely linked with the data collection workload it generates.

Discussion: People involved in the design and conduct of trials fail to connect decisions around outcome selection with data collection workload. Publication of outcome data and effective dissemination of trial results are hindered by the project-based culture of some academic clinical trial research.

Open Access
In health research publications, the number of authors is strongly associated with collective self-citations but less so with citations by others

Objective: This study investigated the associations between the number of authors and collective self-citations versus citations by others.

Study design and setting: We analyzed 88,594 health science articles published in 2015 and the citations they received until 2020. The main variables were the number of authors, the number of citations by co-authors (collective self-citations), and the number of citations by others.

Results: The number of authors correlated more strongly with the number of citations by co-authors than with citations by others (Spearman r = 0.31 vs. 0.23; mutually adjusted r = 0.26 vs. 0.12). The percentage of self-citations among all citations was 10.6% for single-authored articles and increased gradually with the number of authors, to 34.8% for ≥ 50 authors. Collective self-citations increased the proportion of articles reaching or exceeding 30 total citations by 0.7% for single-authored articles, but by 11.6% for articles written by ≥ 50 authors.

Conclusions: If citations by others reflect scientific utility, then another mechanism must explain the excess of collective self-citations observed for multi-authored articles. The results support the hypothesis that the authors' own motivations explain this excess. The evaluation of scientific utility should be based on citations by others, excluding collective self-citations.
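Operationally, a collective self-citation can be counted as any citing paper that shares at least one author with the cited article. A minimal sketch of that bookkeeping (assuming author names are already disambiguated):

```python
def split_citations(article_authors, citing_papers):
    """Count collective self-citations (citing papers that share at least
    one author with the cited article) versus citations by others.
    Author names are assumed to be already disambiguated."""
    authors = set(article_authors)
    self_cites = sum(1 for paper in citing_papers if authors & set(paper))
    return self_cites, len(citing_papers) - self_cites

# Two of the four citing papers share an author with the cited article.
counts = split_citations(["lee", "zhang"],
                         [["smith", "lee"], ["garcia"], ["lee", "chen"], ["park"]])
```

In practice, author disambiguation (shared names, initials, ORCID matching) is the hard part; the set intersection itself is trivial.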

Open Access
An improved multiply robust estimator for the average treatment effect

Background: In observational studies, double robust or multiply robust (MR) approaches provide more protection from model misspecification than inverse probability weighting and g-computation for estimating the average treatment effect (ATE). However, these approaches are based on parametric models, leading to biased estimates when all models are incorrectly specified. Nonparametric methods, such as machine learning or nonparametric double robust approaches, are robust to model misspecification, but their efficiency is low.

Methods: In this study, we proposed an improved MR method combining parametric and nonparametric models, based on a previous MR method (Han, JASA 109(507):1159-73, 2014), to improve both the robustness to model misspecification and the efficiency. We performed comprehensive simulations to evaluate the performance of the proposed method.

Results: Our simulation study showed that the MR estimators with only outcome regression (OR) models, where one of the models was nonparametric, were the most recommended because of their robustness to model misspecification and the lowest root mean square error (RMSE) when a correct parametric OR model was included. The performance of the recommended estimators remained comparable even when all parametric models were misspecified. As an application, the proposed method was used to estimate the effect of social activity on depression levels in the China Health and Retirement Longitudinal Study dataset.

Conclusions: The proposed estimator combining nonparametric and parametric models is more robust to model misspecification.
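For orientation, the sketch below implements a standard augmented IPW (doubly robust) estimator of the ATE, the simplest member of the family this work builds on; it is not Han's (2014) multiply robust estimator, which combines multiple candidate propensity and outcome models.

```python
def aipw_ate(treat, y, ps, mu1, mu0):
    """Augmented IPW (doubly robust) estimate of the ATE, given fitted
    propensity scores `ps` and outcome-regression predictions under
    treatment (`mu1`) and control (`mu0`).  Consistent if either the
    propensity or the outcome model is correctly specified."""
    n = len(y)
    total = 0.0
    for t, yi, p, m1, m0 in zip(treat, y, ps, mu1, mu0):
        term1 = m1 + t * (yi - m1) / p               # treated potential outcome
        term0 = m0 + (1 - t) * (yi - m0) / (1 - p)   # control potential outcome
        total += term1 - term0
    return total / n
```

When the outcome predictions fit the observed arms exactly, the inverse-weighted correction terms vanish and the estimator reduces to the g-computation contrast mean(mu1 − mu0); the MR extension replaces the single (ps, mu1, mu0) triple with weights built from several candidate models.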

Open Access
Can non-participants in a follow-up be used to draw conclusions about incidences and prevalences in the full population invited at baseline? An investigation based on the Swedish MDC cohort

Background: Participants in epidemiological cohorts may not be representative of the full invited population, limiting the generalizability of prevalence and incidence estimates. We propose that this problem can be remedied by exploiting data on baseline participants who refused to participate in a re-examination, as such participants may be more similar to baseline non-participants than baseline participants who agreed to the re-examination are.

Methods: We compared background characteristics, mortality, and disease incidences across the full population invited to the Malmö Diet and Cancer (MDC) study, the baseline participants, the baseline non-participants, the baseline participants who participated in a re-examination, and the baseline participants who did not. We then considered two models for estimating characteristics and outcomes in the full population: one ("the substitution model") assuming that the baseline non-participants were similar to the baseline participants who refused to participate in the re-examination, and one ("the extrapolation model") assuming that differences between the full group of baseline participants and the baseline participants who participated in the re-examination could be extended to infer results in the full population. Finally, we compared prevalences of baseline risk factors, including smoking, risky drinking, overweight, and obesity, across baseline participants, baseline participants who participated in the re-examination, and baseline participants who did not, and used the above models to estimate the prevalences of these factors in the full invited population.

Results: Compared to baseline non-participants, baseline participants were less likely to be immigrants, had higher socioeconomic status, and had lower mortality and disease incidences. Baseline participants not participating in the re-examination generally resembled the full population. The extrapolation model often generated characteristics and incidences even more similar to the full population. The prevalences of risk factors, particularly smoking, were estimated to be substantially higher in the full population than among the baseline participants.

Conclusions: Participants in epidemiological cohorts such as the MDC study are unlikely to be representative of the full invited population. Exploiting data on baseline participants who did not participate in a re-examination can be a simple and useful way to improve the generalizability of prevalence and incidence estimates.
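The two models can be written down directly. The sketch below is a simplified reading of them, assuming prevalences combine linearly with the baseline participation fraction; the paper's exact specification, especially for the extrapolation model, may differ.

```python
def substitution_estimate(p_participants, p_refusers, f_participation):
    """Substitution model: treat baseline non-participants as if they
    resembled baseline participants who refused the re-examination.
    `f_participation` is the baseline participation fraction."""
    return (f_participation * p_participants
            + (1 - f_participation) * p_refusers)

def extrapolation_estimate(p_participants, p_reexamined, f_participation):
    """Extrapolation model (one plausible linear form): extend the gap
    between all baseline participants and the re-examined subgroup in the
    same direction to reach the full invited population."""
    gap = p_participants - p_reexamined
    return p_participants + (1 - f_participation) / f_participation * gap
```

For example, with 70% baseline participation, a smoking prevalence of 20% among participants and 35% among re-examination refusers, the substitution model estimates a full-population prevalence above the participant figure, consistent with the direction reported in the Results.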

Open Access
Adjusting for Berkson error in exposure in ordinary and conditional logistic regression and in Poisson regression

Background: INTEROCC is a seven-country cohort study of occupational exposures and brain cancer risk, including occupational exposure to electromagnetic fields (EMF). In the absence of data on individual exposures, a Job Exposure Matrix (JEM) may be used to construct likely exposure scenarios in occupational settings. This tool was constructed using statistical summaries of exposure to EMF for various occupational categories for a comparable group of workers.

Methods: In this study, we use the Canadian data from INTEROCC to determine the best EMF exposure surrogate/estimate from three appropriately chosen surrogates from the JEM, along with a fourth surrogate based on Berkson error adjustments obtained via numerical approximation of the likelihood function. We examine the case in which exposures are gamma-distributed for each occupation in the JEM, as an alternative to the log-normal exposure distribution considered in a previous study by our research team. We also study the use of these surrogates and the Berkson error adjustment in Poisson regression and conditional logistic regression.

Results: Simulations show that the introduced methods of Berkson error adjustment for non-stratified analyses provide accurate estimates of the risk of developing tumors under the gamma exposure model. Alternatively, and under some technical assumptions, the arithmetic mean is the best surrogate when a gamma distribution is used as the exposure model. Simulations also show that none of the present methods provide an accurate estimate of the risk in stratified analyses.

Conclusion: While our previous study found the geometric mean to be the best exposure surrogate, the present study suggests that the best surrogate depends on the exposure model: the arithmetic mean for the gamma exposure model and the geometric mean for the log-normal exposure model. However, we present an improved method of Berkson error adjustment for each of the two exposure models. Our results provide useful guidance on the application of JEMs for occupational exposure assessments with adjustment for Berkson error.
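The competing surrogates are simple summaries of the exposure distribution within each JEM occupational category; for a right-skewed (e.g. gamma or log-normal) distribution the arithmetic mean always exceeds the geometric mean. A minimal sketch with illustrative data:

```python
import math

def exposure_surrogates(samples):
    """Arithmetic and geometric means of the exposure measurements for one
    JEM occupational category, as candidate exposure surrogates."""
    n = len(samples)
    am = sum(samples) / n
    gm = math.exp(sum(math.log(s) for s in samples) / n)
    return am, gm

# Right-skewed, gamma-like exposure values: the AM exceeds the GM.
am, gm = exposure_surrogates([0.5, 0.8, 1.2, 2.0, 6.5])
```

Which summary performs best as a surrogate depends on the assumed exposure model, which is the study's central point; the Berkson adjustment itself requires the full likelihood machinery and is not reproduced here.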

Open Access