Epidemiological features of the 2024 pertussis outbreak in Gyeonggi Province, Korea.
In 2024, Korea experienced a nationwide pertussis epidemic, with Gyeonggi Province accounting for nearly one-third of reported cases. This study investigated the epidemiological characteristics of the outbreak and explored the association between vaccination history and healthcare utilization. We analyzed 14,275 pertussis cases reported in Gyeonggi Province in 2024 using de-identified national surveillance data. Comparisons were performed by age group (<20 vs. ≥20 years) and, among those aged <20 years, by vaccination status. The chi-square and Mann-Whitney U tests were used, and effect sizes were assessed using Cramér's V. Of all cases, 89.8% occurred in individuals <20 years, particularly those aged 10-14 years. Children and adolescents were more often involved in clusters and had more identified contacts than adults, whereas adults had higher rates of hospitalization (13.2% vs. 5.9%) and emergency visits (4.4% vs. 0.9%; p<0.001). Among individuals <20 years, hospitalization was more common in the unvaccinated or unknown group (11.7%) than in the fully (5.9%) or partially vaccinated (5.5%) groups (p=0.045). The epidemic was concentrated in school-aged populations, particularly adolescents. While vaccination status showed a limited association with healthcare utilization, individuals who were unvaccinated or had an undocumented vaccination history experienced delayed diagnosis and higher care needs. These findings highlight the importance of strengthening adolescent-focused vaccination strategies and preparedness for future pertussis outbreaks.
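The effect-size measure named in the abstract can be illustrated with a short sketch. This is a hypothetical computation, not the study's analysis: the 2×2 table below is reconstructed loosely from the percentages quoted above (hospitalization by age group), and the function names are ours.

```python
import math

def chi_square(table):
    """Pearson chi-square statistic and total n for a 2D contingency table."""
    rows = [sum(r) for r in table]
    cols = [sum(c) for c in zip(*table)]
    n = sum(rows)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = rows[i] * cols[j] / n        # expected count under independence
            stat += (obs - exp) ** 2 / exp
    return stat, n

def cramers_v(table):
    """Cramér's V = sqrt(chi2 / (n * (min(#rows, #cols) - 1)))."""
    stat, n = chi_square(table)
    k = min(len(table), len(table[0]))
    return math.sqrt(stat / (n * (k - 1)))

# Hypothetical counts: [hospitalized, not hospitalized] for <20 vs. >=20 years,
# back-calculated from the quoted 5.9% and 13.2% rates and 89.8% age share.
table = [[756, 12061], [192, 1266]]
print(round(cramers_v(table), 3))
```

A large chi-square p-value can coexist with a small V; reporting both, as the abstract does, separates statistical significance from practical effect size.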
- Research Article
168
- 10.1097/inf.0b013e3181a90b16
- Nov 1, 2009
- Pediatric Infectious Disease Journal
The varicella-zoster virus (VZV) vaccine strain may reactivate to cause herpes zoster. Limited data suggest that the risk of herpes zoster in vaccinated children could be lower than in children with naturally acquired varicella. We examined incidence trends, risk, and epidemiologic and clinical features of herpes zoster disease among children and adolescents by vaccination status. Population-based active surveillance was conducted among residents aged <20 years in Antelope Valley, California, from 2000 through 2006. Structured telephone interviews collected demographic, varicella vaccination and disease histories, and clinical information. From 2000 to 2006, the incidence of herpes zoster among children <10 years of age declined by 55%, from 42 cases reported in 2000 (74.8/100,000 persons; 95% confidence interval [95% CI]: 55.3-101.2) to 18 reported in 2006 (33.3/100,000; 95% CI: 20.9-52.8; P<0.001). During the same period, the incidence of herpes zoster among 10- to 19-year-olds increased by 63%, from 35 cases reported in 2000 (59.5/100,000 persons; 95% CI: 42.7-82.9) to 64 reported in 2006 (96.7/100,000; 95% CI: 75.7-123.6; P<0.02). Among children aged <10 years, those with a history of varicella vaccination had a 4 to 12 times lower risk of developing herpes zoster compared with children with a history of varicella disease. Varicella vaccine substantially decreases the risk of herpes zoster among vaccinated children, and its widespread use will likely reduce the overall herpes zoster burden in the United States. The increase in herpes zoster incidence among 10- to 19-year-olds could not be confidently explained and needs to be confirmed with other data sources.
- Research Article
5
- 10.1128/spectrum.04065-22
- May 16, 2023
- Microbiology Spectrum
ABSTRACT Bordetella pertussis, the causative agent of whooping cough, can cause pertussis outbreaks in humans, especially in school-aged children. Here, we performed whole-genome sequencing of 51 B. pertussis isolates (epidemic strain MT27) collected from patients infected during 6 school-associated outbreaks lasting less than 4 months. We compared their genetic diversity with that of 28 sporadic isolates (non-outbreak MT27 isolates) based on single-nucleotide polymorphisms (SNPs). Our temporal SNP diversity analysis revealed a mean SNP accumulation rate (time-weighted average) of 0.21 SNPs/genome/year during the outbreaks. The outbreak isolates showed a mean of 0.74 SNP differences (median, 0; range, 0 to 5) between 238 isolate pairs, whereas the sporadic isolates had a mean of 16.12 SNP differences (median, 17; range, 0 to 36) between 378 isolate pairs. A low SNP diversity was observed in the outbreak isolates. Receiver operating characteristic analysis demonstrated that the optimal cutoff value to distinguish between the outbreak and sporadic isolates was 3 SNPs (Youden's index of 0.90 with a true-positive rate of 0.97 and a false-positive rate of 0.07). Based on these results, we propose an epidemiological threshold of ≤3 SNPs per genome as a reliable marker of B. pertussis strain identity during pertussis outbreaks that span less than 4 months. IMPORTANCE Bordetella pertussis is a highly infectious bacterium that easily causes pertussis outbreaks in humans, especially in school-aged children. In the detection and investigation of outbreaks, excluding non-outbreak isolates is important for understanding the bacterial transmission routes. Currently, whole-genome sequencing is widely used for outbreak investigations, and the genetic relatedness of outbreak isolates is assessed based on differences in the number of single-nucleotide polymorphisms (SNPs) in the genomes of different isolates. 
The optimal SNP threshold defining strain identity has been proposed for many bacterial pathogens, but not for B. pertussis. In this study, we performed whole-genome sequencing of 51 B. pertussis outbreak isolates and identified a genetic threshold of ≤3 SNPs per genome as a marker defining the strain identity during pertussis outbreaks. This study provides a useful marker for identifying and analyzing pertussis outbreaks and can serve as a basis for future epidemiological studies on pertussis.
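The cutoff selection described above can be sketched as a small ROC-style search. The pairwise SNP-difference lists below are synthetic stand-ins, not the study's 238 outbreak and 378 sporadic pairs; only the method, maximising Youden's J = TPR − FPR over candidate cutoffs, follows the text.

```python
# For each candidate SNP cutoff, pairs with a difference at or below the cutoff
# are called "same outbreak strain". Youden's J = TPR - FPR; the cutoff that
# maximises J is taken as optimal.
def best_snp_cutoff(outbreak_pairs, sporadic_pairs, candidates):
    best = None
    for c in candidates:
        tpr = sum(d <= c for d in outbreak_pairs) / len(outbreak_pairs)
        fpr = sum(d <= c for d in sporadic_pairs) / len(sporadic_pairs)
        j = tpr - fpr
        if best is None or j > best[1]:
            best = (c, j, tpr, fpr)
    return best

# Illustrative pairwise SNP differences (mostly-identical outbreak genomes
# vs. widely spread sporadic genomes), not the study's data.
outbreak = [0] * 20 + [1, 1, 2, 3, 5]
sporadic = [0, 4, 8, 12, 17, 17, 20, 25, 30, 36]
print(best_snp_cutoff(outbreak, sporadic, range(0, 11)))
```

With these toy lists the search also lands on a cutoff of 3, qualitatively matching the paper's threshold; with real data the candidate grid would span the observed SNP range.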
- Abstract
1
- 10.1093/ofid/ofab466.1654
- Dec 4, 2021
- Open Forum Infectious Diseases
Background: Given the disproportionate impact of COVID-19 among racial/ethnic minority groups across the United States on emergency visits, hospitalizations, and deaths, we examined healthcare utilization for acute respiratory illness (ARI) more broadly across healthcare settings by racial/ethnic group. Methods: We used data on 33,992,254 unique nonpharmacy healthcare encounters from the IBM Explorys Electronic Health Record database from January 1, 2020–May 1, 2021, across healthcare settings (ambulatory care or telehealth, emergency department, and hospitalizations) with nonmissing bridged racial/ethnic data. Encounters were classified as ARI based on ICD-10 and SNOMED codes and aggregated by month and US Census region. We estimated the population denominator as the total number of persons by bridged racial/ethnic group with encounters recorded during 2019. We estimated both the rate of ARI visits per 100,000 persons across healthcare settings and the rate ratio of ARI visits to non-ARI visits. We performed comparisons of these values by race/ethnicity, taking White persons as referent, using Poisson generalized estimating equations clustered within geographic regions. Results: A total of 244,137 (6.5% of 3,745,135) hospitalizations, 237,873 (18% of 1,305,474) emergency visits, and 1,636,383 (5.7% of 28,941,645) ambulatory visits were associated with ARIs. We observed similar rates of ARI visits across race/ethnicity groups in all settings combined and in ambulatory settings, but higher rates of ARI hospitalization among Hispanic persons (IRR [95% CI]: 2.5 [1.7–3.7]) and higher rates of ARI emergency department visits among Black persons (2.5 [1.9–3.2]) (Figure). We also observed differences in the relative proportion of care received for ARI vs. other visit types by setting, for example with Black persons utilizing higher rates of hospital visits for ARI vs. non-ARI care (2.2 [1.7–2.7]) but lower rates of ambulatory care for ARI (0.9 [0.7–0.96]). Conclusion: Population rates of ARI visits and relative proportions of ARI vs. non-ARI visits differed between racial/ethnic groups by setting. Understanding how utilization of care varies for ARI across settings can inform future monitoring efforts for health equity. Disclosures: All Authors: No reported disclosures.
- Research Article
7
- 10.1007/s40801-019-0153-5
- Apr 24, 2019
- Drugs - Real World Outcomes
Background: The use of psychotropic medications is not uncommon among patients with newly diagnosed cancer. However, the impact of psychotropic polypharmacy on healthcare utilization during the initial phase of cancer care is largely unknown. Methods: We used a claims database to identify adults with incident breast, prostate, lung, and colorectal cancers diagnosed during 2011–12. Psychotropic polypharmacy was defined as concurrent use of two or more psychotropic medication classes for at least 90 days. A multivariable logistic regression was performed to identify significant predictors of psychotropic polypharmacy. Multivariable Poisson and negative binomial regressions were used to assess the associations between psychotropic polypharmacy and healthcare utilization. Results: Among 5604 patients included in the study, 52.6% had breast cancer, 30.6% had prostate cancer, 11.4% had colorectal cancer, and 5.5% had lung cancer. During the year following incident cancer diagnosis, psychotropic polypharmacy was reported in 7.4% of patients, with the highest prevalence among patients with lung cancer (14.4%). Compared with patients without psychotropic polypharmacy during the initial phase of care, patients with newly diagnosed cancer with psychotropic polypharmacy had a 30% higher rate of physician office visits, an 18% higher rate of hospitalization, and a 30% higher rate of outpatient visits. The rate of emergency room visits was similar between the two groups. Conclusion: Psychotropic polypharmacy during the initial phase of cancer care was associated with significantly increased healthcare resource utilization, and the proportion of patients receiving psychotropic polypharmacy differed by type of cancer. Impact: Findings emphasize the importance of evidence-based psychotropic prescribing and close surveillance of events causing increased healthcare utilization among patients with cancer receiving psychotropic polypharmacy. Electronic supplementary material: The online version of this article (10.1007/s40801-019-0153-5) contains supplementary material, which is available to authorized users.
- Research Article
21
- 10.1111/jep.13839
- Mar 26, 2023
- Journal of Evaluation in Clinical Practice
In late 2020, messenger RNA (mRNA) covid-19 vaccines gained emergency authorisation on the back of clinical trials reporting vaccine efficacy of around 95%,1, 2 kicking off mass vaccination campaigns around the world. Within 6 months, observational studies reporting vaccine effectiveness in the "real world" at above 90%, similar to trial results,3-6 became the trusted source of evidence upholding these campaigns. While the contemporary conversation about vaccine effectiveness has turned to waning protection, virus variants, and boosters, there has (with rare exception7) been surprisingly little discussion of the limitations of the methodologies of these early observational studies. The lack of critical discussion is notable, for even highly effective vaccinations could only partially explain the drop in rates of covid-19 cases, hospitalisations, and deaths by mid-2021. For example, by March 2021, cases in the UK and United States had dropped roughly fourfold from the January peak, when the "fully vaccinated" population only reached 20% and 5%, respectively. At the same time, in Israel, cases took longer to drop despite a substantially faster vaccine rollout (Figure 1). The vaccination campaigns in these countries can thus only be part of the story. We are aware of only one article that addresses methodological concerns in non-randomised studies of covid-19 vaccines.7 The author draws attention to potential biases and measurement issues, such as vaccination status misclassification, exposure differences, testing differences, attribution issues, and disease risk factor confounding. Many of these concerns are hard to confirm within specific studies due to data unavailability (e.g., testing differences) or cannot be fixed analytically (e.g., exposure and other unmeasured quantities). 
In this article, we focus on three major sources of bias for which there is sufficient data to verify their existence, and show how they could substantially affect vaccine effectiveness estimates using observational study designs—particularly retrospective studies of large population samples using administrative data wherein researchers link vaccinations and cases to demographics and medical history. Using the information on how cases were counted in observational studies, and published datasets on the dynamics and demographic breakdown of vaccine administration and background infections, we illustrate how three factors generate residual biases in observational studies large enough to render a hypothetical inefficacious vaccine (i.e., of 0% efficacy) as 50%–70% effective. To be clear, our findings should not be taken to imply that mRNA covid-19 vaccines have zero efficacy. Rather, we use the 0% case so as to avoid the need to make any arbitrary judgements of true vaccine efficacy across various levels of granularity (different subgroups, different time periods, etc.), which is unavoidable when analysing any non-zero level of efficacy. It is also important to note that under hypothetical conditions different from the actual events of early 2021, two of these sources of bias could bias results in the opposite direction, that is, underestimating actual vaccine effectiveness. Finally, to draw more precise conclusions about the impact of these biases on specific published studies, we urge that all code and data available to those studies be made public. In each of our three illustrations, we compare results based on observational study methods against randomised controlled trial (RCT) methods. For each comparison, one side represents a published study while the other is a counterfactual. In each case, we show how the gap between observational and RCT study results is due to a source of bias. 
The pivotal covid-19 vaccine trials used a primary endpoint of lab-confirmed, symptomatic covid-19.8-11 Not all covid cases, however, factored into the estimate of vaccine efficacy. Investigators did not begin counting cases until participants were at least 14 days (7 days for Pfizer) past completion of the dosing regimen, a timepoint public health officials subsequently termed "fully vaccinated."12 The rationale for excluding cases occurring before the start of this "case-counting window" was not provided in trial protocols (and the legitimacy of excluding post-randomisation events has long been debated13); however, one Pfizer post-marketing document states that in the early period post-vaccination, "the vaccine has not had sufficient time to stimulate the immune system."14 In randomised trials, applying the "fully vaccinated" case-counting window to both vaccine and placebo arms is easy. But in cohort studies, the case-counting window is only applied to the vaccinated group. Because unvaccinated people do not take placebo shots, counting from 14 days after the second shot is simply inoperable. This asymmetry, in which the case-counting window nullifies cases in the vaccinated group but not in the unvaccinated group, biases estimates. As a result, a completely ineffective vaccine can appear substantially effective—48% effective in the example shown in Table 1. (The placebo data in Table 1 come from the Pfizer Phase III randomised trial and are the assumed case counts for the unvaccinated group in a counterfactual observational study occurring simultaneously; this setup illustrates the potential size of a case-counting window bias in a real-world setting as well as why this bias does not exist in a randomised trial.) We are aware of just one observational study3 that addressed case-counting window bias, by using matching and designating a pseudo-study enrolment date for the unvaccinated party in each matched pair of vaccinated and unvaccinated persons. 
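A toy simulation of this asymmetry, using assumed numbers rather than Table 1's actual counts: both groups experience the identical daily case series (i.e., a 0% efficacy vaccine), and the only difference is that the vaccinated group's cases before the counting window opens are discarded.

```python
# Case-counting window bias sketch. The daily case series and window length
# below are illustrative assumptions, not data from the article.
def apparent_ve(daily_cases, follow_up_days, window_start):
    vax_cases = sum(daily_cases[window_start:follow_up_days])  # early cases dropped
    unvax_cases = sum(daily_cases[:follow_up_days])            # all cases counted
    return 1 - vax_cases / unvax_cases

# 100 days of follow-up; the case count declines from 20/day to 11/day, so the
# excluded early window holds a disproportionate share of the cases.
daily = [20 - i // 10 for i in range(100)]
ve = apparent_ve(daily, follow_up_days=100, window_start=35)
print(round(ve, 2))
```

Even with identical risk in both groups, the asymmetric window alone manufactures an apparent effectiveness above 40% here, in the same range as the 48% example in Table 1.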
While matching mitigates case-counting window bias, this method injects an artificial and severe age bias between unvaccinated and vaccinated groups: the matched subset underrepresented patients ≥ 70 years by 50% while over-representing patients ≤ 40 years by 50%. (This occurred because the propensity to receive the vaccine is highly influenced by age; therefore, the number of one-to-one matched pairs of elderly patients is upper-bounded by the number of unvaccinated elderly, while the number of one-to-one matched pairs of younger patients is upper-bounded by the number of vaccinated young.) In retrospective studies using large population samples, we propose a simple adjustment that can correct for case-counting window bias. The case rate from vaccination to the start of the case-counting window can be observed from the vaccinated group and applied to the unvaccinated group to estimate the number of cases to be excluded before computing the relative ratio of cases. This adjustment preserves the case-counting window, while assuming the vaccine is completely ineffective before its start. Because we use the 0% efficacy assumption, this simple adjustment returns the vaccine effectiveness estimate back to zero. A similar strategy has proved useful in influenza treatment analyses.16 Age is perhaps the most influential risk factor in medicine, affecting nearly every health outcome. Thus, great care must be taken in studies comparing vaccinated and unvaccinated groups to ensure that the groups are balanced by age. Failure to do so may lead to inaccurate estimates of vaccine effectiveness when the difference in outcomes can be explained, at least partially, by age bias. In trials, randomisation helps ensure statistically identical age distributions in vaccinated and unvaccinated groups, so that the average vaccine efficacy estimate is unbiased, even if vaccine efficacy and/or infection rates differ across age groups (see Figure 2A). 
However, unlike trials, in real life, vaccination status is not randomly assigned (see Figure 2B). While vaccination rates are high in many countries, the vaccinated remain, on average, older and less healthy than the unvaccinated because vaccines were prioritised for those older and at higher risk. Individuals also self-select for vaccination regardless of policy. Because covid-19 related risks (of infection, disease, and complications) also vary by age, this can confound the estimate of vaccine effectiveness. To illustrate this, consider the REACT-1 study.18 This study conducts PCR testing for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) on a random sample of England's population once a month. In June–July 2021 (the most recent data available), SARS-CoV-2 positivity rates varied considerably by age (from 1.7 to 15.6 positives per 1000 individuals), with higher rates among people under 25 years of age (see Figure 2C). REACT-1 also reports vaccination status. As seen in Figure 2B, almost half of the unvaccinated group is aged between 5 and 12, while the most common age group in the vaccinated was 45–54 years old. While details differ, age bias is present in all observational data sets. To understand the impact of age bias, consider a hypothetical vaccine with zero efficacy. The vaccinated and unvaccinated groups' case rates should be statistically identical if the vaccine were completely ineffective (Figure 2D). But age bias in observational data alters the age-weighted case rates in both the vaccinated and the unvaccinated groups, resulting in different infection rates by vaccination status. Since older people recorded lower infection rates, the age-weighted case rate of the (older) vaccinated group registered at 5.5 per 1000 while the corresponding value for the (younger) unvaccinated group was 11.2 per 1000 (Figure 2C). 
The resultant vaccine effectiveness, which is the relative ratio of these case rates, reflects the interaction between differential age distributions and the correlation of covid-19 incidence with age. The vaccine effectiveness appears as 51% even though the vaccine is completely ineffective by assumption. (Note that the direction of the age bias would reverse if older age groups had suffered higher case rates during the study period.) A viable adjustment method for this instance of Simpson's paradox19 induced by age bias should shift 51% back to zero. Simpson's paradox describes the condition in which aggregated and disaggregated analyses of the same data lead to contradictory findings, a common phenomenon in real-world data. Many observational studies incorporate an age term into regression models in an attempt to correct this age bias.4, 20, 21 But a meta-analysis of influenza vaccine studies found that standard regression adjustments insufficiently correct for the variety and magnitude of biases.22 From December 2020, the speedy dissemination of vaccines, particularly in wealthier nations (Figure 1), coincided with a period of plunging infection rates. However, accurately determining the contribution of vaccines to this decline is far from straightforward. Indeed, the considerable variation in case decline by country, such as the time lag observed in Israel—by far the quickest to reach 50% vaccinated relative to the UK and the United States—defies simple explanation (Figure 1, timepoint "B"). The sharp drop in infections complicates estimating vaccine effectiveness from observational data in a manner similar to age bias. The risk of virus exposure was considerably higher in January than in April. Thus, exposure time was not balanced between unvaccinated and vaccinated individuals. Exposure time for the unvaccinated group was heavily weighted towards the early months of 2021, while the inverse pattern was observed in the vaccinated group. 
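The age-weighting arithmetic behind the Simpson's paradox example can be sketched in a few lines. The stratum case rates echo the 15.6 and 1.7 per 1000 figures quoted from REACT-1, but the two groups' age mixes are invented for illustration; within every stratum the vaccinated and unvaccinated rates are identical, i.e. the vaccine has 0% efficacy by construction.

```python
# Age-bias (Simpson's paradox) sketch with illustrative numbers.
def weighted_rate(age_shares, stratum_rates):
    """Aggregate case rate as an age-share-weighted average of stratum rates."""
    return sum(s * r for s, r in zip(age_shares, stratum_rates))

rates_per_1000 = [15.6, 1.7]   # cases/1000: younger stratum, older stratum
vax_mix = [0.3, 0.7]           # vaccinated group skews old (assumed shares)
unvax_mix = [0.7, 0.3]         # unvaccinated group skews young (assumed shares)

vax_rate = weighted_rate(vax_mix, rates_per_1000)      # 5.87 per 1000
unvax_rate = weighted_rate(unvax_mix, rates_per_1000)  # 11.43 per 1000
ve = 1 - vax_rate / unvax_rate
print(round(ve * 100))   # apparent effectiveness, percent
```

The aggregate rates (5.87 vs. 11.43 per 1000) sit close to the 5.5 and 11.2 quoted in the text, and the crude comparison reports a large "effectiveness" despite identical stratum-level risk; stratified or standardised comparison returns it to zero.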
This imbalance is inescapable in the real world due to the timing of vaccination rollout. In addition, unlike trials, individuals in "real-world" studies do not stay in a single analysis subgroup throughout the study period: each person is unvaccinated on the first day of the study until the day of vaccination (or the end of the study should the person remain unvaccinated). Instead of crudely categorising individuals as either "vaccinated" or "unvaccinated," many observational studies split each person's exposure time into an unvaccinated period followed by a vaccinated period if the individual got vaccinated.4-6 This technique is essential in contexts where the vast majority of the population becomes vaccinated, to avoid losing a comparison population. However, this procedure injects a strong bias into the analysis subgroups because the unvaccinated exposure time is heavily skewed to the early period in a study while the exposure time for vaccinated people skews towards the end of the study period. For a hypothetical vaccine with zero efficacy, the case rates for vaccinated and unvaccinated should be equal during each week of the study period. Indeed in RCTs, changes in background infection rate do not bias estimates of vaccine efficacy because by design, vaccine and placebo arms follow a synchronised dosing schedule that ensures exposure (at-risk) time is balanced, even in the context of changing infection rates. But background infection rate bias can cause estimates of vaccine efficacy in "real world" studies to vary widely from 0%. 
For example, using infection rate data from an actual observational study of Danish nursing home residents,20 where infection rates rapidly declined simultaneously with vaccine rollout (from 12 per 1000 residents in December 2020, to almost 0 during the last 2 weeks of the study),20 vaccine effectiveness of a hypothetically ineffective vaccine appears as 67%, an illusion chiefly created because unvaccinated people were preferentially exposed to the earlier weeks of higher background infection rates (Figure 3). We note that the direction of this bias would reverse if the background infection rate were to have steadily risen during the study period (i.e., vaccinating into a wave rather than out of one). The Danish study was one of the first "real-world" studies to recognise this background infection rate bias. The researchers added a "calendar time" adjustment term to their Cox regression model to address this bias, which reduced their estimate of vaccine effectiveness from 96% to 64%.20 However, as with age bias, we believe that regression adjustment is unlikely to sufficiently cure this type of imbalance. Because the regression equation was not published, we could not make a more definitive assessment. A recent commentary discussed multiple factors that can bias estimates of covid-19 vaccine effectiveness, such as vaccination status misclassification, testing differences, and disease risk factor confounding.7 Our article complements these observations by providing examples based on actual data sets that quantify how case-counting window bias, age bias, and background infection rate bias can profoundly complicate the analysis of observational studies, shifting covid-19 vaccine effectiveness estimates by an absolute magnitude as high as 50% to 70%. Randomised trials aim to mitigate these biases by virtue of design features, such as randomisation, placebo controls, and blinding. 
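The person-time mechanics of background-infection-rate bias can be sketched as follows. The weekly risk and coverage series are synthetic assumptions, not the Danish study's data: weekly infection risk is identical for vaccinated and unvaccinated (0% efficacy), but coverage climbs while the epidemic wanes, so unvaccinated person-time is concentrated in the high-incidence early weeks.

```python
# Background-infection-rate bias sketch with synthetic weekly series.
def apparent_ve(weekly_risk, weekly_coverage, population=100_000):
    vax_cases = vax_time = unvax_cases = unvax_time = 0.0
    for risk, cov in zip(weekly_risk, weekly_coverage):
        vax = population * cov
        unvax = population - vax
        vax_cases += vax * risk      # same per-week risk in both groups
        unvax_cases += unvax * risk
        vax_time += vax              # person-weeks at risk per status
        unvax_time += unvax
    # 1 minus the ratio of person-time incidence rates
    return 1 - (vax_cases / vax_time) / (unvax_cases / unvax_time)

risk = [0.012, 0.009, 0.006, 0.004, 0.002, 0.001, 0.0005, 0.0002]  # waning wave
coverage = [0.05, 0.15, 0.30, 0.50, 0.70, 0.85, 0.92, 0.95]        # rollout
print(round(apparent_ve(risk, coverage), 2))
```

With these assumed series the truly ineffective vaccine registers an apparent effectiveness around 70%, the same order as the 67% figure above; reversing the risk series (vaccinating into a wave) flips the sign of the bias.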
But while randomised trials should offer far superior protection against these biases, premarketing trials left many important questions unstudied, such as the durability of protection, interaction with other countermeasures, and effectiveness in highest-risk and other important subpopulations. Pragmatic, placebo-controlled randomised trials might have addressed some of these limitations, but after manufacturers began unblinding their trials following the emergency use authorisation in December 2020, observational studies are all we have. Our analysis shows that real-world conditions such as non-randomised vaccination, crossovers, and trends in background infection rates introduce strong, complex biases into these observational datasets. Our contribution is to size up three important biases, the magnitude of which surprised us and may surprise you. We conclude that "real-world" studies using methodologies popular in early 2021 overstate vaccine effectiveness. Our finding highlights how difficult it is to conduct high-quality observational studies during a pandemic. While the current situation leaves much to be desired, several steps can be taken going forward to enhance the quality of observational studies. Greater awareness of these biases could promote more appropriate adjustments in future studies, including using quasi-experimental methods. In addition, journal editors could improve transparency and reproducibility of observational studies by requiring the disclosure of underlying data and code, as well as publishing modelling equations, tables of coefficients, and standard errors.23 Data availability severely restricted our choice of studies to examine, and also prevented us from analysing all three biases simultaneously, among the ones we selected. 
As shown in Table 2, we would have needed additional information, such as (a) cases from first dose by vaccination status; (b) age distribution by vaccination status; (c) case rates by vaccination status by age group; (d) match rates between vaccinated and unvaccinated groups on key matching variables; (e) background infection rate by week of study; and (f) case rate by week of study by vaccination status. In future work, we hope to analyse examples using hospitalisations or deaths as endpoints, which is possible only with broader data disclosure. The pandemic offers a magnificent opportunity to recalibrate our expectations about both observational and randomised studies. "Real world" studies today are still published as one-off, point-in-time analyses. But much more value would come from having results posted to a website with live updates, as epidemiological and vaccination data accrue. Continuous reporting would allow researchers to demonstrate that their analytical methods not only explain what happened during the study period but also generalise beyond it. Finally, randomised studies should not be considered irrelevant in the post-authorisation phase. An element of randomisation can be incorporated into real-world vaccine distribution. Where populations are still largely unvaccinated and resources do not allow vaccinating everybody at once, designs such as the stepped-wedge cluster randomised rollout24, 25 should be given serious consideration for their ability to ethically derive important scientific information. Any tool that eliminates some amount of real-world bias would reduce the complexity of analysing observational data. Kaiser Fung and Peter Doshi came up with the idea for the paper; Kaiser Fung carried out the statistical analyses and wrote the first draft. All authors were involved in discussing the content, presentation, and editing of the manuscript. 
We have the following interests to declare: Peter Doshi has received travel funds from the European Respiratory Society (2012) and Uppsala Monitoring Center (2018); grants from the FDA (through University of Maryland M-CERSI; 2020), Laura and John Arnold Foundation (2017-22), American Association of Colleges of Pharmacy (2015), Patient-Centered Outcomes Research Institute (2014-16), Cochrane Methods Innovations Fund (2016-18), and UK National Institute for Health Research (2011-14); was an unpaid IMEDS steering committee member at the Reagan-Udall Foundation for the FDA (2016-2020), and is an editor at The BMJ. KF, MJ: None. Data sharing is not applicable to this article as no new data were created or analysed in this study.
- Research Article
2
- 10.1002/pbc.29141
- May 18, 2021
- Pediatric Blood & Cancer
Therapy for childhood acute lymphoblastic leukemia (ALL) is associated with substantial health care utilization and burden on families. Little is known about health care utilization during specific treatment phases. We identified children with ALL diagnosed during 2002-2012 in Ontario, Canada and treated according to Children's Oncology Group (COG) protocols. Disease and treatment data were chart abstracted. Population-based health care databases identified all outpatient visits, emergency department (ED) visits, and hospitalizations. In addition to comparing standard and intensified versions of treatment phases, we compared patients receiving different steroids (dexamethasone vs. prednisone) and different versions of interim maintenance (IM) (Capizzi vs. high-dose methotrexate [HD-MTX]). Six hundred thirty-seven children met inclusion criteria. During intensified consolidation, 76.2% of patients were hospitalized at least once, compared to only 32.3% of patients receiving standard consolidation (p<.0001). Similarly, 72.9% of patients receiving intensified delayed intensification (DI) were hospitalized during this phase compared to 50.3% of patients receiving standard DI (p<.0001). Among patients receiving a four-drug induction, those receiving dexamethasone had an 85% higher rate of ED visits (adjusted rate ratio [aRR] 1.85, 95% confidence interval [95% CI] 1.14-3.00; p=.01) and a 44% higher rate of hospitalization (aRR 1.44, 95% CI 1.24-1.68) compared to those receiving prednisone. Among high-risk B-ALL and T-ALL patients in IM, Capizzi MTX was not associated with an increased rate of ED visits versus HD-MTX. These results can be used to inform anticipatory guidance for families, particularly those undergoing intensified therapy. Our results also suggest that the increased toxicity rates associated with dexamethasone during induction seen in clinical trials reflect real-world practice.
- Research Article
- 10.4178/epih.e2023008
- Dec 21, 2022
- Epidemiology and Health
OBJECTIVES: We compared the viral cycle threshold (Ct) values of infected patients to better understand viral kinetics by vaccination status during different periods of variant predominance in Gyeonggi Province, Korea. METHODS: We obtained case-specific data from the coronavirus disease 2019 (COVID-19) surveillance system, the Gyeonggi in-depth epidemiological report system, and the Health Insurance Review & Assessment Service from January 2020 to January 2022. We defined periods of variant predominance and explored Ct values by analyzing viral sequencing test results. Using a generalized additive model, we performed a nonlinear regression analysis to determine viral kinetics over time. RESULTS: Cases in the Delta variant's period of predominance had higher viral shedding patterns than cases in other periods. The temporal change of viral shedding did not vary by vaccination status in the Omicron-predominant period, but viral shedding decreased in patients who had completed their third vaccination in the Delta-predominant period. During the Delta-predominant and Omicron-predominant periods, the time from symptom onset to peak viral shedding based on the E gene was approximately 2.4 days (95% confidence interval [CI], 2.2 to 2.5) and 2.1 days (95% CI, 2.0 to 2.1), respectively. CONCLUSIONS: In one-time tests conducted to diagnose COVID-19 in a large population, although no adjustment for individual characteristics was conducted, it was confirmed that viral shedding differed by the predominant strain and vaccination history. These results show the value of utilizing the hundreds of thousands of test results produced at COVID-19 screening test centers.
- Research Article
- 10.1016/j.ijcha.2025.101748
- Jul 12, 2025
- International Journal of Cardiology. Heart & Vasculature
Sex differences in hospitalisation and healthcare utilisation for patients with atrial fibrillation (Middeldorp et al.)
- Research Article
- 10.1038/srep13182
- Aug 17, 2015
- Scientific Reports
There is limited information on the roles of different age groups during pertussis outbreaks. Little is known about vaccine effectiveness against pertussis infection (both clinically apparent and subclinical), which is different from effectiveness against reportable pertussis disease, with the former influencing the impact of vaccination on pertussis transmission in the community. For the 2012 pertussis outbreak in Minnesota, we estimated odds ratios for case counts in pairs of population groups before vs. after the epidemic's peak. We found children aged 11–12y, 13–14y and 8–10y experienced the greatest rates of depletion of susceptible individuals during the outbreak's ascent, with all ORs for each of those age groups vs. groups outside this age range significantly above 1, with the highest ORs for ages 11–12y. Receipt of the fifth dose of DTaP was associated with a decreased relative role during the outbreak's ascent compared to non-receipt [OR 0.16 (0.01, 0.84) for children aged 5, 0.13 (0.003, 0.82) for ages 8–10y], indicating a protective effect of DTaP against pertussis infection. No analogous effect of Tdap was detected. Our results suggest that children aged 8–14y played a key role in propagating this outbreak. The impact of immunization with Tdap on pertussis infection requires further investigation.
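The before-vs-after-peak comparison in the Minnesota study reduces to odds ratios computed from 2×2 tables of case counts. A minimal sketch of that calculation with a Wald 95% confidence interval (the counts below are hypothetical, not taken from the paper):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table of case counts, with a Wald 95% CI.

    a: group-1 cases before the peak, b: group-1 cases after the peak,
    c: group-2 cases before the peak, d: group-2 cases after the peak.
    """
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: 40/10 before/after the peak in one age group,
# 60/45 in the comparison group.
or_, lo, hi = odds_ratio_ci(40, 10, 60, 45)
```

An OR above 1 with a lower confidence bound above 1 would indicate, as in the abstract, that the group's cases were concentrated in the outbreak's ascent, i.e. faster depletion of susceptibles.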
- Research Article
- 10.1177/1357633x221107999
- Jun 22, 2022
- Journal of Telemedicine and Telecare
Introduction Previous studies have had mixed findings about the effects of telemedicine on health care utilization. We designed this study to assess differences in health care utilization between ever users of telemedicine for chronic disease specialty care compared to propensity-matched controls. Methods This observational study of usual care in the Alaska Tribal Health System evaluated telemedicine use (videoconsultation) and healthcare utilization using data from the electronic medical record between 1 January 2015 and 30 June 2019. Eligibility criteria included: age 18 and older, chronic condition diagnosis, and residing in one of four study regions. Cases had ever used telemedicine while controls had not. We used propensity score matching to achieve covariate balance between cases and controls, and then estimated the effect of telemedicine on outcomes using multivariable models. Outcomes included rates of hospitalizations, outpatient visits, and emergency department visits. Results Cases (ever users of telemedicine) had higher hospitalization rates (rate ratio 1.31, p < 0.01) and higher outpatient visit rates (rate ratio 1.23, p < 0.01). Cases had lower rates of emergency department visits, though not statistically significant (rate ratio 0.87, p = 0.07). Cases were more likely than controls to have no emergency department visits per follow-up time (49% vs 36%, p < 0.01). Discussion We found higher rates of inpatient and outpatient health care utilization in people who had ever used telemedicine compared to propensity-matched controls, with potentially lower rates of emergency department visits. These findings contribute to the literature on telemedicine and should be considered in the context of other factors influencing telemedicine use and outcomes.
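Propensity score matching of the kind described above pairs each telemedicine user with the most similar non-user on the estimated probability of being a user. A minimal greedy 1:1 nearest-neighbour sketch, assuming propensity scores have already been estimated (the scores and the caliper of 0.05 below are hypothetical choices, not the study's):

```python
def greedy_match(case_scores, control_scores, caliper=0.05):
    """Greedy 1:1 nearest-neighbour matching on propensity scores.

    Returns (case_index, control_index) pairs; each control is used
    at most once, and matches outside the caliper are discarded.
    """
    available = dict(enumerate(control_scores))  # unmatched controls
    pairs = []
    # Process cases in score order so ties are resolved deterministically.
    for i, ps in sorted(enumerate(case_scores), key=lambda t: t[1]):
        if not available:
            break
        j, best = min(available.items(), key=lambda t: abs(t[1] - ps))
        if abs(best - ps) <= caliper:
            pairs.append((i, j))
            del available[j]  # each control matched at most once
    return pairs

pairs = greedy_match([0.3, 0.7], [0.31, 0.69, 0.5])
```

Utilization rates would then be compared only within the matched pairs, which is what gives the "propensity-matched controls" comparison its covariate balance.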
- Abstract
- 10.1016/j.chest.2022.08.1401
- Oct 1, 2022
- Chest
Health care resource utilization in the year prior to death among patients with lung cancer and COPD in Saskatchewan
- Research Article
- 10.1080/03007995.2019.1574460
- Mar 15, 2019
- Current Medical Research and Opinion
Objective: Although disease-related malnutrition has prognostic implications for patients with chronic obstructive pulmonary disease (COPD), its health-economic impact and clinical burdens are uncertain. We conducted a population-level study to investigate these questions. Methods: We excerpted data relevant to malnutrition, prolonged mechanical ventilation and medications from claims by 1,197,098 patients which were consistent with COPD and registered by the Taiwan National Health Insurance Administration between 2009 and 2013. These patients were separated into cohorts with or without respiratory failure requiring long-term mechanical ventilation, and each cohort was divided to compare cases who developed malnutrition after their first diagnosis consistent with COPD, versus non-malnourished propensity-score matched controls. Results: The prevalence of malnutrition was 3.8% overall (10,259/287,000 non-ventilator-dependent; 1198/15,829 ventilator-dependent). Propensity-score matched non-ventilator-dependent patients who became malnourished (N = 10,242) had comparatively more hospitalizations, emergency room and outpatient visits, longer hospitalization (all p < .01), and higher mortality (HR = 2.26, 95% CI 2.18–2.34) than non-malnourished controls (N = 40,968). Malnourished ventilator-dependent patients (N = 1197) had higher rates of hospitalization, emergency room and outpatient visits, but shorter hospitalization (all p < .001) and lower mortality (HR = 0.85, 95% CI 0.80–0.93) than matched non-malnourished controls (N = 4788). Total medical expenditure on malnourished non-ventilator-dependent COPD patients was 75% higher than controls (p < .001), whereas malnourished ventilator-dependent patients had total costs 7% lower than controls (p < .001). Conclusions: Malnourishment among COPD patients who were not dependent on mechanical ventilation was associated with greater healthcare resource utilization and higher aggregate medical costs.
- Research Article
- 10.1136/bmjopen-2018-025521
- May 17, 2019
- BMJ Open
ObjectiveTwo pertussis outbreaks occurred in Olmsted County, Minnesota, during 2004–2005 and 2012 (5–10 times higher than other years), with significantly higher incidence than for the State. We aimed to assess...
- Research Article
- 10.1007/s44197-024-00234-4
- May 15, 2024
- Journal of Epidemiology and Global Health
Background: Pertussis, a highly contagious, vaccine-preventable respiratory infection caused by Bordetella pertussis, is a leading global public health issue. Ethiopia is currently conducting multiple pertussis outbreak investigations, but there is a lack of comprehensive information on attack rates, case fatality rates, and infection predictors. This study aimed to measure attack rates, case fatality rates, and factors associated with pertussis outbreaks. Methods: This study conducted a systematic review and meta-analysis of published and unpublished observational studies on pertussis outbreaks in Ethiopia from 2009 to 2023, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guideline. The study utilized databases and registers including Science Direct, MEDLINE/PubMed, African Journals Online, and Google Scholar. The data were collected using an Excel spreadsheet and then exported to STATA version 17 for analysis. Subgroup analysis was conducted to identify potential disparities. A random effects model was used to account for heterogeneity among studies, which was assessed using the I² statistic. The attack rate, case fatality rate, and odds ratio (OR) were presented using forest plots with 95% confidence intervals. Egger's and Begg's tests were used to evaluate publication bias. Results: Seven pertussis outbreak investigations with a total of 2824 cases and 18 deaths were incorporated. The pooled attack and case fatality rates were 10.78 (95% CI: 8.1–13.5) per 1000 population and 0.8% (95% CI: 0.01–1.58%), respectively. The highest and lowest regional attack rates were in Oromia (5.57 per 1000 population) and the Amhara region (2.61 per 1000 population), respectively. Predictors of pertussis infection during outbreaks were being unvaccinated [odds ratio (OR) = 3.05, 95% CI: 1.83–4.27] and having a contact history [OR = 3.44, 95% CI: 1.69–5.19]. Conclusion: Attack and case fatality rates were high and varied notably across regions. Being unvaccinated and having a contact history predicted contracting pertussis in Ethiopia. Routine vaccination and contact-tracing efforts should be strengthened.
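Random-effects pooling of the kind used in this meta-analysis weights each study by the sum of its within-study variance and an estimated between-study variance. A minimal sketch using the common DerSimonian-Laird estimator (the abstract does not specify the estimator, and the per-study rates and variances below are hypothetical, not the Ethiopian data):

```python
import math

def dersimonian_laird(estimates, variances, z=1.96):
    """Pool per-study estimates under a DerSimonian-Laird random-effects model."""
    w = [1.0 / v for v in variances]  # fixed-effect (inverse-variance) weights
    y_fixed = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)
    q = sum(wi * (yi - y_fixed) ** 2 for wi, yi in zip(w, estimates))  # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(estimates) - 1)) / c)  # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]     # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, estimates)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, pooled - z * se, pooled + z * se

# Hypothetical per-study attack rates per 1000 population and their variances.
pooled, lo, hi = dersimonian_laird([5.6, 12.0, 9.3], [1.2, 2.5, 0.9])
```

When between-study heterogeneity (tau²) is large, the random-effects weights flatten toward equality, which is why heterogeneous outbreak investigations like these are pooled this way rather than with a fixed-effect model.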
- Research Article
- 10.1093/pch/pxab092
- Mar 1, 2022
- Paediatrics & Child Health
Kawasaki disease (KD) is a common childhood vasculitis with increasing incidence in Canada. Acute KD hospitalizations are associated with high health care costs. However, there is minimal health care utilization data following initial hospitalization. Our objective was to determine rates of health care utilization and costs following KD diagnosis. We used population-based health administrative databases to identify all children (0 to 18 years) hospitalized for KD in Ontario between 1995 and 2018. Each case was matched to 100 nonexposed comparators by age, sex, and index year. Follow-up continued until death or March 2019. Our primary outcomes were rates of hospitalization, emergency department (ED), and outpatient physician visits. Our secondary outcomes were sector-specific and total health care costs. We compared 4,597 KD cases to 459,700 matched comparators. KD cases had higher rates of hospitalization (adjusted rate ratio 2.07, 95%CI 2.00 to 2.15), outpatient visits (1.30, 95%CI 1.28 to 1.33), and ED visits (1.22, 95%CI 1.18 to 1.26) throughout follow-up. Within 1 year post-discharge, 717 (15.6%) KD cases were re-hospitalized, 4,587 (99.8%) had ≥1 outpatient physician visit and 1,695 (45.5%) had ≥1 ED visit. KD cases had higher composite health care costs post-discharge (e.g., median cost within 1 year: $2466 CAD [KD cases] versus $234 [comparators]). Total health care costs for KD cases, respectively, were $13.9 million within 1 year post-discharge and $54.8 million throughout follow-up (versus $2.2 million and $23.9 million for an equivalent number of comparators). Following diagnosis, KD cases had higher rates of health care utilization and costs versus nonexposed children. The rising incidence and costs associated with KD could place a significant burden on health care systems.