BACKGROUND: Kaizen is a Japanese term for continuous improvement (kai ~ change, zen ~ good). In a kaizen task, a respondent makes sequential choices to improve an object's profile, revealing a preference path. Including kaizen tasks in a discrete choice experiment has the advantage of collecting more preference evidence than pick-one tasks, such as paired comparisons.

OBJECTIVE AND METHODS: To date, three online discrete choice experiments have included kaizen tasks: the 2020 US COVID-19 vaccination (CVP) study, the 2021 UK Children's Surgery Outcome Reporting (CSOR) study, and the 2023 US EQ-5D-Y-3L valuation (Y-3L) study. In this evidence synthesis, we describe the performance of the kaizen tasks in each survey in terms of response behaviors, conditional logit and Zermelo-Bradley-Terry (ZBT) estimates, and their standard errors.

RESULTS: Comparing the CVP and Y-3L, including hold-outs (i.e., attributes shared by all alternatives) appears to reduce positional behavior by half. The CVP tasks excluded multi-level improvements; therefore, we could not estimate logit main effects directly. In the CSOR, only 12 of the 21 logit estimates are significantly positive (p < 0.05), possibly due to the fixed attribute order. All Y-3L estimates are significantly positive, and their predictions correlate strongly (Pearson: logit 0.802, ZBT 0.882) and agree closely (Lin: logit 0.744, ZBT 0.852) with the paired-comparison probabilities.

CONCLUSIONS: These discrete choice experiments offer important lessons for future studies: (1) include warm-up tasks, hold-outs, and multi-level improvements; (2) randomize the attribute order (i.e., up-down) at the respondent level; and (3) recruit smaller samples of respondents than are typical for traditional discrete choice experiments with only pick-one tasks.
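To make the estimation and agreement metrics concrete, below is a minimal, hypothetical sketch (not the studies' code): it fits Bradley-Terry strengths from made-up pairwise-choice counts using Zermelo's iterative algorithm and then compares the model-implied choice probabilities with observed paired-comparison shares via Pearson correlation and Lin's concordance correlation coefficient. The `wins` matrix, function names, and all counts are invented for illustration only.

```python
# Hypothetical sketch: Zermelo/Bradley-Terry fitting plus Pearson and Lin agreement.
import numpy as np

def fit_bradley_terry(wins, n_iter=200, tol=1e-10):
    """Zermelo's iterative algorithm.
    wins[i, j] = number of times alternative i was chosen over alternative j.
    Returns strengths normalized to sum to 1."""
    n = wins.shape[0]
    totals = wins + wins.T            # comparisons made between each pair
    w = wins.sum(axis=1)              # total wins per alternative
    p = np.ones(n) / n
    for _ in range(n_iter):
        denom = (totals / (p[:, None] + p[None, :])).sum(axis=1)
        p_new = w / denom
        p_new /= p_new.sum()
        if np.max(np.abs(p_new - p)) < tol:
            return p_new
        p = p_new
    return p

def lin_ccc(x, y):
    """Lin's concordance correlation coefficient between two vectors."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (x.var() + y.var() + (mx - my) ** 2)

# Invented pairwise-choice counts among 4 profiles
wins = np.array([[ 0, 30, 42, 55],
                 [20,  0, 33, 48],
                 [ 8, 17,  0, 36],
                 [ 5, 12, 14,  0]])
strengths = fit_bradley_terry(wins)

# Model-implied probability that the row profile beats the column profile
pred = strengths[:, None] / (strengths[:, None] + strengths[None, :])
iu = np.triu_indices(4, k=1)
observed = wins[iu] / (wins + wins.T)[iu]    # observed paired-comparison shares

pearson = np.corrcoef(pred[iu], observed)[0, 1]
print(f"Pearson r = {pearson:.3f}, Lin CCC = {lin_ccc(pred[iu], observed):.3f}")
```

The same two agreement measures could be applied to conditional logit predictions; only the predicted probabilities would change.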