Abstract

A disease severity index (DSI) is a single number for summarising a large amount of information on disease severity. The DSI has most often been used with data based on a special type of ordinal scale comprising a series of consecutive ranges of defined numeric intervals, generally based on the percent area of symptoms presenting on the specimen(s). Plant pathologists and other professionals use such ordinal scale data in conjunction with a DSI (%) for treatment comparisons. The objective of this work is to explore the effects both of different scales (i.e. those having equal or unequal classes, or different interval widths) and of the choice of values assigned to scale intervals (i.e. the ordinal grade of the category or the midpoint value of the interval) on the null hypothesis test for the treatment comparison. A two-stage simulation approach was employed to approximate the real mechanisms governing the disease-severity sampling design. Subsequently, a meta-analysis was performed to compare the effects of two treatments, which demonstrated that using quantitative ordinal rating grades or the midpoint conversion for the ranges of disease severity yielded very comparable results with respect to the power of hypothesis testing. However, the principal factor determining the power of the hypothesis test is the nature of the intervals, not the choice of values assigned to ordinal scale intervals (i.e. not the midpoint or ordinal grade). Although using the percent scale is always preferable, the results of this study provide a framework for developing improved research methods where the use of ordinal scales in conjunction with a DSI is either preferred or a necessity for comparing disease severities.
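The sketch below is not taken from the paper; it is a minimal illustration, under assumed inputs, of the two ways of summarising ordinal severity ratings that the abstract contrasts: a DSI (%) computed from the ordinal grades themselves, and a mean percent severity computed from the midpoints of the class intervals. The 0–5 scale and its percent intervals are hypothetical examples, not the scales evaluated in the study.

```python
# Hypothetical ordinal scale: grade -> (lower %, upper %) of diseased area.
# The grades and interval boundaries are illustrative only.
SCALE = {
    0: (0.0, 0.0),
    1: (0.0, 5.0),
    2: (5.0, 25.0),
    3: (25.0, 50.0),
    4: (50.0, 75.0),
    5: (75.0, 100.0),
}
MAX_GRADE = max(SCALE)


def dsi_from_grades(ratings):
    """DSI (%) = sum of ordinal grades / (n specimens * max grade) * 100."""
    return 100.0 * sum(ratings) / (len(ratings) * MAX_GRADE)


def mean_severity_from_midpoints(ratings):
    """Mean percent severity using the midpoint of each class interval."""
    midpoints = {g: (lo + hi) / 2.0 for g, (lo, hi) in SCALE.items()}
    return sum(midpoints[g] for g in ratings) / len(ratings)


if __name__ == "__main__":
    # Ordinal ratings for ten hypothetical specimens in one treatment plot.
    sample = [0, 1, 2, 2, 3, 4, 5, 1, 2, 3]
    print(f"DSI from ordinal grades:      {dsi_from_grades(sample):.1f}%")
    print(f"Mean severity from midpoints: {mean_severity_from_midpoints(sample):.1f}%")
```

In a treatment-comparison setting, either summary would then be carried into the hypothesis test; the abstract's point is that the two choices give very comparable power, with the structure of the intervals themselves being the dominant factor.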
