Abstract

Purpose: Findings from cross-sectional blindness prevalence surveys are at risk of several biases that cause the study estimate to differ from the 'true' population prevalence. For example, response bias occurs when people who participate ('responders') differ from those who do not ('non-responders') in ways that affect prevalence estimates. This study aimed to assess the extent to which response bias is considered and occurs in blindness prevalence surveys in low- and middle-income countries (LMICs).

Methods: We searched MEDLINE, EMBASE and Web of Science for cross-sectional blindness prevalence surveys undertaken in LMICs and published 2009–2017. From included studies, we recorded and descriptively analysed details regarding enumeration processes, response, and non-response, including the impact of non-response on results.

Results: Most (95%) of the 92 included studies reported a response rate (median 91.7%, inter-quartile range 85.9–95.6%). Approximately half clearly described enumeration processes (49%) and reported at least one strategy to increase the response rate (53%); a quarter (23%) statistically compared responders and non-responders. When differential response was assessed, men were more likely to be non-responders than women. Two-thirds (65%) of the time a sociodemographic difference was found between responders and non-responders, a difference in blindness prevalence was also found. Only 13 studies (14%) commented on the implications of non-response for prevalence estimates.

Conclusions: Response rates are commonly reported from blindness prevalence surveys, and tend to be high. High response rates reduce—but do not eliminate—the risk of response bias. Assessment and reporting of potential response bias in blindness prevalence surveys could be greatly improved.

