Abstract

Objectives: This study sought to determine whether children’s auditory environments are accurately captured by the automatic scene classification embedded in cochlear implant (CI) processors and to quantify the amount of electronic device use in these environments. Methods: Seven children with CIs (mean age = 36.71 months, SD = 11.94; three male, four female) participated in this study. Eleven datalogs containing output from the Cochlear™ Nucleus® 6 (Cochlear Corporation, Australia) CI scene classification algorithm and seven day-long audio recordings collected with a Language ENvironment Analysis (LENA; LENA Research Foundation, USA) recorder were obtained for analysis. Results: Output from the scene classification algorithm was strongly correlated with categories determined through human coding (ICC = .86, CI = [−0.2, 1], F(5, 5.1) = 5.9, P = 0.04), but some differences emerged. Scene classification identified more ‘Quiet’ (t(8.2) = 4.1, P = 0.003) than human coders, while human coders identified more ‘Speech’ (t(10.6) = −2.4, P = 0.04). On average, 8% (SD = 5.8) of the children’s day was spent in electronic sound, which was primarily produced by mobile devices (39.7%). Discussion: Although CI scene classification software reflects children’s natural auditory environments, it is important to consider how different scenes are defined when interpreting results. An electronic sounds category should be considered, given how often children are exposed to such sounds.
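The agreement statistics reported above (an intraclass correlation across scene categories and Welch’s t-tests on individual scenes) can be illustrated with a short analysis sketch. The snippet below is a hypothetical reconstruction, not the study’s analysis code: the six scene labels are assumed from the Nucleus 6 SCAN classifier, and every proportion is an invented placeholder.

# Hypothetical sketch only: agreement between datalog scene classification
# and human coding of LENA recordings. All values below are placeholders,
# not data from the study.
import pandas as pd
import pingouin as pg
from scipy import stats

# Mean proportion of the day per scene category, one value per coding source.
scenes = ["Quiet", "Speech", "Speech in Noise", "Noise", "Music", "Wind"]
long = pd.DataFrame({
    "scene": scenes * 2,
    "rater": ["datalog"] * 6 + ["human"] * 6,
    "prop":  [0.45, 0.18, 0.12, 0.15, 0.08, 0.02,   # datalog (placeholder)
              0.33, 0.27, 0.14, 0.16, 0.08, 0.02],  # human coding (placeholder)
})

# Intraclass correlation across the six scene categories
# (six targets give 5 numerator degrees of freedom, as in the F test above).
icc = pg.intraclass_corr(data=long, targets="scene", raters="rater", ratings="prop")
print(icc[["Type", "ICC", "CI95%", "F", "pval"]])

# Welch's t-test (unequal variances) on per-child 'Quiet' proportions,
# consistent with the fractional degrees of freedom reported.
quiet_datalog = [0.48, 0.41, 0.50, 0.44, 0.39, 0.46, 0.47]  # placeholder
quiet_human   = [0.35, 0.30, 0.36, 0.31, 0.28, 0.33, 0.34]  # placeholder
t, p = stats.ttest_ind(quiet_datalog, quiet_human, equal_var=False)
print(f"Quiet: t = {t:.2f}, p = {p:.3f}")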
