Abstract

The success of grounded language acquisition using perceptual data (e.g., in robotics) is affected by the complexity of both the perceptual concepts being learned and the language describing those concepts. We present methods for analyzing this complexity, using both visual features and entropy-based evaluation of sentences. Our work illuminates core, quantifiable statistical differences in how language is used to describe different traits of objects, and in the visual representation of those objects. The methods we use provide an additional analytical tool for research in perceptual language learning.
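As a rough illustration of what "entropy-based evaluation of sentences" can mean, the sketch below computes the Shannon entropy of the token distribution in a sentence. This is a minimal, hypothetical example (the paper's actual measures are not specified in this abstract): a description using many distinct words yields higher entropy than a repetitive one.

```python
import math
from collections import Counter

def shannon_entropy(tokens):
    """Shannon entropy (in bits) of the token distribution of a sentence.

    A hypothetical proxy for linguistic complexity: higher entropy means
    the description spreads probability mass over more distinct words.
    """
    counts = Counter(tokens)
    total = len(tokens)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Four distinct words used once each: entropy = log2(4) = 2.0 bits.
print(shannon_entropy("red small wooden cube".split()))   # 2.0
# A repetitive description carries less information per token (~0.811 bits).
print(shannon_entropy("red red red cube".split()))
```

Comparing such per-sentence scores across descriptions of different object traits (e.g., color vs. shape) is one simple way to quantify how language complexity varies with the concept being described.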
