Abstract

Current mobile language-learning applications are the latest link in a chain of learning materials designed to trigger self-directed and holistic learning experiences. Their interactive and visually appealing materials provide contextualized input and offer various options for enhancing a learner’s productive and receptive language skills. In practice, however, the multimodal potential of mobile-assisted language learning applications appears far from exhausted: many mobile apps use multimodality only in a limited way, either employing images for purely illustrative purposes or exhibiting large discrepancies between the ideational meanings of different representational modes. To assess the potential of multimodal meaning creation for mobile learning, my paper reflects on specific semiotic characteristics of verbal and pictorial signs and investigates the semantic relations holding between these modes. Integrating insights from educational and multimodal theory with findings from mobile learning research helps to identify a set of intermodal relations that are particularly suited for analyzing text-image links in mobile-assisted language learning environments. Applying an empirical lens, the paper then investigates patterns of multimodal meaning creation in the vocabulary tasks of one of the most popular language-learning apps, Duolingo. Finally, my descriptive and empirical findings are summarized and integrated into a set of guidelines on how to exploit text-image links for vocabulary and language learning in mobile environments.
