Abstract

We present a study of broadcast British Sign Language (BSL) interpreted weather forecasts. These are filmed against a green screen, with the broadcast composite image superimposing maps, satellite imagery, and other visual material that can be indexed. We examine the semiotic resources used when the available on-screen information is presented to the viewing audiences. The forecasters and interpreters tailor their multimodal communicative practice to the sensory ecology (Kusters, 2017) of the audiences they serve. That is to say, speakers/hearers hear the spoken monolingual linguistic resources while seeing the gestural resources of the forecaster; BSL signers/watchers view the multilingual linguistic resources (both categorical and gradient) and co-sign gestural resources, watching the gestural resources of both the forecaster and the interpreter-presenter. We find that while the weather presenters and the in-vision interpreter-presenters use similar gestural resources, the temporal alignment of their semiotic assemblages (Pennycook & Otsuji, 2017) of linguistic and gestural resources differs. In addition to the simultaneous assemblages favoured by the weather forecaster presenters, the interpreter-presenters also create consecutive semiotic assemblages. The assumed normative practices of the deaf audience appear to contribute significantly to this consecutive use of semiotic resources that we see presented in BSL by in-vision interpreter-presenters.
