Abstract

In this paper, we present an agenda for the research directions we recommend for addressing the issues of realizing and evaluating communication in collaborative problem solving (CPS) instruments. We outline potential ways to improve (1) generalizability in Human–Human assessment tools and ecological validity in Human–Agent ones; (2) flexible and convenient use of restricted communication options; and (3) an evaluation system covering both Human–Human and Human–Agent instruments. To demonstrate possible routes for realizing some of our suggestions, we provide examples by introducing the features of our own CPS instrument. It is a Human–Human pre-version of a future Human–Agent instrument and a promising diagnostic and research tool in its own right, as well as the first example of transforming the so-called MicroDYN approach to make it suitable for Human–Human collaboration. Within the test, we offer new communication alternatives in addition to pre-defined messages, which are also suitable for automated coding: for example, participants can send or request visual information as well as verbal messages. As a hybrid evaluation solution, not only the pre-defined messages but also a number of behavioural patterns are proposed as indicators of different CPS skills.
