Abstract

Collaboration between robots and human beings has inspired researchers and novelists for a long time. Indeed, the word “robot” first appeared in a theatre play (“R.U.R.”, Karel Capek, 1921), referring to an automaton character, a humanoid slave. Researchers have presented important advances in control strategies for service robots, toys, and autonomous vehicles, particularly concerning interaction with human beings. Over time, manipulator robots became widely used in industrial plants, performing predefined and repetitive tasks. Modern applications for manipulators, involving two or more robots in cooperative tasks, are now arising in industry. Most scientific publications in this area present solutions for certain aspects involving humans, mainly related to safety in robot workspaces and the flexibility to quickly operate and reconfigure the robots. However, the way manipulators are operated remains rigidly based on imperative programming through a Human-Robot Interface (HRI). On the other hand, the behaviour-based approach proposed by Brooks (1986) allows the definition of reactive control models applied to mobile robots. The main limitation of this approach is its strictly reactive nature, i.e. the knowledge the robot acquires about the environment is unpredictable. Current trends in several research areas point to the possible occurrence of a new singularity, when mankind will experience the disembodiment of knowledge, i.e. human knowledge, including consciousness, will be retrieved from the brain and transferred to another place or machine (Vinge, 2008). Psychologists (Pinker, 1999) argue that mental states, as well as deliberations and emotions, can be represented by means of symbols of a mental language known as “Mentalese”. The unrestricted representation of signals and symbols for all mental states and their causal relations is practically impossible given the current state of the art in technology. However, if restricted to specific domains, it can be achieved. Rules and policies for collaborative environments consist of well-formed sentences that describe states, causal relationships, and their effects, applied to collaboration among human beings. These rules and policies have been used in several situations involving computer-supported cooperative work (CSCW). This chapter presents and discusses the application of symbolic rules to coordinate collaborative environments with manipulators and humans. It also demonstrates how to express a set of collaborative rules with common effects for machines and humans.
