[Figure 7: Levels of sharing (features; concept matching; self-calibration, self-learning). Representation of the level at which a common representation is assumed to share a recognition system between users (platforms) or domains.]

A hybrid approach between sensor-level and feature-level sharing was further proposed by Kunze et al., who demonstrated that sensors can autonomously self-characterize their on-body placement [55] and orientation [56] using machine-learning techniques. They propose using this on-body placement self-characterization to select, among a number of preprogrammed ARCs, the one best suited to the detected sensor placement. Similarly, in robotics, data from different sensors can be converted into identical abstract representations; for instance, 3-D point clouds can be obtained from either stereovision or a laser range finder.

Classifier-Level Sharing

Transfer learning makes it possible to translate a classification problem from one feature space to another [57] and has been used to transfer perceptual categories across modalities in biological and artificial systems [58]. Conceptually, transfer learning may therefore be used to transfer the capability to recognize activities from one platform to another without enforcing a similar input space (i.e., the same sensors and features); the transfer does not affect higher-level reasoning. Practical principles allowing a system A to confer activity-recognition capabilities on another system B are outlined in [19]. Each of the systems A and B is composed of a set of sensors (S_A and S_B, respectively), an ARC (ARC_A and ARC_B), and a unified communication protocol. The process of transfer learning works as follows (see Figure 8; a code sketch follows below).

• The user employs an activity-aware system A with ARC_A and sensor set S_A. For instance, a set of instrumented drawers is capable of reporting which one is being opened or closed in a storage-management scenario.

• A new system B is deployed in the user's personal area network, comprising a set of unknown new sensors S_B (on body and/or in the user's surroundings) and an untrained ARC_B. For instance, the user wears a new sensorized wristband with an integrated acceleration sensor.

• As the user performs activities, ARC_A recognizes them and broadcasts this information.

• System B receives the class labels of the recognized activities. ARC_B incrementally learns the mapping between the signals of the sensor set S_B and the activity classes.

• Eventually, system A can be removed. The activity-recognition capability is now entirely provided by system B.

The underlying assumption is that the two systems coexist long enough for the transfer learning to take place. Figure 8 shows that, as the user interacts with a set of drawers, the body-worn system incrementally learns to recognize opening and closing gestures.

[Figure 8: Recognized activities from drawer sensors (system A) and recognized activities from on-body sensors (system B).]

In robotics, this sharing approach may be used to allow robots with different sensory inputs to learn to recognize semantically identical activities, or to learn how to use a new sensor when robot parts are upgraded, thus easing programming.
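To make the process concrete, the following minimal Python sketch mimics the drawer/wristband scenario above: ARC_A's broadcast class labels are paired with windows of system B's acceleration signal, and ARC_B learns incrementally from those pairs. The feature extraction, the synthetic signals, and the nearest-centroid learner are illustrative assumptions, not the method prescribed in [19].

import numpy as np

class IncrementalNearestCentroid:
    """ARC_B stand-in: keeps a running per-class mean of feature vectors
    and predicts the class of the nearest centroid."""
    def __init__(self):
        self.sums, self.counts = {}, {}

    def partial_fit(self, x, label):
        # Update the running sum and sample count for this class.
        if label not in self.sums:
            self.sums[label] = np.zeros_like(x, dtype=float)
            self.counts[label] = 0
        self.sums[label] += x
        self.counts[label] += 1

    def predict(self, x):
        # Return the class whose centroid is closest to x.
        centroids = {c: self.sums[c] / self.counts[c] for c in self.sums}
        return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

def features(window):
    # Hypothetical features computed from one window of wristband
    # acceleration samples.
    return np.array([window.mean(), window.std(), np.abs(np.diff(window)).mean()])

arc_b = IncrementalNearestCentroid()
rng = np.random.default_rng(0)

# Training phase: both systems coexist. Each label broadcast by ARC_A is
# paired with the acceleration window system B recorded at the same time.
for _ in range(200):
    label = rng.choice(["drawer_open", "drawer_close"])
    # Stand-in signals: the two gestures differ in mean acceleration level.
    window = rng.normal(1.0 if label == "drawer_open" else -1.0, 0.5, size=50)
    arc_b.partial_fit(features(window), label)

# Deployment phase: system A is removed; ARC_B recognizes activities alone.
test_window = rng.normal(1.0, 0.5, size=50)
print(arc_b.predict(features(test_window)))  # expected: drawer_open

Once the centroids stabilize, system A can be withdrawn, mirroring the last step of the list above.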
Symbolic-Level Sharing

The reasoning program that infers higher-level activities from spotted action primitives is shared between platforms. As the environments in which the two platforms operate may lead to the detection of semantically different action primitives, a direct transfer of the reasoning is not always possible. Carrying out a prior concept matching can address this. For instance, to reason about the activity of a user, one first needs to know in which room he is located. One environment may have a sensor that detects the action primitive "room door activated," while another may have a proximity infrared sensor that detects "movement in the room." The interpretation of the sensor data requires different features and classifiers in each case. However, although the classifiers deliver semantically different action primitives, both can be matched to the same higher-level concept (the user's presence in the room), so that the shared reasoning operates identically in both environments.
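As an illustration, the following Python sketch applies such concept matching to the room example above: each environment maps its own detected primitives onto a shared concept vocabulary, and one reasoning program then runs unchanged on both platforms. The primitive strings come from the example; the concept name, the maps, and the rule are hypothetical.

# Per-environment concept matching: platform-specific primitive -> shared concept.
CONCEPT_MAP_A = {"room door activated": "user_in_room"}
CONCEPT_MAP_B = {"movement in the room": "user_in_room"}

def to_concepts(primitives, concept_map):
    """Translate detected action primitives into the shared concept vocabulary."""
    return {concept_map[p] for p in primitives if p in concept_map}

def reasoner(concepts):
    """Shared high-level reasoning, written only against shared concepts."""
    if "user_in_room" in concepts:
        return "infer room-level activities"
    return "user absent: skip room-level reasoning"

# The identical reasoner serves both environments despite different sensors.
print(reasoner(to_concepts({"room door activated"}, CONCEPT_MAP_A)))
print(reasoner(to_concepts({"movement in the room"}, CONCEPT_MAP_B)))

Because the reasoner refers only to shared concepts, transferring it to a new environment requires writing only that environment's concept map, not retraining or rewriting the reasoning itself.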