Abstract
The problem of robotic task definition and execution was pioneered by Mason, who defined set-point constraints in which the position, velocity, and/or forces are expressed in one particular task frame for a 6-DOF robot. Later extensions generalized this approach to constraints in 1) multiple frames; 2) redundant robots; 3) other sensor spaces such as cameras; and 4) trajectory tracking. Our work extends task definition to 1) expressions of constraints, with a focus on expressions between geometric entities (distances and angles), in place of explicit set-point constraints; 2) a systematic composition of constraints; 3) runtime monitoring of all constraints (which allows for runtime sequencing of constraint sets via, for example, a Finite State Machine); and 4) formal task descriptions that can be used by symbolic reasoners to plan and analyse tasks. This means that tasks are seen as ordered groups of constraints to be achieved by the robot’s motion controller, possibly with different sets of geometric expressions to measure outputs that are not controlled but are relevant to assess the task's evolution. These monitored expressions may result in events that trigger switching to another ordered group of constraints to execute and monitor. For these task specifications, formal language definitions are introduced in the JSON-schema modeling language.
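To make the idea of a JSON-schema task description concrete, the sketch below validates a hypothetical task specification against a minimal schema: ordered groups of controlled constraint expressions, plus monitored expressions whose events trigger a switch to another group. The field names (`constraint_groups`, `constraints`, `monitors`, `next_group`) and the example expressions are illustrative assumptions, not the schema defined in the paper.

```python
# Minimal illustrative sketch (not the paper's actual schema): a task as
# ordered groups of constraints, plus monitored expressions whose events
# trigger a switch to another group, validated with JSON Schema.
from jsonschema import validate

# Hypothetical schema: all field names are assumptions for illustration only.
TASK_SCHEMA = {
    "type": "object",
    "required": ["name", "constraint_groups"],
    "properties": {
        "name": {"type": "string"},
        "constraint_groups": {
            "type": "array",
            "items": {
                "type": "object",
                "required": ["constraints"],
                "properties": {
                    # Expressions to be achieved by the motion controller.
                    "constraints": {
                        "type": "array",
                        "items": {
                            "type": "object",
                            "required": ["expression", "target"],
                            "properties": {
                                # e.g. a distance or angle between geometric entities
                                "expression": {"type": "string"},
                                "target": {"type": "number"},
                            },
                        },
                    },
                    # Expressions that are measured but not controlled.
                    "monitors": {
                        "type": "array",
                        "items": {
                            "type": "object",
                            "required": ["expression", "event", "threshold", "next_group"],
                            "properties": {
                                "expression": {"type": "string"},
                                "event": {"type": "string"},        # e.g. "below", "above"
                                "threshold": {"type": "number"},
                                "next_group": {"type": "integer"},  # index of the group to switch to
                            },
                        },
                    },
                },
            },
        },
    },
}

# Hypothetical task instance: approach until a distance monitor fires, then align.
task = {
    "name": "approach_and_align",
    "constraint_groups": [
        {
            "constraints": [{"expression": "distance(tool_tip, hole_center)", "target": 0.0}],
            "monitors": [{"expression": "distance(tool_tip, hole_center)",
                          "event": "below", "threshold": 0.005, "next_group": 1}],
        },
        {
            "constraints": [{"expression": "angle(tool_axis, hole_axis)", "target": 0.0}],
        },
    ],
}

validate(instance=task, schema=TASK_SCHEMA)  # raises ValidationError if the spec is malformed
print("task specification is valid")
```

In this sketch, the Finite State Machine behaviour mentioned in the abstract would be realized by an executor that runs one constraint group at a time and uses the `monitors` entries to decide when to jump to `next_group`.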