Abstract

Some plan recognition approaches represent knowledge about the agents under observation in the form of a plan library. Although such approaches use conceptually similar plan library representations, they seldom, if ever, use the exact same domain, which prevents direct comparison of their performance. For any non-trivial domain, such plan libraries have complex structures representing possible agent behavior, so plan recognition approaches are often not tested at their limits and are only rarely compared with each other experimentally, leading to the need for a principled approach to evaluating them. To address this shortcoming, we develop a mechanism to automatically generate arbitrarily complex plan libraries; this generation can be directed through a number of parameters to allow for systematic experimentation.
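To illustrate the idea of parameter-directed plan library generation, the sketch below builds a random goal-decomposition forest. This is a minimal illustrative example, not the authors' actual generator: the parameter names (`depth`, `branching`, `num_goals`) and the tree representation are assumptions chosen for clarity.

```python
import random

def generate_plan_library(depth, branching, num_goals, seed=None):
    """Generate a toy plan library as a forest of goal-decomposition trees.

    Hypothetical parameters (not from the paper):
      depth     -- maximum decomposition depth of each tree
      branching -- maximum number of sub-plans per plan node
      num_goals -- number of top-level goals in the library
      seed      -- optional seed for reproducible libraries
    """
    rng = random.Random(seed)
    counter = [0]  # running id for unique plan names

    def make_node(level):
        counter[0] += 1
        name = f"plan_{counter[0]}"
        if level >= depth:
            # nodes at the maximum depth are primitive actions (leaves)
            return {"name": name, "children": []}
        # vary the breadth up to `branching` so structures differ
        kids = [make_node(level + 1) for _ in range(rng.randint(1, branching))]
        return {"name": name, "children": kids}

    return [make_node(1) for _ in range(num_goals)]

# Example: a small library of 2 goals, depth 3, branching factor 2
library = generate_plan_library(depth=3, branching=2, num_goals=2, seed=42)
print(len(library))
```

Varying the parameters systematically (e.g., sweeping `depth` or `branching`) is what enables the kind of controlled, at-the-limits experimentation the abstract argues for.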
