Abstract

This paper focuses on inferring a general class of hidden Markov models (HMMs) using data acquired from experts. Expert-acquired data contain decisions/actions made by humans/users for various objectives, such as navigation data reflecting drivers' behavior, cybersecurity data carrying defender decisions, and biological data containing biologists' actions (e.g., interventions and experiments). Conventional inference methods rely on temporal changes in data without accounting for expert knowledge. This paper incorporates expert knowledge into the inference of HMMs by modeling expert behavior as an imperfect reinforcement learning agent. The proposed method optimally quantifies experts' perception of the system model, which, alongside the temporal changes in data, contributes to the inference process. The proposed inference method is derived through a combination of dynamic programming and optimal recursive Bayesian estimation. The applicability of this method is demonstrated for a wide range of inference criteria, such as maximum likelihood and maximum a posteriori. The performance of the proposed method is investigated through comprehensive numerical experiments using a benchmark problem and biological networks.
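To make the flavor of the approach concrete, the following is a minimal, hypothetical Python sketch rather than the paper's algorithm. It assumes, for brevity, fully observed states (the paper treats hidden states), a finite set of candidate transition models, a known reward matrix R, and a Boltzmann-rational (softmax) model of the imperfect expert. The expert policy is obtained by dynamic programming (value iteration), and a posterior over candidate models is updated recursively from both observed transitions and expert actions; all names (q_values, expert_policy, posterior_over_models, beta) are illustrative assumptions.

```python
import numpy as np

def q_values(P, R, gamma=0.9, iters=200):
    """Action-value function under candidate transition model P via value iteration.
    P: (A, S, S) transition tensor, R: (S, A) reward matrix."""
    A, S, _ = P.shape
    Q = np.zeros((S, A))
    for _ in range(iters):
        V = Q.max(axis=1)                                  # greedy state values
        Q = R + gamma * np.einsum('asn,n->sa', P, V)       # Bellman backup
    return Q

def expert_policy(Q, beta=2.0):
    """Imperfect (Boltzmann-rational) expert: softmax over Q-values, inverse temperature beta."""
    logits = beta * Q
    logits -= logits.max(axis=1, keepdims=True)            # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum(axis=1, keepdims=True)        # shape (S, A)

def posterior_over_models(candidates, R, trajectory, prior=None, beta=2.0):
    """Recursive Bayesian update of a posterior over candidate transition models.
    Each likelihood term combines the observed transition and the expert's action,
    the latter scored under the softmax policy derived by dynamic programming."""
    K = len(candidates)
    log_post = np.log(np.full(K, 1.0 / K) if prior is None else np.asarray(prior))
    policies = [expert_policy(q_values(P, R), beta) for P in candidates]
    for (s, a, s_next) in trajectory:
        for k, P in enumerate(candidates):
            log_post[k] += np.log(policies[k][s, a] + 1e-12)   # expert-action likelihood
            log_post[k] += np.log(P[a, s, s_next] + 1e-12)     # transition likelihood
    log_post -= log_post.max()
    post = np.exp(log_post)
    return post / post.sum()

# Hypothetical toy usage: 2 actions, 3 states, two candidate transition models.
rng = np.random.default_rng(0)
def random_model(A=2, S=3):
    P = rng.random((A, S, S))
    return P / P.sum(axis=2, keepdims=True)

candidates = [random_model(), random_model()]
R = rng.random((3, 2))                                  # assumed-known reward matrix
traj = [(0, 1, 2), (2, 0, 1), (1, 1, 0)]                # (state, expert action, next state)
post = posterior_over_models(candidates, R, traj)
map_model = candidates[int(np.argmax(post))]            # maximum a posteriori candidate
```

Under these simplifying assumptions, the maximum a posteriori estimate is the candidate with the largest posterior mass, while maximum likelihood corresponds to using a uniform prior.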
