Abstract

Behavioral models are useful tools for understanding how programs work. Although several inference approaches have been introduced to generate extended finite-state automata from software execution traces, they suffer from accuracy, flexibility, and decidability issues. In this article, we apply a hybrid technique that combines reinforcement learning with stochastic modeling to generate an extended probabilistic finite-state automaton from software traces. Our approach, ReHMM (Reinforcement learning-based Hidden Markov Modelling), addresses the problems of inflexibility and undecidability reported for other state-of-the-art approaches. Experimental results indicate that ReHMM outperforms other inference algorithms.
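
To illustrate the general idea of inferring a probabilistic finite-state automaton from execution traces, the following minimal Python sketch estimates transition probabilities by frequency counting over event sequences. This is only an illustrative baseline under assumed inputs (the traces and function name are hypothetical); it does not implement ReHMM's reinforcement-learning or Hidden Markov components.

    from collections import defaultdict

    def infer_pfsa(traces):
        # Count observed transitions between consecutive events,
        # including synthetic START and END states.
        counts = defaultdict(lambda: defaultdict(int))
        for trace in traces:
            prev = "START"
            for event in trace:
                counts[prev][event] += 1
                prev = event
            counts[prev]["END"] += 1
        # Normalise counts into per-state transition probabilities.
        model = {}
        for state, successors in counts.items():
            total = sum(successors.values())
            model[state] = {e: c / total for e, c in successors.items()}
        return model

    # Hypothetical traces of a file-handling component.
    traces = [
        ["open", "read", "read", "close"],
        ["open", "write", "close"],
        ["open", "read", "write", "close"],
    ]
    print(infer_pfsa(traces)["open"])  # e.g. {'read': 0.67, 'write': 0.33}

A trace-based model like this captures how likely one observed event is to follow another; approaches such as ReHMM build on this kind of stochastic model with hidden states and learned policies rather than plain frequency counts.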
