Abstract

Recent years have witnessed a surge in the application of deep neural networks (DNNs) and reinforcement learning (RL) methods to autonomous control systems and game-playing problems. While these methods are capable of learning from real-world data and producing adequate actions for various state conditions, their internal complexity makes it difficult to explain their actions. In this paper, we generate state-action pair data from a trained DNN/RL system and employ a previously proposed nonlinear decision tree (NLDT) framework to extract simple hidden rule sets that interpret the workings of the DNN/RL system. The complexity of the rule sets is controllable by the user. In essence, the inherent bilevel optimization procedure that finds the NLDTs is capable of reducing the complexity of the state-action logic to a minimal and interpretable level. After demonstrating the working principle of the NLDT method on a revised mountain car control problem, this paper applies the methodology to a lane-changing problem involving six critical cars in the front and rear of the left, middle, and right lanes of a pilot car. NLDTs are derived that express simple relationships among 12 decision variables involving the relative distances and velocities of the six critical cars. The derived analytical decision rules are then simplified further using a symbolic analysis tool to provide an English-like interpretation of the lane-change problem. This study scratches the surface of the interpretability issue in modern machine-learning-based tools; the overall approach now deserves further attention and applications to make it more integrated and effective.
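To make the described pipeline concrete, the following is a minimal Python sketch of its first step: rolling out a trained black-box controller to collect state-action pairs and fitting an interpretable surrogate to them. The environment, policy, and feature layout here are hypothetical stand-ins for the paper's DNN/RL system, and an ordinary axis-parallel decision tree from scikit-learn is used in place of the NLDT, whose bilevel optimization procedure is not reproduced.

import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

def collect_state_action_pairs(env, policy, n_episodes=100):
    """Roll out a trained policy, recording every (state, action) pair."""
    states, actions = [], []
    for _ in range(n_episodes):
        state, done = env.reset(), False
        while not done:
            action = policy(state)                  # black-box controller
            states.append(state)
            actions.append(action)
            state, _, done, _ = env.step(action)    # gym-style step convention
    return np.array(states), np.array(actions)

class DummyLaneEnv:
    """Toy stand-in for the lane-change environment: each state has 12
    features (relative distances and velocities of six critical cars),
    and episodes run for a fixed number of steps."""
    def reset(self):
        self.t = 0
        return np.random.uniform(-1.0, 1.0, size=12)
    def step(self, action):
        self.t += 1
        return np.random.uniform(-1.0, 1.0, size=12), 0.0, self.t >= 50, {}

def dummy_policy(state):
    """Placeholder for the trained DNN/RL controller: 0=keep, 1=left, 2=right."""
    return 0 if abs(state[0]) < 0.5 else (1 if state[0] > 0 else 2)

X, y = collect_state_action_pairs(DummyLaneEnv(), dummy_policy)

# A shallow depth cap plays the role of the user-controllable rule
# complexity; export_text yields readable if-then rules for the policy.
surrogate = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(surrogate))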
