Index selection aims to find the optimal index configuration for a given workload. In recent years, the database community has attempted to develop intelligent index advisors to replace database administrators (DBAs). Although recent research shows that applying reinforcement learning (RL) to index selection is a significant and promising direction, several problems remain in applying RL agents to index selection, e.g., long training time, unstable performance, and limited state representation. To address these issues, we propose ACDRL, a constraint-guided dynamic RL method for index selection. Specifically, our method makes three major contributions: (i) to balance space consumption against index configuration quality, we design a dynamic index state structure that avoids missing effective indexes under a limited space budget; (ii) to accelerate agent training, we design a constraint-guided method that substantially reduces useless exploration; (iii) to keep the RL agent stable, we propose a trade-off strategy for value function design instead of using a Q-network naively. We evaluate ACDRL on two open-source benchmarks, TPC-H and TPC-DS. The experimental results show that our method outperforms state-of-the-art methods and can find optimal index configurations with limited time consumption.
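To make contribution (ii) concrete, the sketch below illustrates one common way constraint guidance can be realized in an RL index advisor: masking out actions (candidate indexes) that violate hard constraints before either the greedy or the exploratory branch acts. This is a minimal illustration assuming a tabular, bandit-style agent; all names (`CandidateIndex`, `violates_constraints`, `IndexSelectionAgent`) and the specific constraints are hypothetical, not taken from the paper.

```python
import random
from dataclasses import dataclass

@dataclass(frozen=True)
class CandidateIndex:
    table: str
    columns: tuple  # indexed column names, in order

def violates_constraints(candidate, workload_columns, chosen, budget_left):
    """Prune actions a constraint-guided agent should never explore.
    These example constraints are illustrative assumptions."""
    if budget_left <= 0:
        return True                       # storage budget exhausted
    if candidate in chosen:
        return True                       # index already built
    if not set(candidate.columns) & workload_columns:
        return True                       # no query references these columns
    return False

class IndexSelectionAgent:
    def __init__(self, candidates, epsilon=0.1):
        self.candidates = candidates
        self.epsilon = epsilon
        self.q = {c: 0.0 for c in candidates}   # tabular value estimates

    def select(self, workload_columns, chosen, budget_left):
        # Constraint guidance: mask invalid actions *before* exploration,
        # so neither the greedy nor the random branch wastes a step.
        valid = [c for c in self.candidates
                 if not violates_constraints(c, workload_columns,
                                             chosen, budget_left)]
        if not valid:
            return None
        if random.random() < self.epsilon:
            return random.choice(valid)           # explore within the mask
        return max(valid, key=self.q.get)         # exploit within the mask

    def update(self, action, reward, lr=0.5):
        # Simple bandit-style update; the paper's trade-off value function
        # design (contribution iii) would replace this placeholder.
        self.q[action] += lr * (reward - self.q[action])
```

Because masking removes invalid actions from both the exploration and exploitation branches, the agent never spends a training step on an index it could not build or that no query could use, which is the intuition behind the reduced exploration cost the abstract describes.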