Abstract
Join order directly affects database query performance and computational overhead. Deep reinforcement learning (DRL) can explore efficient query plans without exhaustively enumerating the search space. However, the deep Q-network (DQN) suffers from overestimation of action values in query optimization, which limits query performance. In addition, ε-greedy exploration is inefficient and does not enable deep exploration. Accordingly, in this paper we propose a dynamic double-DQN order selection method (DDOS) for join order optimization. The method first models the join query as a Markov decision process (MDP), then mitigates the DRL model's estimation error in query joining by weighting the estimates of the DQN and double DQN (DDQN) networks, thereby improving the quality of the generated query plans. Actions are selected using a dynamic progressive search strategy, which increases the randomness and depth of exploration and accumulates high information gain. The performance of the proposed method is compared with dynamic programming, heuristic algorithms, and DRL-based optimization methods on the Join Order Benchmark (JOB) query set. The experimental results show that the proposed method effectively improves query performance, exhibits favorable generalization ability and robustness, and outperforms the other baselines in multi-join query applications.
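To make the overestimation issue concrete, the sketch below contrasts the standard DQN target (max over target-network values, which tends to overestimate) with the double-DQN target (online network selects the action, target network evaluates it), and blends them with a mixing weight. This is a minimal tabular illustration under assumed details: the arrays `q_online`/`q_target` stand in for the two networks, and the weight `beta` is a hypothetical mixing scheme, not necessarily the exact weighting used in the paper.

```python
import numpy as np

def weighted_td_target(q_online, q_target, s_next, reward, gamma, beta, done):
    """Blend the DQN and double-DQN TD targets.

    q_online, q_target: arrays of shape (n_states, n_actions), tabular
    stand-ins for the online and target networks.
    beta: hypothetical mixing weight in [0, 1] (1.0 -> pure DQN target,
    0.0 -> pure double-DQN target); the paper's exact scheme may differ.
    """
    if done:
        return reward
    # DQN target: the target network both selects and evaluates the action,
    # so the max operator tends to overestimate noisy Q-values.
    dqn_target = reward + gamma * q_target[s_next].max()
    # Double-DQN target: the online network selects the action, the target
    # network evaluates it, decoupling selection from evaluation.
    a_star = int(np.argmax(q_online[s_next]))
    ddqn_target = reward + gamma * q_target[s_next, a_star]
    return beta * dqn_target + (1.0 - beta) * ddqn_target
```

With `beta` between the two extremes, the blended target trades off the optimism of DQN against the (sometimes pessimistic) correction of double DQN.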