Abstract
Endgame databases can be difficult to use in tree search when their sizes remain large even after compression. For the same endgame, we find that compression ratios vary significantly across different encoding schemes. The intuition is that when positions mapped into a contiguous run of segments have similar values, block-based compression libraries such as zlib yield a better compression ratio than when segments contain diversified values. However, finding the optimal encoding scheme by exhaustive enumeration is computationally infeasible for endgame databases with a large number of pieces. We propose a novel deep-learning approach that obtains an encoding scheme whose compression ratio is suitable for practical purposes. Our approach can be applied to chess-like games.
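The block-compression intuition can be illustrated with a minimal sketch. The snippet below is not the paper's method or data; it assumes a hypothetical table of win/draw/loss values (one byte per position) and a hypothetical 4 KB block size, and simply compares zlib's block-by-block compression of the same values under two orderings: one that groups similar values contiguously and one that interleaves them.

```python
import zlib
import random

BLOCK = 4096  # hypothetical block size for random access into the database

def block_compressed_size(data: bytes) -> int:
    """Compress data in independent fixed-size blocks (as a tablebase
    would, so single blocks can be decompressed during search) and
    return the total compressed size."""
    return sum(len(zlib.compress(data[i:i + BLOCK], 9))
               for i in range(0, len(data), BLOCK))

random.seed(0)
# Hypothetical stand-in for an endgame table: one WDL-style value
# (0 = loss, 1 = draw, 2 = win) per position, one million positions.
values = [random.choice([0, 1, 2]) for _ in range(1_000_000)]

# Encoding A: an indexing scheme that happens to place positions with
# similar values next to each other (modeled here by sorting).
grouped = bytes(sorted(values))
# Encoding B: an indexing scheme that interleaves the values.
mixed = bytes(values)

for name, data in (("grouped", grouped), ("mixed", mixed)):
    size = block_compressed_size(data)
    print(f"{name}: {len(data)} -> {size} bytes "
          f"(ratio {len(data) / size:.1f}x)")
```

Running this, the grouped ordering compresses dramatically better than the mixed one, even though both encode exactly the same multiset of values; the search for a good encoding scheme is a search over such orderings.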