Abstract

Chess game position analysis is important for improving one's game. One common method requires entering logged moves into a chess engine, which is cumbersome and error prone, so a quick and effective method for analysis is strongly desired. Our hypothesis was that a faster chess game entry method could be built by combining vision and machine learning techniques so that positions can be analyzed directly in a chess engine. To test the hypothesis, we developed the Augmented Reality Chess Analyzer (ARChessAnalyzer), a complete pipeline from live image capture of a physical chess game, to board and piece recognition, to move analysis, and finally to an augmented reality overlay of the position and best move on the physical board. The chess position predictor is analogous to a scene predictor: it is an ensemble of traditional image processing and vision techniques and an image classifier for chessboard recognition, combined with a Convolutional Neural Network (CNN) for chess piece recognition. ARChessAnalyzer was used as the input mechanism and compared against manual entry into StockFish, which was also the chess engine used in the app. The results validate the hypothesis: for both sparse and dense chessboard populations, ARChessAnalyzer was faster than manual entry with p-value < 0.005. This app and the technologies underneath it will help chess learners improve their game and, we hope, will be widely adopted in chess clubs.
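For the recognized position to be analyzed by an engine such as StockFish, the per-square output of the piece classifier must be serialized into a machine-readable position. A minimal sketch of that step, assuming the classifier emits an 8x8 grid of piece letters (the function name and grid encoding are hypothetical, not from the paper; standard FEN letter conventions are used: uppercase for white, lowercase for black, "." for an empty square):

```python
def grid_to_fen_board(grid):
    """Convert an 8x8 grid of per-square piece labels into the board
    field of a FEN string. grid[0] is rank 8 (the top of the board);
    each entry is a piece letter ('K', 'q', ...) or '.' for empty."""
    ranks = []
    for row in grid:
        fen_row, empties = "", 0
        for square in row:
            if square == ".":
                empties += 1          # count a run of empty squares
            else:
                if empties:
                    fen_row += str(empties)  # flush the empty run
                    empties = 0
                fen_row += square
        if empties:
            fen_row += str(empties)
        ranks.append(fen_row)
    return "/".join(ranks)            # ranks are separated by '/'

# Example: the standard starting position.
start = [
    list("rnbqkbnr"),
    list("pppppppp"),
    list("........"),
    list("........"),
    list("........"),
    list("........"),
    list("PPPPPPPP"),
    list("RNBQKBNR"),
]
print(grid_to_fen_board(start))
# -> rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR
```

A full FEN string additionally carries side to move, castling rights, and move counters; the board field above is the part the vision pipeline can recover from a single image.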
