Answering natural language questions over a knowledge base is an important and challenging task with a wide range of applications in natural language processing and information retrieval. Several existing knowledge-based question answering systems rely on complex end-to-end neural network approaches that are computationally expensive and slow to train. More importantly, such end-to-end approaches make it difficult to examine how a query is actually processed. In this study, we decompose the question answering problem into a three-step pipeline of entity detection, entity linking, and relation prediction, and solve each component separately. We explore basic neural network and non-neural network methods for entity detection and relation prediction, as well as a few heuristics for entity linking. We also introduce a method to identify ambiguity in the data and show that this ambiguity places an upper bound on the performance of the question answering system. Experiments on the SimpleQuestions benchmark dataset show that a combination of basic LSTMs, GRUs, and non-neural network techniques achieves reasonable performance while providing an opportunity to understand the structure of the question answering problem.
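To make the pipeline decomposition concrete, the sketch below walks a toy question through the three steps. All names, the toy knowledge base, and the keyword heuristics are illustrative assumptions only; in the actual system, entity detection and relation prediction are handled by LSTMs, GRUs, or non-neural classifiers trained on SimpleQuestions, and entity linking uses the heuristics described in the paper.

```python
# Minimal sketch of the three-step pipeline: entity detection -> entity linking
# -> relation prediction. The data and heuristics here are hypothetical and only
# illustrate how the components fit together.

# Toy knowledge base: (subject, relation) -> object
KB = {
    ("barack_obama", "place_of_birth"): "Honolulu",
    ("barack_obama", "profession"): "Politician",
}
ENTITY_ALIASES = {"barack obama": "barack_obama"}
RELATION_KEYWORDS = {"born": "place_of_birth", "profession": "profession"}


def detect_entity(question):
    """Step 1: find the entity mention in the question (here: longest alias match)."""
    q = question.lower()
    matches = [alias for alias in ENTITY_ALIASES if alias in q]
    return max(matches, key=len) if matches else None


def link_entity(mention):
    """Step 2: map the detected mention to a knowledge base node (here: alias lookup)."""
    return ENTITY_ALIASES.get(mention)


def predict_relation(question):
    """Step 3: choose which KB relation the question asks about (here: keyword match)."""
    q = question.lower()
    for keyword, relation in RELATION_KEYWORDS.items():
        if keyword in q:
            return relation
    return None


def answer(question):
    """Run the pipeline and look up the (entity, relation) pair in the KB."""
    mention = detect_entity(question)
    entity = link_entity(mention) if mention else None
    relation = predict_relation(question)
    return KB.get((entity, relation))


print(answer("Where was Barack Obama born?"))  # -> Honolulu
```

Because each stage is a separate function with an inspectable output (the detected mention, the linked entity, the predicted relation), errors can be attributed to a specific component, which is the interpretability advantage the pipeline has over an end-to-end model.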