Commonsense reasoning refers to the ability to make inferences, draw conclusions, and understand the world based on general knowledge and common sense. Whether Large Language Models (LLMs) possess commonsense reasoning ability remains a topic of debate among researchers. When confronted with a multiple-choice commonsense reasoning task, humans typically rely on their prior knowledge and common sense to formulate a preliminary answer in their minds; they then compare this preliminary answer against the provided choices and select the most likely one as the final answer. We introduce Aggregated Semantic Matching Retrieval (ASMR) as a solution for multiple-choice commonsense reasoning tasks. To mimic how humans solve such tasks, we leverage LLMs to first generate preliminary answers through open-ended questioning, which enhances the retrieval of the most relevant answer from the given choices. Our experiments demonstrate the effectiveness of ASMR on popular commonsense reasoning benchmarks, including CSQA, SIQA, and ARC (Easy and Challenge). ASMR achieves state-of-the-art (SOTA) performance, with a peak of +15.3% accuracy improvement over the previous SOTA on the SIQA dataset.