Statistical learning theory (SLT) serves as the foundational bedrock of machine learning (ML), which in turn represents the backbone of artificial intelligence, ushering in innovative solutions for real-world challenges. Its origins lie at the intersection of statistics and computer science, from which it has evolved into a distinct scientific discipline. Machine learning comprises four fundamental branches: supervised learning, unsupervised learning, semi-supervised learning, and reinforcement learning. Among these, supervised learning takes center stage and is divided into two fundamental forms: classification and regression. Regression is tailored for continuous outcomes, while classification specializes in categorical outcomes, with the overarching goal of supervised learning being to build models capable of predicting outcomes, such as class labels, from input features. This review endeavors to furnish a concise yet insightful reference manual on machine learning and its grounding in statistical learning theory, elucidating their symbiotic relationship. It demystifies the foundational concepts of classification, shedding light on the overarching principles that govern it. This panoramic view aims to offer a holistic perspective on classification, serving as a valuable resource for researchers, practitioners, and enthusiasts entering the domains of machine learning, artificial intelligence, and statistics, by introducing the concepts, methods, and distinctions that deepen their understanding of classification methods.
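To make the classification-versus-regression distinction concrete, the following is a minimal illustrative sketch, not drawn from any specific method discussed in the review; it assumes scikit-learn's LogisticRegression and LinearRegression and a small toy dataset invented for this example. The same feature matrix is paired once with categorical class labels and once with continuous targets.

```python
# Illustrative sketch (assumption: scikit-learn is available): contrasting
# classification and regression on the same toy feature matrix.
from sklearn.linear_model import LogisticRegression, LinearRegression

# Toy feature matrix: each row is one observation with two input features.
X = [[0.0, 1.0], [1.0, 0.5], [2.0, 2.0], [3.0, 2.5]]

# Classification: the outcome is a categorical class label.
y_class = [0, 0, 1, 1]
clf = LogisticRegression().fit(X, y_class)
print(clf.predict([[1.5, 1.5]]))   # predicted class label (0 or 1)

# Regression: the outcome is a continuous value.
y_reg = [1.2, 2.4, 4.1, 5.0]
reg = LinearRegression().fit(X, y_reg)
print(reg.predict([[1.5, 1.5]]))   # predicted real-valued outcome
```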