Distinguishing spontaneous from posed smiles in videos is a significant challenge in the pattern-classification literature. Researchers have developed both feature-based and deep learning-based solutions to this problem. In general, deep learning outperforms feature-based methods; however, certain elements of feature-based methods could still improve deep learning. For example, previous research has shown that Duchenne Marker (or D-Marker) features of the face play a vital role in spontaneous smiles and can therefore be exploited to improve deep learning performance. In this study, we propose a deep learning solution that leverages D-Marker features to further improve performance. Our multi-task learning framework, named DeepMarkerNet, integrates a transformer network with facial D-Markers for accurate smile classification. Unlike past methods, our approach simultaneously predicts the class of the smile and the associated facial D-Markers using two different feed-forward neural networks, creating a symbiotic relationship that enriches the learning process. The novelty of our approach lies in using the pre-calculated D-Markers as supervisory signals (rather than as input, as in previous works) and harmonizing the two loss functions through a weighted average. In this way, training benefits from the D-Markers, while inference does not require computing them. We validate our model's effectiveness on four well-known smile datasets, UvA-NEMO, BBC, MMI facial expression, and SPOS, and achieve state-of-the-art results.
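The weighted-average harmonization of the two heads' losses described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the weighting hyperparameter `alpha`, the function names, and the tensor shapes are all assumptions.

```python
import numpy as np

def cross_entropy(logits, label):
    # Softmax cross-entropy for the smile-class head (spontaneous vs posed).
    z = logits - logits.max()          # stabilize the exponentials
    p = np.exp(z) / np.exp(z).sum()
    return -np.log(p[label])

def multitask_loss(class_logits, smile_label,
                   dmarker_pred, dmarker_target, alpha=0.5):
    # Weighted average of the classification loss and the D-Marker
    # regression loss; alpha is a hypothetical weighting hyperparameter.
    cls_loss = cross_entropy(class_logits, smile_label)
    reg_loss = np.mean((dmarker_pred - dmarker_target) ** 2)
    return alpha * cls_loss + (1.0 - alpha) * reg_loss

# Example: uninformative class logits and unit regression error.
loss = multitask_loss(np.array([0.0, 0.0]), 0,
                      np.zeros(3), np.ones(3), alpha=0.5)
```

Because the D-Marker targets appear only in the loss, a trained model can classify a smile from the class head alone, which is what allows inference without computing D-Markers.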