Drowsy driving is a significant contributor to road traffic accidents, necessitating the development of robust and efficient detection systems. Existing approaches to drowsy-driver detection often rely on a single modality, such as image-based facial analysis or vehicle-based metrics, which may lack precision and be susceptible to environmental variation. Moreover, existing models can be computationally expensive, resulting in delayed responses. To address these limitations, this paper proposes a novel Multimodal Iterative Deep Graph Network Wearable Interface that leverages a rich set of data, including heart rate, respiration, accelerometer movement, location (speed), traffic, and weather. These data samples are processed through a fusion of Recurrent Neural Network variants: a Bidirectional Long Short-Term Memory (BiLSTM) network, which generates augmented features, is fused with a Bidirectional Gated Recurrent Unit (BiGRU), which assists in estimating multidomain feature sets. These features are then used to train an efficient Deep Graph Recurrent Network (DGRN), which facilitates real-time identification of drowsy drivers. The proposed model enhances the precision of drowsy-driver detection by 8.5%, accuracy by 9.5%, recall by 8.3%, and Area Under the Curve (AUC) by 4.9%, while also improving specificity by 5.5% and reducing delay by 10.4%. These gains collectively contribute to a substantial enhancement in road safety, potentially saving lives by preventing accidents caused by drowsy driving. With its real-time processing capability and high accuracy, the proposed system represents a significant advance in drowsy-driver identification, paving the way for safer and smarter transportation systems.
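The bidirectional recurrent fusion described above can be sketched as follows. This is a minimal NumPy illustration of the general idea only, not the paper's actual architecture: the hand-rolled LSTM/GRU cells, the random weights, the hidden size of 8, and the six sensor channels are all illustrative assumptions. It shows a BiLSTM and a BiGRU each producing bidirectional features over a multimodal sensor sequence, concatenated into one fused feature set that a downstream classifier (here, the DGRN) would consume.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_pass(x, h0, Wf, Wi, Wo, Wc):
    # Minimal LSTM over a sequence x: (T, d_in); each weight acts on [h, x_t].
    h, c = h0, np.zeros_like(h0)
    outs = []
    for t in range(x.shape[0]):
        hx = np.concatenate([h, x[t]])
        f = sigmoid(Wf @ hx)                    # forget gate
        i = sigmoid(Wi @ hx)                    # input gate
        o = sigmoid(Wo @ hx)                    # output gate
        c = f * c + i * np.tanh(Wc @ hx)        # cell state update
        h = o * np.tanh(c)
        outs.append(h)
    return np.stack(outs)                       # (T, d_h)

def gru_pass(x, h0, Wz, Wr, Wh):
    # Minimal GRU over a sequence x: (T, d_in).
    h = h0
    outs = []
    for t in range(x.shape[0]):
        hx = np.concatenate([h, x[t]])
        z = sigmoid(Wz @ hx)                    # update gate
        r = sigmoid(Wr @ hx)                    # reset gate
        h_tilde = np.tanh(Wh @ np.concatenate([r * h, x[t]]))
        h = (1 - z) * h + z * h_tilde
        outs.append(h)
    return np.stack(outs)                       # (T, d_h)

def bidirectional(x, d_h, rng, cell, n_weights):
    # Run the cell forward and backward over time, concatenate hidden states.
    d_in = x.shape[1]
    Wf = [rng.standard_normal((d_h, d_h + d_in)) * 0.1 for _ in range(n_weights)]
    Wb = [rng.standard_normal((d_h, d_h + d_in)) * 0.1 for _ in range(n_weights)]
    h0 = np.zeros(d_h)
    fwd = cell(x, h0, *Wf)
    bwd = cell(x[::-1], h0, *Wb)[::-1]
    return np.concatenate([fwd, bwd], axis=1)   # (T, 2*d_h)

rng = np.random.default_rng(0)
# 20 time steps of 6 illustrative sensor channels
# (heart rate, respiration, 3-axis accelerometer, speed).
x = rng.standard_normal((20, 6))
feat_bilstm = bidirectional(x, 8, rng, lstm_pass, 4)   # augmented features
feat_bigru = bidirectional(x, 8, rng, gru_pass, 3)     # multidomain features
fused = np.concatenate([feat_bilstm, feat_bigru], axis=1)  # (20, 32)
```

In the paper's pipeline, `fused` would be the per-timestep feature matrix handed to the DGRN classifier; concatenation is used here as the simplest stand-in for the fusion step.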