Machine learning (ML) methods, particularly reinforcement learning (RL), have gained widespread attention for optimizing traffic signal control in intelligent transportation systems. However, existing ML approaches often exhibit limited scalability and adaptability, particularly in large traffic networks. This paper introduces a solution that integrates decentralized graph-based multi-agent reinforcement learning (DGMARL) with a Digital Twin to enhance traffic signal optimization, targeting the reduction of traffic congestion and of the network-wide fuel consumption associated with vehicle stops and stop delays. In this approach, DGMARL agents learn traffic state patterns and make informed traffic signal control decisions. The Digital Twin module supports this process by simulating and replicating the real-time asymmetric traffic behaviors of a complex traffic network. The proposed methodology was evaluated using PTV-Vissim, a traffic simulation software package that also serves as the simulation engine of the Digital Twin. The study focused on the Martin Luther King (MLK) Smart Corridor in Chattanooga, Tennessee, USA, considering both symmetric and asymmetric road layouts and traffic conditions. Comparative analysis against an actuated signal control baseline revealed significant improvements. Experimental results demonstrate a 55.38% reduction in Eco_PI, a performance measure developed in this work that captures the cumulative impact of stops and penalized stop delays on fuel consumption, over a 24-hour scenario. In a PM-peak-hour scenario, the average reduction in Eco_PI reached 38.94%, indicating substantial improvement in traffic flow and fuel savings during high-demand periods. These findings underscore the effectiveness of the integrated DGMARL and Digital Twin approach in optimizing traffic signals, contributing to a more sustainable and efficient traffic management system.
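For orientation, a measure of this kind plausibly aggregates per-vehicle stop counts and penalized stop delays; the following is only an illustrative sketch, not the paper's exact definition, and the symbols $N$, $S_i$, $D_i$, and $K_p$ are introduced here for illustration:

\[
\mathrm{Eco\_PI} \;=\; \sum_{i=1}^{N} \bigl( S_i + K_p \, D_i \bigr),
\]

where $N$ would denote the number of vehicles observed in the network, $S_i$ the number of stops and $D_i$ the penalized stop delay of vehicle $i$, and $K_p$ a weight converting delay into stop-equivalent fuel impact; the precise formulation is given in the body of the paper.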