In a comprehensive assessment of ChatGPT and Bard across three key indices (Government AI Readiness, Digital Economy and Society, and the UN E-Government Survey), this study examines the two models' accuracy, adaptability, and readability in the context of Digital Governance. ChatGPT achieved a higher accuracy rate of 93.55%, surpassing Bard's 88.57%. The models differed in their individual and mutual error-correction capabilities, most visibly when presented with confirmation queries: Bard adjusted its answers post-confirmation, suggesting a capacity for error correction, whereas ChatGPT showed limited adaptability in the same scenarios. Although their responses to Digital Governance content were largely congruent, both models struggled to interpret complex information, particularly concerning sustainability initiatives. Readability metrics indicated that Bard generally produced more accessible content, while ChatGPT tended toward more complex language. Both models nonetheless showed promising alignment in addressing intricate topics within Digital Governance. These findings underscore the need for policymakers to critically evaluate the adaptability and accuracy of language models such as ChatGPT and Bard before integrating them into digital governance practices. Awareness of their differing performance and error-correction capabilities is essential for responsible implementation and for maximizing the benefits of AI in public decision-making.
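For concreteness, the kind of comparison summarized above (per-model answer accuracy alongside an average readability score) can be sketched in a few lines of Python. This is a minimal illustration, not the study's evaluation pipeline: the sample responses, the correctness labels, and the choice of the Flesch Reading Ease metric via the `textstat` package are all assumptions made for the example.

```python
# A minimal sketch of a per-model accuracy and readability comparison.
# All response texts and correctness labels below are invented
# placeholders, not data from the study.
import textstat  # pip install textstat

# Each entry: (graded_correct, response_text)
responses = {
    "ChatGPT": [
        (True, "The index aggregates indicators of government AI readiness."),
        (True, "E-government maturity is assessed across the survey's pillars."),
        (False, "Hypothetical incorrect answer, included to illustrate grading."),
    ],
    "Bard": [
        (True, "Member states are ranked on digital service delivery."),
        (False, "Hypothetical incorrect answer, included to illustrate grading."),
        (True, "Readiness scores combine data and infrastructure measures."),
    ],
}

for model, graded in responses.items():
    # Accuracy: share of responses graded as correct.
    acc = sum(ok for ok, _ in graded) / len(graded)
    # Flesch Reading Ease: higher scores indicate more accessible prose.
    fre = sum(textstat.flesch_reading_ease(text) for _, text in graded) / len(graded)
    print(f"{model}: accuracy = {acc:.2%}, mean Flesch Reading Ease = {fre:.1f}")
```

Higher Reading Ease scores correspond to more accessible prose, which matches the direction of the Bard versus ChatGPT readability contrast described above; grade-level metrics such as Flesch-Kincaid could be reported alongside it in the same way.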