Electric vehicle charging station (EVCS) infrastructure is the backbone of transportation electrification; however, EVCSs have numerous vulnerabilities in software, hardware, the supply chain, and incumbent legacy technologies such as networking, communication, and control. These standalone or networked EVCSs expose large attack surfaces to local or state-sponsored adversaries. State-of-the-art approaches are not agile and intelligent enough to defend against and mitigate advanced persistent threats (APTs). We propose data-driven, model-free digital clones based on multiple independent-agent deep reinforcement learning (IADRL) that use the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm to efficiently learn control policies that mitigate cyberattacks on EVCS controllers. The proposed TD3-trained digital clones are also compared against benchmark Deep Deterministic Policy Gradient (DDPG) agents. The attack model considers APTs designed to corrupt the duty cycles of the EVCS controllers through Type-I low-frequency attacks and Type-II constant attacks. The proposed model restores EVCS operation when any or all controllers are under attack by correcting the control signals generated by the legacy controllers. Our experiments verify that the TD3-based clones learn superior control policies and actions compared to the DDPG-based clones. Moreover, the TD3-based controller clones address the incremental bias, suboptimal policies, and hyperparameter sensitivity of the benchmark DDPG-based digital clones, enabling efficient mitigation of the impact of cyberattacks on EVCS controllers.
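To make the attack model and the clone's corrective role concrete, the sketch below assumes an additive perturbation of a controller's duty cycle for the Type-I (low-frequency) and Type-II (constant) attacks, and an additive correction term produced by the learned agent. The function names, amplitude, frequency, and the idealized correction in the example are illustrative assumptions, not values from the paper.

```python
import numpy as np

def attacked_duty_cycle(d_nominal: float, t: float, attack: str,
                        amplitude: float = 0.2, freq_hz: float = 0.5) -> float:
    """Return the duty cycle after a Type-I (low-frequency sinusoidal)
    or Type-II (constant) attack is injected into the legacy control signal.
    Amplitude, frequency, and the additive form are assumptions of this sketch."""
    if attack == "type1":        # low-frequency oscillatory manipulation
        d = d_nominal + amplitude * np.sin(2 * np.pi * freq_hz * t)
    elif attack == "type2":      # constant-offset manipulation
        d = d_nominal + amplitude
    else:                        # no attack
        d = d_nominal
    return float(np.clip(d, 0.0, 1.0))

def corrected_duty_cycle(d_attacked: float, agent_correction: float) -> float:
    """Apply the corrective action produced by the digital clone so the
    effective duty cycle is restored toward its nominal value."""
    return float(np.clip(d_attacked + agent_correction, 0.0, 1.0))

# Example: a Type-I attack perturbs a nominal 50% duty cycle; here the clone's
# correction is idealized as exactly cancelling the injected disturbance.
t, d_nom = 0.4, 0.5
d_atk = attacked_duty_cycle(d_nom, t, "type1")
d_fix = corrected_duty_cycle(d_atk, d_nom - d_atk)
print(d_nom, round(d_atk, 3), round(d_fix, 3))
```

In the proposed framework the correction term would be the action output of the TD3-trained agent rather than the idealized cancellation shown here.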