Electric vehicles (EVs) have become one of the most critical components of the smart grid with the application of Internet-of-Things (IoT) technologies. Real-time charging control is pivotal to the efficient operation of EVs. However, charging control performance is limited by the uncertainty of the environment. Furthermore, it is challenging to determine a charging control strategy that optimizes multiple objectives simultaneously. In this article, we formulate the EV charging control problem as a Markov decision process (MDP) by defining its state, action, transition function, and reward. We then propose a deep-reinforcement-learning-based approach, charging control deep deterministic policy gradient (CDDPG), to learn the optimal charging control strategy, which satisfies the user's battery energy requirement while minimizing the user's charging expense. We use a long short-term memory (LSTM) network to extract information from past energy prices when determining the current charging control strategy. Moreover, Gaussian noise is added to the output of the actor network to prevent the agent from getting stuck in a nonoptimal strategy. In addition, we address the problem of sparse rewards by using two replay buffers: one stores the rewards obtained during the charging phase, and the other stores the rewards obtained after charging is completed. Simulation results show that the CDDPG-based approach outperforms both the deep-$Q$-learning-based (DQL) and deep-deterministic-policy-gradient-based (DDPG) approaches in satisfying the user's battery energy requirement and reducing the charging cost.
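To make the abstract's two main mechanisms concrete, the sketch below illustrates (i) an LSTM-based actor that encodes the recent energy-price sequence before producing a bounded charging action with Gaussian exploration noise, and (ii) a dual replay buffer that mixes in-charging and post-charging experiences during sampling. This is a minimal PyTorch sketch of the general techniques named in the abstract, not the authors' implementation: all class and function names, network sizes, state dimensions, buffer capacities, and the mixing fraction are illustrative assumptions.

```python
import random
from collections import deque

import torch
import torch.nn as nn

class LSTMActor(nn.Module):
    """Actor that encodes the previous energy-price sequence with an LSTM
    and maps it, together with the current battery/charging state, to a
    charging action in [-1, 1] (illustrative: sign and scale are assumed)."""
    def __init__(self, price_dim=1, hidden_dim=32, state_dim=2):
        super().__init__()
        self.lstm = nn.LSTM(price_dim, hidden_dim, batch_first=True)
        self.head = nn.Sequential(
            nn.Linear(hidden_dim + state_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
            nn.Tanh(),  # bounded charging rate
        )

    def forward(self, price_seq, state):
        # price_seq: (batch, seq_len, price_dim); state: (batch, state_dim)
        _, (h, _) = self.lstm(price_seq)
        return self.head(torch.cat([h[-1], state], dim=1))

def select_action(actor, price_seq, state, noise_std=0.1):
    """Add Gaussian noise to the deterministic actor output so the agent
    does not get stuck in a nonoptimal strategy (noise_std is assumed)."""
    with torch.no_grad():
        action = actor(price_seq, state)
    action = action + noise_std * torch.randn_like(action)
    return action.clamp(-1.0, 1.0)

# Two replay buffers to mitigate sparse rewards: one holds experiences
# collected during the charging phase, the other holds experiences whose
# reward is only available after charging is completed.
charging_buffer = deque(maxlen=100_000)
terminal_buffer = deque(maxlen=10_000)

def sample_batch(batch_size=64, terminal_frac=0.25):
    """Mix samples from both buffers so post-charging rewards appear in
    every minibatch (the 25% mixing fraction is an assumption)."""
    n_term = min(int(batch_size * terminal_frac), len(terminal_buffer))
    batch = random.sample(terminal_buffer, n_term)
    batch += random.sample(charging_buffer,
                           min(batch_size - n_term, len(charging_buffer)))
    return batch
```

Note that the abstract speaks of storing "rewards" in the two buffers; a DDPG-style agent would typically store full transition tuples (state, action, reward, next state), and the buffers here are written to hold whichever experience records the training loop produces.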