Deep Reinforcement Learning (DRL) is an essential subfield of Artificial Intelligence (AI) in which agents interact with environments to learn policies for solving complex tasks. In recent years, DRL has achieved remarkable breakthroughs in a variety of tasks, including video games, robotic control, quantitative trading, and autonomous driving. Despite these accomplishments, security- and privacy-related issues still hinder the deployment of trustworthy DRL applications. For example, by manipulating the environment, an attacker can influence an agent's actions and mislead it into behaving abnormally. Additionally, an attacker can infer private training data and environmental information by maliciously interacting with DRL models, causing a privacy breach. In this survey, we systematically investigate recent progress on security and privacy issues in the context of DRL. First, we present a holistic review of security-related attacks within DRL systems from the perspectives of single-agent and multi-agent systems, and we also review privacy-related attacks. Second, we review and classify defense methods for addressing security-related challenges, including robust learning, anomaly detection, and game-theoretic approaches. Third, we review and classify privacy-preserving technologies, including encryption, differential privacy, and policy confusion. We conclude the survey by discussing open issues and possible directions for future research in this field.