The input of visual reinforcement learning often contains redundant information, which reduces the agent's decision efficiency and degrades its performance. To address this issue, task-relevant representations of the input are usually learned, in which only task-related information is preserved for decision making. In the literature, auxiliary tasks constructed from reward signals, from an optimal policy, or by extracting controllable elements of the input are commonly adopted to learn such task-relevant representations. However, methods based on reward signals do not work well in sparse-reward environments, the effectiveness of methods using an optimal policy depends heavily on how close the given policy is to optimal, and methods that extract controllable elements ignore uncontrollable yet task-relevant information in the input. To alleviate these problems and learn better task-relevant representations, in this paper we first encourage the encoder to encode the controllable parts of the input by maximizing the conditional mutual information between the representations and the agent's real actions. Then, since reward signals are directly related to the underlying task, they are used to encode more task-related information, regardless of whether that information is controllable. Finally, a temporal coherence constraint is incorporated into the whole framework to reduce task-irrelevant information in the representations. Experiments on the Distracting DeepMind Control Suite and the autonomous driving simulator CARLA show that our proposed approach outperforms several state-of-the-art (SOTA) baselines, demonstrating its effectiveness in improving the agent's decision efficiency and overall performance. Code is available at https://github.com/DMU-XMU/Learning-Task-relevant-Representations-via-Rewards-and-Real-Actions-for-Reinforcement-Learning.git.
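Since the abstract only names the three auxiliary objectives at a high level, the sketch below illustrates how such a combined objective could be assembled in PyTorch. It is a minimal, hypothetical example: all module names (`Encoder`, `AuxiliaryLosses`, `inverse_dynamics`, `reward_head`), architectures, dimensions, and loss weights are assumptions for illustration, not the paper's actual implementation. In particular, an inverse-dynamics action-prediction loss is used here as a common surrogate for maximizing the conditional mutual information between consecutive representations and the agent's real actions, and the temporal coherence term is realized as a simple consistency penalty between consecutive representations.

```python
# Minimal sketch (assumed, not the paper's code) of the three auxiliary losses:
# (1) action prediction as a controllability / mutual-information surrogate,
# (2) reward prediction for task-relevance, (3) temporal coherence.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Encoder(nn.Module):
    """Maps an image observation to a compact representation."""
    def __init__(self, repr_dim=50):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2), nn.ReLU(),
            nn.Conv2d(32, 32, 3, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.fc = nn.LazyLinear(repr_dim)  # infers conv output size lazily

    def forward(self, obs):
        return self.fc(self.conv(obs))


class AuxiliaryLosses(nn.Module):
    def __init__(self, repr_dim=50, action_dim=6):
        super().__init__()
        self.encoder = Encoder(repr_dim)
        # Inverse-dynamics head: predicts the real action from (z_t, z_{t+1});
        # minimizing its error encourages the encoder to keep controllable parts.
        self.inverse_dynamics = nn.Sequential(
            nn.Linear(2 * repr_dim, 128), nn.ReLU(), nn.Linear(128, action_dim))
        # Reward head: predicts the reward from (z_t, a_t), pulling in
        # task-related information whether or not it is controllable.
        self.reward_head = nn.Sequential(
            nn.Linear(repr_dim + action_dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, obs, next_obs, action, reward,
                w_action=1.0, w_reward=1.0, w_coherence=0.1):
        z, z_next = self.encoder(obs), self.encoder(next_obs)
        # 1) Action (controllability) loss.
        pred_action = self.inverse_dynamics(torch.cat([z, z_next], dim=-1))
        action_loss = F.mse_loss(pred_action, action)
        # 2) Reward-prediction loss.
        pred_reward = self.reward_head(torch.cat([z, action], dim=-1))
        reward_loss = F.mse_loss(pred_reward, reward)
        # 3) Temporal coherence: consecutive representations should stay close,
        #    discouraging the encoder from keeping fast-changing distractors.
        coherence_loss = F.mse_loss(z_next, z.detach())
        return (w_action * action_loss + w_reward * reward_loss
                + w_coherence * coherence_loss)


# Usage with dummy data (84x84 RGB observations, 6-dim continuous actions).
if __name__ == "__main__":
    aux = AuxiliaryLosses(action_dim=6)
    obs = torch.randn(8, 3, 84, 84)
    next_obs = torch.randn(8, 3, 84, 84)
    action = torch.randn(8, 6)
    reward = torch.randn(8, 1)
    loss = aux(obs, next_obs, action, reward)
    loss.backward()
```

In practice such an auxiliary objective would be optimized jointly with the RL loss on the shared encoder; the weights `w_action`, `w_reward`, and `w_coherence` are placeholders that would need tuning.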