The success of deep learning in several challenging tasks has been carried over to complex control problems from different domains through Deep Reinforcement Learning (DRL). Although DRL tasks have been extensively formulated and solved as single-objective problems, nearly all real-world RL problems feature two or more conflicting objectives, where the goal is to obtain a high-quality and diverse set of optimal policies covering different objective preferences. Consequently, the development of Multi-Objective Deep Reinforcement Learning (MODRL) algorithms has gained considerable traction in the literature. Evolutionary Algorithms (EAs) have been demonstrated to be scalable alternatives to classical DRL paradigms when the RL task is formulated as an optimization problem; hence, it is reasonable to employ Multi-Objective Evolutionary Algorithms (MOEAs) to handle MODRL tasks. However, several factors constrain progress along this line of research: first, there is no general formulation of MODRL tasks from an optimization perspective; second, performing benchmark assessments of MOEAs on MODRL problems poses several practical challenges. To overcome these limitations, (i) we formulate MODRL tasks as general multi-objective optimization problems and analyze their complex characteristics from an optimization perspective; (ii) we present an end-to-end framework, termed DRLXBench, for generating MODRL benchmark test problems on which MOEAs can run seamlessly; (iii) we propose a test suite comprising 12 MODRL problems with diverse characteristics such as many objectives, degenerate Pareto fronts, and concave and convex optimization problems; and (iv) finally, we present and discuss baseline results on the proposed test problems using seven representative MOEAs.
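For concreteness, the kind of formulation referred to in (i) can be sketched in standard multi-objective RL notation; the symbols used here ($\theta$, $J_i$, $r_i$, $\gamma$, $m$) are illustrative and not necessarily the paper's own:
\[
\max_{\theta}\; \mathbf{J}(\theta) = \big(J_1(\theta), \dots, J_m(\theta)\big),
\qquad
J_i(\theta) = \mathbb{E}_{\pi_{\theta}}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, r_i(s_t, a_t)\right],
\]
where the $m$ return objectives conflict with one another, so the solution concept is a Pareto-optimal set of policy parameters $\theta$ rather than a single optimum.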