Despite ground-breaking advances in robotics, gaming, and other challenging domains, reinforcement learning still faces significant challenges in solving dynamic, open-world problems. Because reinforcement learning algorithms typically perform poorly when exposed to tasks outside their training distribution, lifelong learning algorithms have drawn significant attention. In parallel with work on lifelong learning algorithms, there is a need for challenging environments, properly designed trials, and metrics to measure research progress. In this context, this paper proposes a Deep Asynchronous Autonomous Learning System (DAALS) for training a self-driving car in a partially observable environment, under real-world scenarios, in a continuous state-action space. DAALS employs three algorithms to cover three use cases. To train its agent to learn and update discrete-state policies, DAALS uses the Asynchronous Advantage Actor-Critic (A3C) algorithm. To train its agent for continuous state spaces, DAALS uses the Deep Deterministic Policy Gradient (DDPG) algorithm. To train the agent in a lifelong manner for partially observable environments, DAALS uses a Deep Deterministic Policy Gradient Novel Lifelong Learning Algorithm (DDPGNLLA). The system gives the user the flexibility to train agents for both discrete and continuous state-action spaces. In continuous state-action spaces, the DDPG-based lifelong learning algorithm outperforms previous models by 46.09%. Furthermore, DAALS tends to outperform all previous reinforcement learning algorithms, making our proposed approach a viable real-world solution. Because DAALS has been tested on a number of different environments, it provides insights into how modern Artificial Intelligence (AI) solutions can be generalized, making it a strong candidate for general-domain AI problems.
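
For context, the sketch below illustrates the standard DDPG actor-critic update on which, per the abstract, the continuous-control and lifelong-learning (DDPGNLLA) components build. It is a minimal illustration: the network sizes, hyperparameters, and batch tensors are illustrative assumptions, and it is not the paper's DAALS or DDPGNLLA implementation.

```python
# Minimal sketch of a standard DDPG update step. All shapes, learning
# rates, and constants below are illustrative assumptions, not values
# taken from the paper.
import torch
import torch.nn as nn

OBS_DIM, ACT_DIM, GAMMA, TAU = 8, 2, 0.99, 0.005  # assumed dimensions/constants

def mlp(in_dim, out_dim, out_act=None):
    layers = [nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, out_dim)]
    if out_act is not None:
        layers.append(out_act)
    return nn.Sequential(*layers)

actor = mlp(OBS_DIM, ACT_DIM, nn.Tanh())          # deterministic policy mu(s)
critic = mlp(OBS_DIM + ACT_DIM, 1)                # action-value Q(s, a)
target_actor = mlp(OBS_DIM, ACT_DIM, nn.Tanh())
target_critic = mlp(OBS_DIM + ACT_DIM, 1)
target_actor.load_state_dict(actor.state_dict())
target_critic.load_state_dict(critic.state_dict())
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

def ddpg_update(obs, act, rew, next_obs, done):
    """One DDPG step on a batch sampled from a replay buffer."""
    # Critic: regress Q(s, a) toward the bootstrapped Bellman target
    # r + gamma * (1 - done) * Q'(s', mu'(s')).
    with torch.no_grad():
        next_q = target_critic(torch.cat([next_obs, target_actor(next_obs)], dim=-1))
        target = rew + GAMMA * (1.0 - done) * next_q
    q = critic(torch.cat([obs, act], dim=-1))
    critic_loss = nn.functional.mse_loss(q, target)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()

    # Actor: deterministic policy gradient, ascend Q(s, mu(s)).
    # Only the actor optimizer steps here; stray critic grads are
    # cleared by zero_grad() on the next call.
    actor_loss = -critic(torch.cat([obs, actor(obs)], dim=-1)).mean()
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()

    # Polyak-average the target networks toward the online networks.
    with torch.no_grad():
        for net, tgt in ((actor, target_actor), (critic, target_critic)):
            for p, tp in zip(net.parameters(), tgt.parameters()):
                tp.data.mul_(1 - TAU).add_(TAU * p.data)
```

A lifelong-learning variant in the spirit of DDPGNLLA would presumably wrap this update with a mechanism for retaining performance across tasks (for example rehearsal or regularization), but the abstract does not specify which mechanism the paper uses.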