<p>Autonomous vehicles have been an active research area since the advent of machine learning and deep learning algorithms. Computer vision and deep learning techniques have simplified the continuous monitoring and decision-making operations of autonomous vehicles. The navigation system is supported by a visual system: sensors and collectors process input in the form of images or videos, and the navigation system makes decisions that protect the safety of drivers and passers-by. This research article presents a model for obstacle detection, lane detection, and determining how the vehicle should act in an autonomous driving situation; its behaviour should resemble human driving and ensure maximum safety for all stakeholders. The architecture defines a unified neural network that detects lanes, objects, and obstacles and advises the driving speed, since these targets are the predominant areas of focus for autonomous driving. Because images or videos must be captured in real time and processed swiftly for decision making, a concept of context tensors is introduced in the decoders to discriminate between tasks according to their priority. Every task is associated with the other tasks and with the decision-making process, so the architecture continues to learn over time. The results show that the proposed method improves multitask networks in accuracy and decision-making capability while reducing computational time. Performance is evaluated on the Berkeley DeepDrive (BDD) dataset, which is considered a challenging benchmark.</p>
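<p>To make the shared-encoder, multi-decoder idea concrete, the following minimal PyTorch sketch shows one way such a unified multi-task network could be organised: a shared encoder feeds task-specific decoders, and each decoder is gated by a learned context tensor that weights its task relative to the others. The module names, channel sizes, and the three heads (lanes, objects/obstacles, speed advisory) are illustrative assumptions, not the exact architecture reported in the paper.</p>
<pre><code># Minimal sketch (PyTorch assumed): shared encoder + context-gated decoders.
# All names, channel sizes, and head definitions are illustrative assumptions.
import torch
import torch.nn as nn


class SharedEncoder(nn.Module):
    def __init__(self, out_ch=64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_ch, 3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.features(x)


class ContextDecoder(nn.Module):
    """Decoder whose shared features are modulated by a learned context tensor."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        # Per-channel context tensor acting as a task-priority gate.
        self.context = nn.Parameter(torch.ones(1, in_ch, 1, 1))
        self.head = nn.Conv2d(in_ch, out_ch, 1)

    def forward(self, shared):
        return self.head(shared * torch.sigmoid(self.context))


class MultiTaskNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = SharedEncoder(64)
        self.lane_head = ContextDecoder(64, 2)     # lane / no-lane mask
        self.object_head = ContextDecoder(64, 10)  # object and obstacle classes
        self.speed_head = nn.Sequential(           # scalar speed advisory
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1)
        )

    def forward(self, x):
        shared = self.encoder(x)
        return {
            "lanes": self.lane_head(shared),
            "objects": self.object_head(shared),
            "speed": self.speed_head(shared),
        }


if __name__ == "__main__":
    net = MultiTaskNet()
    outputs = net(torch.randn(1, 3, 256, 256))
    print({k: tuple(v.shape) for k, v in outputs.items()})
</code></pre>
<p>In this sketch the context tensors are ordinary learnable parameters, so training on a joint multi-task loss would adjust each gate and thereby shift emphasis between the detection heads and the speed-advisory head; how the actual priorities are derived is left to the body of the paper.</p>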