Road transportation is among the global grand challenges affecting human lives, health, society, and the economy due to road accidents, traffic congestion, and other transportation deficiencies. Autonomous vehicles (AVs) are set to address major transportation challenges, including safety, efficiency, reliability, sustainability, and personalization. The foremost challenge for AVs is to perceive their environments in real time with the highest possible certainty. Relatedly, connected vehicles (CVs) have been another major driver of innovation in transportation. In this paper, we bring autonomous and connected vehicles together and propose TAAWUN, a novel approach based on the fusion of data from multiple vehicles. The aim is to share information among multiple vehicles about their environments, enrich the information available to each vehicle, and thereby enable better decisions regarding the perception of their environments. TAAWUN shares, among the vehicles, visual data acquired from cameras installed on the individual vehicles, as well as the perceived information about the driving environments. The environment is perceived using deep learning, random forest (RF), and C5.0 classifiers. A key aspect of the TAAWUN approach is that it uses problem-specific feature sets to enhance prediction accuracy in challenging environments, such as those with problematic shadows, extreme sunlight, and mirages. TAAWUN has been evaluated using multiple metrics: accuracy, sensitivity, specificity, and area under the curve (AUC). It performs consistently better than the baseline schemes. Directions for future work to extend the tool are provided. This is the first work where visual information and decision fusion are used in connected autonomous vehicles (CAVs) to enhance environment perception for autonomous driving.