In contemporary development, autonomous vehicles (AVs) have emerged as a potential solution for sustainable and smart transportation, fulfilling increasing mobility demands while alleviating negative impacts on society, the economy, and the environment. AVs depend entirely on machines to perform driving tasks; therefore, their quality and safety are critical concerns for driving users. AVs use advanced driver assistance systems (ADASs) that rely heavily on sensor and camera data, which are processed to execute vehicle control functions for autonomous driving. In addition, AVs include a voice communication system (VCS) that allows driving users to accomplish various hands-free functions. Functions such as navigation, climate control, media and entertainment, communication, vehicle settings, vehicle status, and emergency assistance have been successfully incorporated into AVs through VCSs, and several researchers have also implemented vehicle control functions using voice commands. If an AV loses control because of a malfunction or fault in its onboard computer, sensors, or other associated modules, driving users can issue voice notes to perform driving tasks such as changing speed or lanes, braking, and directing the car to a safe state. Driving users also need manual control over the AV in some situations, such as changing lanes or taking an exit at a divergence, and these tasks can likewise be performed with voice commands through the VCS. Identifying the exact voice note used to instruct the different actuators in such risk situations is therefore crucial, and VCSs can greatly improve safety in critical situations where manual intervention is necessary. The functionality and quality of AVs can be significantly increased by integrating a VCS with an ADAS to develop an interactive ADAS in which driver functions are controlled through voice features. Natural language processing is used to extract these features and determine the user's requirements; the extracted features then control vehicle functions and support driving activities. Existing techniques consume substantial computation when predicting user commands, which reduces the responsiveness of AV functions. This research issue is addressed by applying the variation continuous input recognition model. The proposed approach uses a linear training process that resolves listening and time-constraint problems as well as uncertain-response issues. The model categorizes inputs into trainable and non-trainable data according to data readiness and listening span. Non-distinguishable data are then validated by decomposing them into linear inputs used to improve the response of the AV. Effective use of the training parameters and the data decomposition process thus minimizes uncertainty and increases the response rate. The proposed model significantly improves the exact prediction of users' voice notes and the computational efficiency, enhancing the quality and reliability of the VCS used to perform hands-free and vehicle control functions. The reliability of these functions ultimately improves the safety of AV driving users and other road users.
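To make the input-handling step concrete, the sketch below illustrates one plausible reading of the categorization and decomposition described above: a voice note is labeled trainable or non-trainable from its data readiness and listening span, and non-distinguishable inputs are split into linear segments before validation. The thresholds, field names, and helper functions (READINESS_THRESHOLD, LISTENING_SPAN_LIMIT, categorize, decompose_linear) are illustrative assumptions, not the paper's actual implementation, which the abstract does not specify.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical thresholds; the paper's definitions of "data readiness"
# and "listening span" are not given in the abstract.
READINESS_THRESHOLD = 0.8    # assumed minimum readiness score for trainable data
LISTENING_SPAN_LIMIT = 2.5   # assumed maximum listening span (seconds)
SEGMENT_LENGTH = 0.5         # assumed length of each linear segment (seconds)


@dataclass
class VoiceNote:
    """A captured voice input with illustrative metadata."""
    samples: List[float]      # raw audio feature values (placeholder)
    readiness: float          # data-readiness score in [0, 1]
    listening_span: float     # duration the system listened, in seconds


def categorize(note: VoiceNote) -> str:
    """Label an input as trainable or non-trainable based on
    data readiness and listening span."""
    if note.readiness >= READINESS_THRESHOLD and note.listening_span <= LISTENING_SPAN_LIMIT:
        return "trainable"
    return "non-trainable"


def decompose_linear(note: VoiceNote) -> List[List[float]]:
    """Split a non-distinguishable input into fixed-length linear segments
    so each segment can be validated and matched to a control action."""
    step = max(1, int(len(note.samples) * SEGMENT_LENGTH / max(note.listening_span, 1e-6)))
    return [note.samples[i:i + step] for i in range(0, len(note.samples), step)]


if __name__ == "__main__":
    note = VoiceNote(samples=[0.1, 0.4, 0.3, 0.9, 0.2, 0.7],
                     readiness=0.6, listening_span=3.0)
    label = categorize(note)
    print("category:", label)
    if label == "non-trainable":
        # Decompose into linear inputs before validation, as described above.
        print("linear segments:", decompose_linear(note))
```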