Abstract

Accurately predicting the travel time between two destinations is an essential aspect of traffic monitoring and of facilitating ridesharing services. It is, however, a highly complex task involving a multitude of variables that cannot be resolved straightforwardly. Previous studies on travel time prediction have focused on estimating the duration of individual road segments or sub-paths and then summing the time required for each sub-path. While this method provides some insight, it can yield inaccurate or imprecise estimates. To address this issue, this research uses machine learning techniques to predict trip duration in ride-sharing networks based on the Uber Movement dataset. The proposed system is implemented in Python and computes the distance between the pickup and drop-off locations. The study also presents a descriptive analysis of the factors that affect travel time, examining the impact of traffic congestion, weather conditions, and road construction. To improve the accuracy of trip duration prediction, the suggested approach employs Huber regression, a robust regression model. Because Huber regression is robust to outliers, it is well suited to the Uber Movement dataset, which may contain unexpected and extreme values. The dataset is evaluated with k-fold cross-validation, which splits the data into k subsets; each subset is used once for validation while the remaining subsets are used for training. This approach nevertheless presents several challenges: tracking variables is difficult, the diverse data types in the dataset require extensive transformation, and unlabeled places complicate the segmentation of geographical data. In addition, outliers in the dataset can cause substantial data differences and reduce the model's accuracy, and data normalization is slowed by the need to read duplicated information. Mitigating these issues will require further study to improve the model's design and to address the challenges of working with the Uber Movement dataset.
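
To make the described pipeline concrete, the following is a minimal sketch in Python, assuming scikit-learn is available. The haversine distance feature, column names, and synthetic trip table are illustrative assumptions, not drawn from the paper; the Huber regression and k-fold cross-validation steps follow the setup described in the abstract.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import HuberRegressor
from sklearn.model_selection import KFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler


def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between pickup and drop-off points."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = np.sin(dlat / 2) ** 2 + np.cos(lat1) * np.cos(lat2) * np.sin(dlon / 2) ** 2
    return 2 * 6371.0 * np.arcsin(np.sqrt(a))


# Synthetic stand-in for the prepared Uber Movement trip table (illustrative only).
rng = np.random.default_rng(0)
n = 500
trips = pd.DataFrame({
    "pickup_lat": rng.uniform(40.70, 40.80, n),
    "pickup_lon": rng.uniform(-74.02, -73.93, n),
    "dropoff_lat": rng.uniform(40.70, 40.80, n),
    "dropoff_lon": rng.uniform(-74.02, -73.93, n),
})
distance_km = haversine_km(trips["pickup_lat"], trips["pickup_lon"],
                           trips["dropoff_lat"], trips["dropoff_lon"])
# Fake durations: roughly proportional to distance, with noise and a few outliers.
trips["duration_min"] = 4.0 + 3.5 * distance_km + rng.normal(0.0, 1.5, n)
trips.loc[rng.choice(n, 10, replace=False), "duration_min"] += 40.0

X = distance_km.to_frame("distance_km")
y = trips["duration_min"]

# Huber regression limits the influence of outliers; k-fold cross-validation
# rotates each of the k subsets through the validation role while the
# remaining subsets train the model.
model = make_pipeline(StandardScaler(), HuberRegressor(epsilon=1.35))
cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(model, X, y, cv=cv, scoring="neg_mean_absolute_error")
print("Mean absolute error per fold (minutes):", np.round(-scores, 2))
```

StandardScaler is included only because Huber regression is sensitive to feature scale; on the real dataset the feature set would also incorporate the traffic congestion, weather, and road construction factors discussed above.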
